Many machine learning methods have been proposed to learn the techniques of specialists. When no training examples are available, a machine must learn by trial and error. Reinforcement learning is a powerful machine learning framework that can learn without supplying training examples to the learning unit. However, plain reinforcement learning cannot cope with large environments, because the number of if-then rules, each defined as a combination of one environment state and one action, becomes huge. In a previous paper, we proposed a new reinforcement learning method with fuzzy environment evaluation, called FEERL (Fuzzy Environment Evaluation Reinforcement Learning). FEERL consists of a fuzzy evaluator, an environment simulator, and a search module. It was applied to chess, and its effectiveness was confirmed. In this paper, we apply FEERL to the LightsOut game, which has no opponent, as an example of a huge environment, and show that FEERL avoids detour actions during search and thus obtains a proper solution.
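To make the state-space explosion concrete, the following is a minimal sketch of the standard LightsOut rules. The 5x5 board size and the toggle rule (pressing a cell flips it and its orthogonal neighbours) are assumptions from the conventional game, not details given in the abstract; counting one if-then rule per (state, action) pair shows why exhaustive rule tables are infeasible.

```python
# Minimal LightsOut sketch (assumed 5x5 board with the conventional
# rule: pressing a cell toggles it and its orthogonal neighbours).
N = 5  # assumed board side length

def press(state, row, col):
    """Return the board after pressing (row, col).

    `state` is a frozenset of lit (row, col) cells."""
    lit = set(state)
    for r, c in [(row, col), (row - 1, col), (row + 1, col),
                 (row, col - 1), (row, col + 1)]:
        if 0 <= r < N and 0 <= c < N:
            # Toggle membership: lit cells go dark and vice versa.
            lit.symmetric_difference_update({(r, c)})
    return frozenset(lit)

# Each cell is on or off, so the state space has 2^(N*N) boards,
# and a flat (state, action) rule table would need one entry per
# board-and-press combination.
num_states = 2 ** (N * N)        # 33,554,432 distinct boards
num_rules = num_states * N * N   # one rule per (state, press) pair
```

Pressing the same cell twice returns the board to its previous state, so any search that revisits states wastes moves, which is the kind of detour the paper's method aims to avoid.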