Abstract
This paper proposes a reinforcement learning system that uses fuzzy ART neural networks to segment the state space. A major problem in reinforcement learning with a real robot is the large number of trials required, so an efficient state-space segmentation method is necessary to improve learning quality and reduce learning time. By constructing the state space incrementally with fuzzy ART neural networks, we economize software resources and shorten learning time. Whenever the fuzzy ART network encounters a new situation, it adds a new category unit to the state space. We propose methods for adding a new category unit that inherits the state value and the policy from a similar existing unit. The system is evaluated in simulations of a two-link robot and in experiments with a multi-link mobile robot. The results show that the state space becomes smaller and the learning time decreases.
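The mechanism described above, adding a fuzzy ART category unit when a new situation is encountered and letting it inherit the state value and policy of the most similar existing unit, can be sketched as follows. This is a minimal illustration under standard fuzzy ART conventions (complement coding, choice function, vigilance test with parameters rho, alpha, beta); the class name, parameter values, and per-category Q-vector layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class FuzzyARTStateSpace:
    """Incremental state-space construction with fuzzy ART (illustrative sketch)."""

    def __init__(self, dim, n_actions, rho=0.8, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta  # vigilance, choice, learning rate
        self.n_actions = n_actions
        self.weights = []   # one complement-coded template per category unit
        self.q_values = []  # one Q-vector (value per action) per category unit

    def _code(self, x):
        # Complement coding keeps |I| constant and limits template erosion.
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def categorize(self, x):
        """Return the category index for observation x, creating a unit if needed."""
        I = self._code(x)
        if self.weights:
            # Choice function: T_j = |I ^ w_j| / (alpha + |w_j|), ^ = fuzzy AND (min).
            matches = [np.minimum(I, w) for w in self.weights]
            T = [m.sum() / (self.alpha + w.sum())
                 for m, w in zip(matches, self.weights)]
            for j in np.argsort(T)[::-1]:
                # Vigilance test: |I ^ w_j| / |I| >= rho means resonance.
                if matches[j].sum() / I.sum() >= self.rho:
                    self.weights[j] = (self.beta * matches[j]
                                       + (1.0 - self.beta) * self.weights[j])
                    return int(j)
        # No category resonated: add a new unit that inherits the Q-values
        # (state value and policy) of the most similar existing unit.
        self.weights.append(I.copy())
        if self.q_values:
            sims = [np.minimum(I, w).sum() / I.sum() for w in self.weights[:-1]]
            self.q_values.append(self.q_values[int(np.argmax(sims))].copy())
        else:
            self.q_values.append(np.zeros(self.n_actions))
        return len(self.weights) - 1
```

In use, nearby observations resolve to the same category unit, while a sufficiently novel observation fails the vigilance test and spawns a new unit whose Q-vector starts from the nearest neighbor's values rather than from zero, which is what allows the smaller state space and shorter learning time reported above.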