2009 Volume 129 Issue 6 Pages 1115-1122
In recent years, research on autonomous robots operating in real environments has advanced, and avoiding moving obstacles is one of the most important tasks for such robots. Reinforcement learning is a typical method for action acquisition in autonomous mobile robots performing obstacle avoidance. However, reinforcement learning is known to suffer from various problems in unknown environments. To address these problems, we propose predictive reinforcement learning for moving obstacle avoidance. In predictive reinforcement learning, rules are not defined as state-action pairs as in conventional reinforcement learning; instead, a rule is defined as the transition between states caused by a robot action at each step. We expect these rules to enable robots to adapt to unknown environments, because they are independent of the particular environment in which the moving obstacles exist. A robot equipped with these rules predicts the next state, reinforces its rules by comparing observed states with predicted ones, foresees collisions with obstacles, and then selects safer actions. In this paper, we verify the efficiency of the proposed method in several simulations. First, the robot is trained in a learning environment containing moving obstacles. We then experiment to verify its ability to adapt to unknown environments. As a result, the robot acquires moving obstacle avoidance actions.
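The abstract's core mechanism (rules as state transitions, reinforcement by comparing predicted and observed states, and selection of actions whose predicted outcome is safe) can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation; the class name, the confidence-count rule table, and the `is_collision` predicate are assumptions for the sake of the example.

```python
import random

class PredictiveLearner:
    """Hypothetical sketch: rules map (state, action) to predicted
    next states, are reinforced when predictions match observations,
    and action selection avoids predicted collisions."""

    def __init__(self, actions):
        self.actions = actions
        # Rule table: (state, action) -> {next_state: confidence count}
        self.rules = {}

    def predict(self, state, action):
        # Return the most-reinforced next state, or None if no rule yet.
        outcomes = self.rules.get((state, action), {})
        if not outcomes:
            return None
        return max(outcomes, key=outcomes.get)

    def reinforce(self, state, action, observed_next):
        # Strengthen the rule whose prediction matched the observation.
        outcomes = self.rules.setdefault((state, action), {})
        outcomes[observed_next] = outcomes.get(observed_next, 0) + 1

    def select_action(self, state, is_collision):
        # Prefer actions whose predicted next state is not a collision;
        # fall back to a random action when no safe prediction exists.
        safe = [a for a in self.actions
                if not is_collision(self.predict(state, a))]
        return random.choice(safe) if safe else random.choice(self.actions)
```

Because the rule table is keyed on state transitions rather than on any one environment's layout, the same learned rules could in principle be reused in an environment the robot has never seen, which is the adaptation property the abstract claims.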
The Transactions of the Institute of Electrical Engineers of Japan, C
The Journal of the Institute of Electrical Engineers of Japan