2020, Vol. 56, No. 3, pp. 156-166
In motion planning for robots such as quadrotors, potential field methods are useful for obstacle avoidance. The artificial potential field method, one of these potential field methods, enables us to plan actions. However, quadrotors sometimes fail to avoid obstacles because the artificial potential field method does not account for the inertia effect arising from the quadrotor's velocity. To overcome this inertia effect, we propose applying deep reinforcement learning to the artificial potential field method to determine an additional reference signal for the quadrotor. Thanks to this reference signal, the quadrotor improves its obstacle-avoidance performance through trial and error. The robot thereby achieves an optimal action based on its own velocity and the positions of the obstacles.
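As background for the approach the abstract describes, the classical artificial potential field combines an attractive force toward the goal with repulsive forces from nearby obstacles. The sketch below is a minimal, generic illustration of that standard formulation, not the paper's implementation; the function name, gains (`k_att`, `k_rep`), and influence radius `d0` are illustrative assumptions.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Total artificial-potential-field force at position `pos`.

    A generic textbook formulation (illustrative, not the paper's code):
    the attractive term is the negative gradient of the quadratic
    potential 0.5*k_att*||pos - goal||^2, and each obstacle closer
    than the influence radius `d0` adds a repulsive term.
    """
    # Attractive force pulls the robot straight toward the goal.
    f = -k_att * (pos - goal)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsive force grows rapidly as the robot nears the obstacle
            # and vanishes smoothly at distance d0.
            f += k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return f
```

Because this force law depends only on positions, not on the robot's velocity, a fast-moving quadrotor can overshoot into an obstacle; this is the inertia effect that the deep-reinforcement-learning reference signal in the abstract is meant to compensate.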