2022, Vol. 40, No. 2, pp. 162-165
Motion planning is an essential capability for autonomous navigation of a mobile robot. This paper focuses on end-to-end motion planners: the presented planner predicts a control output directly from an image input, with the planning policy represented by a CNN for the input-output mapping. Both static and dynamic obstacles are targeted. For dynamic obstacles, the robot must take into account time-series variations in the input images; we therefore propose to use an LSTM block within the CNN. Imitation learning is applied to train the planning policy by behavior cloning. The experimental results show the effectiveness of the motion planner through mediated perception, not only for a standing person but also for a walking person as a dynamic obstacle.
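The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of a CNN+LSTM planning policy trained by behavior cloning; the layer sizes, image resolution (3x64x64), sequence length, and two-dimensional control output (e.g. linear and angular velocity) are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class CNNLSTMPlanner(nn.Module):
    """Sketch of a planning policy: per-frame CNN features -> LSTM -> control."""

    def __init__(self, hidden_size=128, control_dim=2):
        super().__init__()
        # Small convolutional encoder applied to each input image (assumed 3x64x64).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = self._feature_dim((3, 64, 64))
        # The LSTM captures time-series variation across the image sequence,
        # which is what lets the policy respond to walking (dynamic) obstacles.
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        # Head maps the hidden state to a control command, assumed here
        # to be (linear velocity, angular velocity).
        self.head = nn.Linear(hidden_size, control_dim)

    def _feature_dim(self, shape):
        # Probe the encoder with a zero image to find its output size.
        with torch.no_grad():
            return self.encoder(torch.zeros(1, *shape)).shape[1]

    def forward(self, images):
        # images: (batch, time, channels, height, width)
        b, t, c, h, w = images.shape
        feats = self.encoder(images.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        # Predict the control output from the last time step's hidden state.
        return self.head(out[:, -1])


# Behavior cloning: regress the policy output onto expert control commands.
policy = CNNLSTMPlanner()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy expert batch: 8 sequences of 5 frames each, with expert (v, w) targets.
images = torch.randn(8, 5, 3, 64, 64)
expert_controls = torch.randn(8, 2)

pred = policy(images)
loss = loss_fn(pred, expert_controls)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the CNN encoder is shared across time steps and the LSTM aggregates the resulting feature sequence, so a single forward pass consumes a short window of consecutive frames; behavior cloning then reduces to supervised regression against recorded expert controls.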