Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 06, 2021 - June 08, 2021
This paper proposes motion planning in a low-dimensional latent state space embedded by a variational autoencoder. We hypothesize that a path passing through latent states with high likelihood serves as a motion plan. Assuming the latent states follow the standard normal distribution, finding such a path can be cast as a two-point boundary value problem of an ordinary differential equation. The proposed method then makes an agent follow the path with a policy pre-trained by reinforcement learning. We evaluated the method in simple maze environments and found that it works well for long-horizon tasks and has a low computational cost.
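The core idea above can be sketched numerically. The snippet below is a minimal, hypothetical illustration, not the paper's actual formulation: it assumes the high-likelihood path minimizes an energy functional of the form ∫(|z'|² + |z|²)dt under the standard normal prior, whose Euler-Lagrange equation is the ODE z'' = z, and solves the resulting two-point boundary value problem with SciPy. The latent dimension, boundary states, and cost are all illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Two-point BVP in a 2-D latent space: find a path z(t) from z0 to z1
# that stays near the high-likelihood region of the standard normal prior.
# Minimizing the (assumed) energy  ∫ (|z'|^2 + |z|^2) dt  yields the
# Euler-Lagrange equation  z'' = z; the paper's exact ODE may differ.

z0 = np.array([-2.0, 0.0])  # start latent state (illustrative)
z1 = np.array([2.0, 2.0])   # goal latent state (illustrative)

def ode(t, y):
    # y stacks [z (2 dims), z' (2 dims)]; the system encodes z'' = z.
    z, dz = y[:2], y[2:]
    return np.vstack([dz, z])

def bc(ya, yb):
    # Boundary residuals: path must start at z0 and end at z1.
    return np.concatenate([ya[:2] - z0, yb[:2] - z1])

t = np.linspace(0.0, 1.0, 50)
y_init = np.zeros((4, t.size))
y_init[:2] = np.linspace(z0, z1, t.size).T  # straight-line initial guess

sol = solve_bvp(ode, bc, t, y_init)
path = sol.sol(np.linspace(0.0, 1.0, 100))[:2].T  # waypoints in latent space
```

The resulting `path` would then be handed to the pre-trained reinforcement-learning policy as a sequence of latent-space waypoints to follow.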