The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2023
Session ID : 2P1-G07

Imitation of path-tracking behavior by end-to-end learning of vision and action
- Investigation of a method to collect datasets and train on them offline -
*Yuki TAKAHASHI, Kazuki SHIRASU, Masaki FUJIWARA, Ryuichi UEDA, Yasuo HAYASHIBARA
Abstract

We investigate a method for offline learning of vision-based path-following behavior from pre-collected images and actions. Our previous method learned such behavior online: it imitates behavior generated by self-localization with a LiDAR sensor, producing a controller that takes camera images as input instead. However, this imitation learning required a long training time. We therefore attempt to shorten the training time through offline learning. Furthermore, we clarify how much visual information around the path is required to apply the method to a real robot. As a result, we verified that the proposed method shortens the training time, and we also clarified the required visual information through experiments.
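The abstract describes offline imitation learning (behavior cloning) from a pre-collected dataset of image-action pairs, where the "expert" actions come from a LiDAR-based localization controller. As a rough illustration of that idea only, the sketch below fits a linear vision-to-action map by batch gradient descent on synthetic data; the feature dimension, dataset size, learning rate, and linear model are all invented for the example and are not the authors' actual network or data.

```python
import numpy as np

# Hedged sketch of offline behavior cloning, NOT the paper's implementation.
# A pre-collected dataset pairs visual features (stand-ins for camera images)
# with the steering commands an expert LiDAR-based controller produced.

rng = np.random.default_rng(0)

# Hypothetical dataset: 256 samples of 32-dimensional image features, with
# the expert's scalar steering command for each (linear + small noise).
X = rng.normal(size=(256, 32))
expert_w = rng.normal(size=32)
y = X @ expert_w + 0.01 * rng.normal(size=256)  # expert steering commands

# Offline training: since the dataset is fixed, we simply iterate gradient
# descent on the mean-squared imitation error, with no robot in the loop.
w = np.zeros(32)
lr = 0.01
for _ in range(1000):
    grad = (2.0 / len(X)) * X.T @ (X @ w - y)
    w -= lr * grad

mse = float(np.mean((X @ w - y) ** 2))
print(f"final imitation MSE: {mse:.5f}")
```

Because training touches only stored data, the loop can run much faster than online imitation, which must wait for the robot to act between updates; that is the speedup the abstract targets.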

© 2023 The Japan Society of Mechanical Engineers