Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 28, 2023 - July 01, 2023
Visual Teach & Repeat Navigation (VT&R) is a traditional way to let a mobile robot navigate without explicit localization. Most VT&R methods match images that the robot recorded while being operated by a human (the teaching process) against images captured while running autonomously (the repeating process), using techniques ranging from classical image processing to deep learning. In this paper, we investigate how such image-matching methods perform in outdoor VT&R scenarios with extreme appearance changes in the environment. Our offline experimental results suggest that deep feature-point extraction and matching techniques such as SuperPoint [1] and SuperGlue [2] enable accurate repeat navigation even when the illumination conditions change drastically between the teaching and repeating processes.
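To illustrate the teach/repeat image-matching step described above, the following is a minimal sketch using classical ORB features in OpenCV, i.e. one of the "classical image processing" approaches mentioned in the abstract rather than the learned SuperPoint/SuperGlue pipeline the paper evaluates. The file names, feature count, and ratio threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of matching a teaching-process frame against a repeating-process
# frame with classical ORB features (illustrative; the paper focuses on learned
# matchers such as SuperPoint/SuperGlue). Paths and thresholds are assumptions.
import cv2

# Hypothetical image files: one frame stored during teaching, one captured during repeating.
teach_img = cv2.imread("teach_frame.png", cv2.IMREAD_GRAYSCALE)
repeat_img = cv2.imread("repeat_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in both frames.
orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(teach_img, None)
kp_r, des_r = orb.detectAndCompute(repeat_img, None)

# Brute-force Hamming matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des_t, des_r, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

# A common VT&R control cue: the mean horizontal offset between matched keypoints
# indicates how far the robot has drifted sideways from the taught path.
offsets = [kp_r[m.trainIdx].pt[0] - kp_t[m.queryIdx].pt[0] for m in good]
steering_cue = sum(offsets) / len(offsets) if offsets else 0.0
print(f"{len(good)} matches, mean horizontal offset: {steering_cue:.1f} px")
```

In a learned pipeline, the detector/descriptor and matcher above would be replaced by SuperPoint keypoints and SuperGlue correspondences, which the offline experiments suggest remain reliable under drastic illumination change.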