Host: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2023
Dates: 2023/06/28 - 2023/07/01
Visual Teach & Repeat (VT&R) navigation is a traditional way to let a mobile robot navigate without explicit localization. Most VT&R methods match images captured while the robot was operated by a human (the teaching process) against images captured while it runs autonomously (the repeating process), using techniques ranging from classical image processing to deep learning. In this paper, we investigate how these image matching methods perform in outdoor VT&R scenarios with extreme appearance changes in the environment. Our offline experimental results suggest that deep feature point extraction and matching techniques such as SuperPoint [1] and SuperGlue [2] enable accurate repeat navigation even when the illumination conditions change drastically from those of the teaching process.
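To illustrate the teach-vs-repeat matching idea described above, the sketch below implements a minimal classical baseline: mutual nearest-neighbour matching of feature descriptors with a ratio test, followed by a median horizontal displacement of matched keypoints that a VT&R controller could use as a steering correction. This is a simplified stand-in, not the paper's actual pipeline; the function names, the ratio threshold, and the use of raw descriptor distances are illustrative assumptions (learned matchers such as SuperGlue replace this heuristic stage entirely).

```python
import numpy as np

def mutual_nn_matches(desc_teach, desc_repeat, ratio=0.8):
    """Match two descriptor sets (N x D, M x D) by mutual nearest neighbour.

    A simplified classical baseline: keeps a pair (i, j) only if each
    descriptor is the other's nearest neighbour and the match passes a
    ratio test against the second-nearest candidate.
    """
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_teach[:, None, :] - desc_repeat[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)  # best repeat-image match for each teach descriptor
    nn21 = d.argmin(axis=0)  # best teach-image match for each repeat descriptor
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:
            continue  # not a mutual nearest neighbour; discard
        row = np.sort(d[i])
        if len(row) > 1 and row[0] > ratio * row[1]:
            continue  # ambiguous: nearly as close to a second candidate
        matches.append((i, j))
    return matches

def heading_correction(kp_teach, kp_repeat, matches):
    """Median horizontal keypoint shift (pixels) between teach and repeat.

    In a VT&R-style controller, the sign and magnitude of this shift can
    drive a steering correction back toward the taught path.
    """
    dx = [kp_repeat[j][0] - kp_teach[i][0] for i, j in matches]
    return float(np.median(dx)) if dx else 0.0
```

Usage with synthetic data: if the repeat-run descriptors are slightly perturbed copies of the teach-run descriptors and every keypoint is shifted 5 px to the right, the matcher recovers the identity correspondences and `heading_correction` returns roughly 5.0.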