Organizer: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2021 (ROBOMECH 2021)
Dates: 2021/06/06 - 2021/06/08
Unlike methods that extract feature descriptors such as SIFT and SURF, semantic segmentation is strongly robust to appearance changes such as cross-seasonal and day-night variations. In this paper, we propose an advanced visual localization method that uses semantically segmented images together with a semantic mesh map built from annotated LiDAR scan data, and we address the problems of a previous study. In the localization phase, we use conventional Monte-Carlo Localization and compute the likelihood by comparing the segmented image from the on-board camera with an image of the mesh-map landscape rendered from each predicted candidate pose. This method achieves practical localization accuracy while retaining the benefits of semantic segmentation. The source code used in this experiment is available at the following GitHub page: github.com/amslabtech/semantic_mesh_localization
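As a rough illustration of the likelihood step described above, the Python sketch below reweights particles by the pixel-wise label agreement between the camera segmentation and a view rendered from the semantic mesh map. The Particle class, the render_semantic_view function, and the matching-ratio weighting are assumptions made for illustration only; they are not taken from the implementation in the linked repository.

    # Illustrative sketch of the Monte-Carlo Localization measurement update.
    # Assumes render_semantic_view(pose) rasterizes the semantic mesh map as
    # seen from `pose` and returns a label image aligned with the camera.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Particle:
        pose: np.ndarray   # e.g. [x, y, yaw]; representation is an assumption
        weight: float = 1.0

    def particle_likelihood(observed_labels: np.ndarray,
                            rendered_labels: np.ndarray,
                            ignore_label: int = 255) -> float:
        """Score one particle by how many pixels of the segmented camera image
        agree with the mesh-map view rendered from the particle's pose."""
        valid = rendered_labels != ignore_label      # skip pixels with no map geometry
        if not valid.any():
            return 1e-6                              # small floor avoids zero weights
        match_ratio = (observed_labels[valid] == rendered_labels[valid]).mean()
        return max(float(match_ratio), 1e-6)

    def update_weights(particles, observed_labels, render_semantic_view):
        """One measurement update: reweight all particles and normalize."""
        weights = np.array([
            particle_likelihood(observed_labels, render_semantic_view(p.pose))
            for p in particles
        ])
        weights /= weights.sum()
        for p, w in zip(particles, weights):
            p.weight = float(w)
        return particles

After this update, a standard resampling step (e.g. low-variance resampling) would draw the next particle set in proportion to these weights, as in conventional Monte-Carlo Localization.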