The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomech)
Online ISSN: 2424-3124
Session ID: 1A1-G16
Conference information

Self-Localization of a Mobile Robot by View-Based Navigation Using Street View Images
—Improving Self-Localization Ability by Removing Road and Sky Regions—
*松崎 伸彦, 三上 貞芳
Abstract

Self-localization is essential for navigation and is generally performed outdoors by GPS. However, GPS tends to produce large errors where radio waves are reflected, such as in urban areas, which sometimes prevents precise self-localization. Meanwhile, a human may collate his or her surroundings with street view images when grasping the current location. To implement this on a robot, we have to solve image matching between the current scene and the images in a street view database. However, since the field angle, time of day, and season differ widely between images, standard feature-based pattern matching is difficult. DeepMatching can precisely match images that differ in lighting and field angle. Nevertheless, DeepMatching tends to misjudge street images because it may find unnecessary feature points in the road and sky. This paper proposes a method that computes image similarity from features such as buildings by excluding road and sky regions. This paper also investigates appropriate parameters through experiments using various images and resolutions.
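The abstract does not include code, so the following is only a minimal Python sketch of the road/sky exclusion idea, under stated assumptions: segment_road_and_sky() stands in for any semantic-segmentation routine returning a boolean mask (True = road or sky pixel), and deep_matching() stands in for a DeepMatching implementation returning correspondences as (x1, y1, x2, y2, score) rows. Neither name comes from the paper.

import numpy as np

def filtered_similarity(query_img, streetview_img,
                        segment_road_and_sky, deep_matching):
    """Score how well the query scene matches a street view image,
    ignoring correspondences that fall on road or sky regions.

    Both segment_road_and_sky and deep_matching are hypothetical
    callables supplied by the caller (assumptions, not the paper's code).
    """
    matches = deep_matching(query_img, streetview_img)   # (N, 5): x1, y1, x2, y2, score
    if len(matches) == 0:
        return 0.0

    mask_q = segment_road_and_sky(query_img)        # bool HxW mask for the query image
    mask_s = segment_road_and_sky(streetview_img)   # bool HxW mask for the street view image

    kept_scores = []
    for x1, y1, x2, y2, score in matches:
        # Discard a correspondence if either endpoint lies on road or sky.
        if mask_q[int(y1), int(x1)] or mask_s[int(y2), int(x2)]:
            continue
        kept_scores.append(score)

    # Aggregate the matching scores of the surviving (building-like)
    # correspondences; other aggregations (count, mean) are also possible.
    return float(np.sum(kept_scores))

In use, the query image would be scored against each image in the street view database and the image with the highest filtered similarity taken as the estimated location, mirroring the view-based matching described in the abstract.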

© 2021 The Japan Society of Mechanical Engineers