Abstract
Recently, view-based (or appearance-based) approaches have been attracting increasing interest in computer vision research. We have previously proposed a view-based navigation method that uses a model of the route called the “View Sequence,” which stores a sequence of front views memorized along the route during a recording run. In this paper, we apply an omnidirectional vision sensor to our view-based navigation and propose an extended route model called the “Omni-View Sequence.” We then propose a map representation, named the “View-Sequenced Map,” which can represent an entire corridor environment on a floor of a building, and describe a method for acquiring a View-Sequenced Map automatically through exploration by a robot in a corridor using both stereo and omnidirectional vision. Finally, experimental results of autonomous navigation and map acquisition are presented to demonstrate the feasibility of the proposed map representation.