Abstract
We propose a method of vision-based navigation for mobile robots in indoor environments using a teaching and playback scheme. The method uses an omni-directional camera to perceive environmental information. At the teaching stage, the robot memorizes a sequence of its own motions together with omni-directional images as environmental information, and extracts vertical edges from the images as landmarks. During autonomous navigation, the robot compares the currently captured image with the memorized one to estimate its relative position from the matched vertical edges, and then calculates a trajectory in real time to track the given route. In this paper, we report on the use of color information in the areas on both sides of vertical edges to match the edges more robustly. We also explain how the taught path information is used to construct a map with a graph representation, enabling navigation in a corridor network environment.
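The idea of using the color of the areas flanking each vertical edge to disambiguate matches can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the strip width, the mean-RGB descriptor, the greedy matching, and the distance threshold are all assumptions made for the example.

```python
import numpy as np

def side_color_descriptor(image, edge_x, strip_width=5):
    """Mean RGB color of narrow strips immediately left and right of a
    vertical edge at column edge_x (hypothetical simplification of the
    side-area color features described in the abstract)."""
    h, w, _ = image.shape
    left = image[:, max(0, edge_x - strip_width):edge_x]
    right = image[:, edge_x + 1:min(w, edge_x + 1 + strip_width)]
    return left.reshape(-1, 3).mean(axis=0), right.reshape(-1, 3).mean(axis=0)

def match_edges(taught_img, taught_edges, current_img, current_edges,
                thresh=30.0):
    """Greedily match taught vertical edges (given as column indices) to
    edges in the current image by comparing the colors on both sides of
    each edge; a pair matches only if the mean color distance of the two
    sides is below thresh."""
    matches = []
    for tx in taught_edges:
        tl, tr = side_color_descriptor(taught_img, tx)
        best, best_d = None, thresh
        for cx in current_edges:
            cl, cr = side_color_descriptor(current_img, cx)
            d = (np.linalg.norm(tl - cl) + np.linalg.norm(tr - cr)) / 2.0
            if d < best_d:
                best, best_d = cx, d
        if best is not None:
            matches.append((tx, best))
    return matches
```

Because both side colors must agree, an edge between a red and a blue region will not be confused with an edge inside a uniformly red region, which is the kind of ambiguity that position-only edge matching suffers from.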