2018, Vol. 54, No. 5, pp. 483-493
The goal of our research is to construct intelligence for autonomous robots that navigate through the real world using images from on-board cameras. Localization is one of the key elements of autonomous navigation. In this paper, we propose grid-based localization from a single image using deep learning. By using a grid map, the uncertainty of localization can be defined in a natural way, which is useful for fusion with other sensors. The network takes spherical panoramic images, together with the position and orientation of the robot, as training data. Position and orientation are expressed by a multi-dimensional grid. The network behaves as a classifier, where each grid cell corresponds to an individual class and its output probability represents the probability of that position and orientation. The experimental results support the effectiveness of the proposed method.
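To illustrate the grid-as-classes idea, the following is a minimal sketch in PyTorch: a network that maps a panoramic image to a probability over every cell of a joint (x, y, heading) grid. The grid sizes, the backbone, and all names (`GridLocalizationNet`, `nx`, `ny`, `ntheta`) are illustrative assumptions for exposition, not the authors' actual architecture or parameters.

```python
import torch
import torch.nn as nn

class GridLocalizationNet(nn.Module):
    """Classifies a panoramic image into one cell of an (x, y, heading) grid."""

    def __init__(self, nx=20, ny=20, ntheta=8):
        super().__init__()
        self.grid_shape = (nx, ny, ntheta)
        num_classes = nx * ny * ntheta  # one class per grid cell
        # Simple convolutional backbone (placeholder, not the paper's network).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, image):
        h = self.features(image).flatten(1)
        logits = self.classifier(h)
        # Softmax over all cells yields a probability for each (x, y, heading)
        # cell, i.e. a discrete distribution over position and orientation.
        probs = torch.softmax(logits, dim=1)
        return probs.view(-1, *self.grid_shape)

# Usage: one panoramic image (batch of 1) -> probability grid.
net = GridLocalizationNet()
image = torch.randn(1, 3, 128, 256)   # placeholder spherical panorama tensor
grid_probs = net(image)               # shape: (1, 20, 20, 8)
```

Because the output is a normalized distribution over grid cells rather than a single pose estimate, it can be treated as a measurement likelihood and combined with other sensors on the same grid, which is the fusion benefit mentioned in the abstract.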