Abstract
In this paper, the problem of memory-based robot self-localization is approached via a novel 3D reconstruction technique. A single-view 3D reconstruction method based on the "Manhattan world" assumption is introduced to synthesize views at various candidate viewpoints. Experiments are conducted on an indoor mobile robot equipped with an omnidirectional vision sensor.