Highly accurate environmental perception is essential for autonomous mobile robots, and saliency has been studied as a means of identifying conspicuous regions and areas of interest. Research on saliency began with the foundational model proposed by Koch and Ullman, and numerous methods have since been developed that take 2D images or RGB-D data as input. However, such image-based approaches do not adequately capture the 3D spatial features required for autonomous navigation. To meet the accuracy requirements of autonomous mobile robots, saliency must therefore be computed directly from 3D point clouds. In this study, we construct a topological structure from 3D point clouds using Growing Neural Gas (GNG), an unsupervised learning method, and generate a saliency map based on a center-surround suppression mechanism. This approach identifies regions of focus in finer detail than previous studies. Experiments on both benchmark datasets and real-world 3D point clouds demonstrate the effectiveness of the proposed method.
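The sketch below illustrates the general idea described above, not the authors' implementation: a standard Growing Neural Gas network is fitted to a 3D point cloud to obtain a topological graph, and a simple center-surround contrast is then computed over that graph. The node feature used here (mean incident-edge length as a local-sparsity proxy), the omission of isolated-node pruning, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed, simplified): GNG on a 3D point cloud + center-surround
# contrast over the learned graph. Features, parameters, and the suppression form
# are illustrative choices, not taken from the paper.
import numpy as np

def grow_neural_gas(points, n_iter=5000, max_nodes=100, eps_b=0.05, eps_n=0.006,
                    age_max=50, lam=100, alpha=0.5, d=0.995, seed=0):
    rng = np.random.default_rng(seed)
    # start with two nodes placed near random samples of the cloud
    nodes = [points[rng.integers(len(points))].astype(float) + rng.normal(0, 1e-3, 3)
             for _ in range(2)]
    errors = [0.0, 0.0]
    edges = {}  # (i, j) with i < j -> edge age

    def neighbors(i):
        return [b if a == i else a for (a, b) in edges if a == i or b == i]

    for t in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]
        dists = np.linalg.norm(np.array(nodes) - x, axis=1)
        s1, s2 = np.argsort(dists)[:2]            # winner and runner-up
        errors[s1] += dists[s1] ** 2
        nodes[s1] = nodes[s1] + eps_b * (x - nodes[s1])
        for n in neighbors(s1):                   # drag topological neighbors along
            nodes[n] = nodes[n] + eps_n * (x - nodes[n])
        for e in list(edges):                     # age edges incident to the winner
            if s1 in e:
                edges[e] += 1
        edges[tuple(sorted((int(s1), int(s2))))] = 0   # create/refresh winner edge
        for e in [e for e, age in edges.items() if age > age_max]:
            del edges[e]                          # drop stale edges
        # NOTE: removal of isolated nodes is omitted here for brevity
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))            # node with largest accumulated error
            nbrs = neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda n: errors[n])
                nodes.append(0.5 * (nodes[q] + nodes[f]))  # insert between q and f
                errors[q] *= alpha
                errors[f] *= alpha
                errors.append(errors[q])
                r = len(nodes) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        errors = [e * d for e in errors]          # global error decay
    return np.array(nodes), list(edges)

def center_surround_saliency(nodes, edges):
    """Contrast of a node-local feature against the mean over its graph neighbors."""
    adj = {i: set() for i in range(len(nodes))}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # assumed feature: mean length of incident edges (crude local-sparsity proxy)
    feat = np.zeros(len(nodes))
    for i, nb in adj.items():
        if nb:
            feat[i] = np.mean([np.linalg.norm(nodes[i] - nodes[j]) for j in nb])
    sal = np.zeros(len(nodes))
    for i, nb in adj.items():
        if nb:
            surround = np.mean([feat[j] for j in nb])
            sal[i] = abs(feat[i] - surround)      # center-surround difference
    return sal / (sal.max() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal(size=(2000, 3))            # stand-in for a real 3D scan
    nodes, edges = grow_neural_gas(cloud)
    saliency = center_surround_saliency(nodes, edges)
    print(nodes.shape, saliency.round(2)[:10])
```

In practice the synthetic `cloud` would be replaced by a real scan, and the node feature and suppression scheme would follow the paper; the sketch only shows how a GNG graph can serve as the support for per-node center-surround contrast.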