2025, Vol. 37, No. 6, pp. 1283-1292
The mainstream approach employing light detection and ranging (LiDAR) estimates the self-position of a mobile robot by matching the point cloud acquired during navigation against one recorded in advance, so that the robot can autonomously navigate to a goal point. However, this method is vulnerable to environmental changes, and considerable effort and expense are required to construct and update the point cloud map. In this paper, we therefore propose an autonomous navigation method that does not require building a point cloud map by visiting the site in advance and that is robust against environmental changes. The proposed method navigates autonomously using RTK-GNSS, the deep-learning methods of semantic segmentation and YOLO, the A* algorithm for path planning, and the pure pursuit algorithm for path following. Furthermore, obstacle avoidance is carried out using semantic segmentation, YOLO, and kernel density estimation. We conducted a navigation experiment in which a 300 m section was traversed autonomously, thus verifying the validity of the proposed method.
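The abstract names pure pursuit as the path-following step but gives no implementation details. As a rough illustration only, the sketch below shows the standard pure pursuit steering computation for a waypoint path such as one an A* planner might output; the function name and the lookahead and wheelbase parameters are placeholders chosen here, not values taken from the paper.

```python
import math

def pure_pursuit_steering(pose, path, lookahead=2.0, wheelbase=1.0):
    """Standard pure pursuit: steer toward a point on the path a fixed
    lookahead distance ahead of the robot (illustrative sketch, not the
    authors' implementation).

    pose: (x, y, yaw) of the robot in the world frame [m, m, rad].
    path: list of (x, y) waypoints, e.g. from a grid-based A* planner.
    """
    x, y, yaw = pose

    # Pick the first waypoint at least `lookahead` metres away from the robot.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break

    # Bearing to the target expressed in the robot frame.
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw

    # Pure pursuit curvature kappa = 2 sin(alpha) / L_d, converted to a
    # front-wheel steering angle for a bicycle-model vehicle.
    curvature = 2.0 * math.sin(alpha) / lookahead
    return math.atan(wheelbase * curvature)
```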