2025, Vol. 37, No. 2, pp. 466-477
This paper describes a gesture interface for operating autonomous mobile robots (AMRs) used for transportation in industrial factories. The proposed interface recognizes the pointing directions of human operators, i.e., workers in the factory, by applying deep learning to images captured by a fovea-lens camera. The interface classifies pointing gestures into seven directions with a recognition accuracy of 0.89. This paper also introduces a navigation method that allows the AMR to act on the proposed interface: the AMR approaches the pointed target by adjusting its horizontal angle based on object recognition from RGB images. With the proposed gesture interface and navigation method, the AMR achieved high positioning accuracy, with a mean position error of 0.052 m.
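The abstract mentions that the AMR adjusts its horizontal angle based on object recognition in RGB images. A minimal sketch of one common way to do this is shown below, assuming a pinhole camera model in which the pixel offset of a detected target from the image center maps approximately linearly to a bearing angle within the camera's horizontal field of view; the function name `heading_correction` and the default field of view are hypothetical illustrations, not details from the paper.

```python
import math

def heading_correction(bbox_center_x: float, image_width: int,
                       horizontal_fov_deg: float = 60.0) -> float:
    """Return a yaw correction (radians) that would center the target.

    Assumes a pinhole camera: the pixel offset from the image center
    is mapped linearly to a bearing within the horizontal field of view.
    All parameters here are illustrative assumptions.
    """
    offset_px = bbox_center_x - image_width / 2.0        # signed pixel offset
    deg_per_pixel = horizontal_fov_deg / image_width     # linear approximation
    return math.radians(offset_px * deg_per_pixel)

# A target detected left of center yields a negative (counter-clockwise) turn;
# a centered target needs no correction.
print(heading_correction(200.0, 640))
print(heading_correction(320.0, 640))
```

In a closed loop, this correction would be recomputed on each new frame as the robot turns, so the linear approximation only needs to be accurate near the image center.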