Abstract
Environmental mapping plays an important role in autonomous mobile robots. To add semantic information to a map for a robot, smooth human-robot interaction is required when teaching object names. As a first step, we have developed a prototype system that estimates the 3D direction of a tutor's hand-pointing gesture using a range camera. The system also has functions to project a mark into the workspace and to recognize the tutor's voice.