Abstract
There has been much research on communication between humans and robots. The authors are working on controlling a robot through users' intuitive gestures. In this paper, the system is extended to handle symbolic expressions so that it is more convenient for the human side. First, the robot learns how users want to control it with intuitive gestures. Gestures are classified using growing self-organizing maps, and the system learns the correspondence between each gesture and the command to be executed. To make the communication scheme easier to extend, correspondences between gestures and symbolic expressions conveyed through LEDs and sound have also been developed. Simple experiments show the feasibility of the proposal.
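The core idea of classifying gestures with a growing self-organizing map and labeling the winning node with a command can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the feature vectors, the `insert_threshold` growth rule, and the labels (`"raise_arm"`, `"wave"`) are all hypothetical simplifications.

```python
import math
import random

class GrowingSOM:
    """Minimal growing SOM sketch: prototype vectors move toward inputs,
    and a new node is inserted when the best-matching prototype is too
    far away (a hypothetical simplification of the growth rule)."""

    def __init__(self, insert_threshold=0.5, lr=0.3):
        self.nodes = []                       # list of (prototype, command label)
        self.insert_threshold = insert_threshold
        self.lr = lr                          # learning rate for the winner

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def train(self, vec, label):
        if not self.nodes:
            self.nodes.append((list(vec), label))
            return
        # find the best-matching unit (BMU)
        i, d = min(((i, self._dist(p, vec)) for i, (p, _) in enumerate(self.nodes)),
                   key=lambda t: t[1])
        if d > self.insert_threshold:
            self.nodes.append((list(vec), label))      # grow: add a new prototype
        else:
            proto, _ = self.nodes[i]
            for k in range(len(proto)):
                proto[k] += self.lr * (vec[k] - proto[k])  # move BMU toward input
            self.nodes[i] = (proto, label)

    def classify(self, vec):
        # nearest prototype decides which learned gesture (command) this is
        _, label = min(self.nodes, key=lambda n: self._dist(n[0], vec))
        return label

# Toy 2-D gesture features (e.g. wrist speed, vertical displacement) — made up
random.seed(0)
som = GrowingSOM()
for _ in range(50):
    som.train([random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)], "raise_arm")
    som.train([random.gauss(0.9, 0.05), random.gauss(0.1, 0.05)], "wave")

print(som.classify([0.25, 0.75]))  # near the first cluster -> "raise_arm"
print(som.classify([0.85, 0.15]))  # near the second cluster -> "wave"
```

Because the two toy clusters are farther apart than `insert_threshold`, the map grows separate prototypes for each gesture, and classification reduces to a nearest-prototype lookup that returns the associated command label.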