2007 Volume 73 Issue 729 Pages 1408-1415
In this study, we propose an object-handing robot system with a multimodal human-machine interface composed of speech-recognition and image-processing units. Through this interface, the cooperator can command the object-handing robot system using voice commands and hand gestures. The robot's motion parameters (maximum velocity, velocity-profile peak, and handing position) can be adjusted by voice commands or hand gestures in order to realize the most appropriate motion of the robot. Furthermore, the cooperator can order the handing of objects using voice commands combined with hand gestures. These voice commands may include adverbs, which permit efficient adjustment because the adjustment value of each motion parameter is determined by the adverb. In particular, the adjustment values corresponding to adverbs are estimated by fuzzy inference in order to take into account the ambiguity of human speech.
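The mapping from adverbs to adjustment values via fuzzy inference could be sketched as follows. This is a minimal illustration, not the paper's actual design: the adverb vocabulary, membership functions, and rule outputs are all assumptions chosen for the example, and a Sugeno-style weighted average is used for defuzzification.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Assumed mapping of adverbs to a normalized intensity in [0, 1].
ADVERB_INTENSITY = {"slightly": 0.2, "somewhat": 0.5, "much": 0.8}

# Illustrative rules: fuzzy set over intensity -> adjustment value
# expressed as a fraction of the motion parameter's range.
RULES = [
    ((0.0, 0.2, 0.5), 0.1),   # low intensity  -> small adjustment
    ((0.2, 0.5, 0.8), 0.3),   # medium         -> moderate adjustment
    ((0.5, 0.8, 1.0), 0.6),   # high           -> large adjustment
]

def adjustment_fraction(adverb):
    """Fire all rules on the adverb's intensity and defuzzify by
    weighted average of the rule outputs (Sugeno-style)."""
    x = ADVERB_INTENSITY[adverb]
    num = den = 0.0
    for (a, b, c), out in RULES:
        w = triangular(x, a, b, c)   # degree to which this rule fires
        num += w * out
        den += w
    return num / den if den else 0.0
```

A command such as "hand it a little faster" would then scale the maximum-velocity parameter by `adjustment_fraction("slightly")` of its range, so vaguer, stronger adverbs yield proportionally larger changes.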