Abstract
We have developed a human-machine interface for object manipulation by a multi-fingered robotic hand. Manipulation requires several kinds of information, such as the desired pose of the manipulated object and the contact points on it. Because entering this information manually is tedious for the user, we have developed a teaching-by-showing system that obtains it by measuring the human hand and its surroundings with an omni-directional camera. The user holds and moves the camera as an input device to specify the desired pose of the manipulated object; the pose is computed by matching features extracted from images taken by the camera. Since the hand grasps the cylindrical surface of the camera, the desired contact points are also obtained by processing images of the hand taken by the camera.
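To illustrate the kind of feature-based pose computation mentioned above, the following is a minimal sketch, not the paper's implementation: it assumes a calibrated pinhole camera and OpenCV's ORB features with essential-matrix decomposition, whereas the actual system uses an omni-directional camera with a different projection model. All file names, parameters, and the intrinsics matrix K below are hypothetical placeholders.

```python
# Illustrative sketch only: relative pose between two frames from feature matches,
# assuming a calibrated pinhole camera (the paper's omni-directional model differs).
import cv2
import numpy as np

def estimate_relative_pose(img1, img2, K):
    """Estimate rotation R and translation direction t between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary ORB descriptors using Hamming distance with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix, then decompose it into R and t.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

if __name__ == "__main__":
    # Hypothetical image files and intrinsics; real use requires camera calibration.
    img1 = cv2.imread("frame_initial.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_desired.png", cv2.IMREAD_GRAYSCALE)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = estimate_relative_pose(img1, img2, K)
    print("Rotation:\n", R, "\nTranslation direction:\n", t)
```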