Abstract
A grasping algorithm that takes into account both visual and tactile feedback has been developed for a system consisting of a multifingered hand and a vision system. The grasping process consists of two phases: the non-contact phase, i.e. the approach of the robot hand to the object, is realized using visual information, while the contact phase, i.e. the touching of the object by the robot fingers, is carried out using both visual and tactile information. To achieve these two processes, we present an original algorithm that allows a multifingered hand to grasp an object using visual and tactile feedback. The algorithm takes two types of motion into account: the grasping motion, which brings the fingers to the object surface, and the preshaping motion, which changes the shape of the hand to an optimal configuration. Because both types of motion are controlled by sensory feedback, the system adapts to changes in the environment. The effectiveness of the proposed algorithm has been verified experimentally.