Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 06, 2021 - June 08, 2021
Service robots are expected to replace people in many settings, such as homes and factories. However, teaching robots how to perform behaviors that humans already carry out requires considerable effort. To expand the use of service robots, a support system for teaching them is needed, one that can easily transfer behaviors from humans to robots. In this paper, we propose a teaching system that extracts hand-object interactions from first-person view videos acquired by a camera attached to a person's head. The proposed system generates a sequence of hand motions by extracting three types of simple motion elements (Translate, Rotate, and Grasp) from the first-person view video, and this sequence serves as teaching information for a robot. In the experiment, we confirm the usefulness of the system by having a person actually perform a pouring task as an example, extracting the motion-element sequence from the first-person view video, and reconstructing the robot's pouring motion from that sequence.
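As a rough illustration of the idea, the extracted teaching information can be thought of as a list of typed motion elements with parameters. The sketch below is not from the paper; the element names follow the abstract (Translate, Rotate, Grasp), but the data structure, parameter names, and the particular pouring sequence are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class MotionElement(Enum):
    """The three simple motion elements named in the abstract."""
    TRANSLATE = auto()
    ROTATE = auto()
    GRASP = auto()

@dataclass
class MotionStep:
    """One step of the extracted motion sequence (illustrative schema)."""
    element: MotionElement
    params: dict = field(default_factory=dict)  # e.g. target pose, tilt angle

def pouring_sequence() -> list[MotionStep]:
    # Hypothetical sequence for a pouring task: grasp the container,
    # carry it above the cup, tilt to pour, tilt back, return, release.
    return [
        MotionStep(MotionElement.GRASP, {"close_gripper": True}),
        MotionStep(MotionElement.TRANSLATE, {"target": "above_cup"}),
        MotionStep(MotionElement.ROTATE, {"angle_deg": 90}),
        MotionStep(MotionElement.ROTATE, {"angle_deg": -90}),
        MotionStep(MotionElement.TRANSLATE, {"target": "start_pose"}),
        MotionStep(MotionElement.GRASP, {"close_gripper": False}),
    ]
```

A robot-side executor could then replay such a sequence step by step, mapping each element to a corresponding low-level controller.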