Host: The Japan Society of Mechanical Engineers
Name : [in Japanese]
Date : June 06, 2021 - June 08, 2021
We propose a method for a robotic buttoning task performed by a dual-arm robot driven by deep neural networks (DNNs). A previous study showed that a robot could perform buttoning tasks using marker-based recognition; however, several issues remained. In this study, we trained two DNNs, a convolutional autoencoder (CAE) and a long short-term memory network (LSTM), on the robot's task experiences, which consist of camera images and the robot's joint angles. We verified our method through simulator and real-robot experiments. The method generated motion online based on sensor information acquired from the robot, and the robot successfully performed the buttoning task. In addition, the results showed that our method could generate smooth motion from only a single viewpoint.
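The abstract describes a common CAE+LSTM motion-generation pipeline: a CAE compresses each camera image into a low-dimensional feature vector, and an LSTM takes the image features together with the current joint angles and predicts the next joint command, closing the loop online. The following is a minimal pure-Python sketch of that loop under stated assumptions; the dimensions (`IMG`, `FEAT`, `JOINTS`, `HID`), the linear stand-in for the CAE encoder, and the random untrained weights are all hypothetical illustrations, not the authors' actual architecture.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Matrix-vector product over plain Python lists.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

class Encoder:
    """Stand-in for the CAE encoder: maps a flattened image to a feature vector.
    (A real CAE would use convolutional layers; this is a hypothetical sketch.)"""
    def __init__(self, img_dim, feat_dim):
        self.W = rand_matrix(feat_dim, img_dim)
    def __call__(self, image):
        return [math.tanh(y) for y in matvec(self.W, image)]

class LSTMCell:
    """Minimal LSTM cell using the standard gate equations (biases omitted)."""
    def __init__(self, in_dim, hid_dim):
        self.W = {g: rand_matrix(hid_dim, in_dim + hid_dim) for g in "ifoc"}
    def step(self, x, h, c):
        z = x + h  # concatenate input and previous hidden state
        i = [sigmoid(v) for v in matvec(self.W["i"], z)]  # input gate
        f = [sigmoid(v) for v in matvec(self.W["f"], z)]  # forget gate
        o = [sigmoid(v) for v in matvec(self.W["o"], z)]  # output gate
        g = [math.tanh(v) for v in matvec(self.W["c"], z)]  # candidate cell
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

# Hypothetical dimensions: 64-pixel image, 8 image features,
# 14 joint angles (dual arm), 16 hidden units.
IMG, FEAT, JOINTS, HID = 64, 8, 14, 16
encoder = Encoder(IMG, FEAT)
lstm = LSTMCell(FEAT + JOINTS, HID)
W_out = rand_matrix(JOINTS, HID)  # readout: hidden state -> next joint command

h, c = [0.0] * HID, [0.0] * HID
joints = [0.0] * JOINTS
trajectory = []
for t in range(5):  # online loop: sense -> encode -> predict -> command
    image = [random.random() for _ in range(IMG)]  # placeholder camera frame
    feat = encoder(image)
    h, c = lstm.step(feat + joints, h, c)
    joints = [math.tanh(v) for v in matvec(W_out, h)]  # next joint angles
    trajectory.append(joints)

print(len(trajectory), len(trajectory[0]))  # 5 14
```

Because the next command feeds back in as input at the following step, motion is generated online from whatever the camera currently sees, which is what allows operation from a single viewpoint without markers.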