Host: The Japan Society of Mechanical Engineers
Name : [in Japanese]
Date : May 29, 2024 - June 01, 2024
For handling large objects, a robot uses both of its arms and hands. In this paper, we apply a so-called view-based teach-and-playback approach to a dual-arm robot. In this approach, the robot is required to measure the position of the target object. However, it is difficult for the robot to measure this position accurately using, for instance, an RGB-D camera. In this paper, therefore, we allow the robot to estimate the grasping positions of both hands without measuring the target position. This grasping position estimator is based on deep neural networks. The robot then modifies the instructed motions by comparing the estimated grasping positions with the instructed ones. In the experiments, we show that a robot using the proposed grasping position estimator is able to reach its hands toward a target object placed at various locations on a workbench.
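
One way to read the motion-modification step described above is as a rigid offset: each hand's instructed trajectory is shifted by the difference between the estimated and instructed grasping positions. The sketch below illustrates this interpretation only; the function name, the linear-offset scheme, and the per-hand independence are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def modify_instructed_motion(instructed_traj, instructed_grasp, estimated_grasp):
    """Shift an instructed hand trajectory (N x 3 waypoints) by the offset
    between the estimated and instructed grasping positions.
    Hypothetical sketch of the motion-modification idea."""
    offset = np.asarray(estimated_grasp) - np.asarray(instructed_grasp)
    # Broadcasting applies the same 3-D offset to every waypoint.
    return np.asarray(instructed_traj) + offset

# Example for one hand: instructed trajectory ending at the instructed grasp
traj = np.array([[0.0, 0.0, 0.0],
                 [0.1, 0.0, 0.0],
                 [0.2, 0.0, 0.0]])
modified = modify_instructed_motion(traj,
                                    instructed_grasp=[0.2, 0.0, 0.0],
                                    estimated_grasp=[0.25, 0.05, 0.0])
# The modified trajectory now ends at the estimated grasping position.
```

In a dual-arm setting, the same correction would presumably be applied to each arm with its own estimated grasping position.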