Host: The Japan Society of Mechanical Engineers
Name : [in Japanese]
Date : May 29, 2024 - June 01, 2024
Improving the efficiency of mobile manipulation tasks is a long-standing goal; one approach, pursued in this study, is to exploit mobile grasping. This study first simplifies mobile grasping into two grasp action primitives and a moving action primitive, and then develops three fully convolutional network (FCN) models that predict a static grasp primitive, a dynamic grasp primitive, and the residual error of the moving velocity from visual observation. Developing multiple task-specific FCN models enables the learning of mobile grasping for objects of various shapes at different mobile manipulator velocities. Experiments on mobile grasping of variously shaped household objects using the mobile manipulator HSR at different moving velocities demonstrated that the proposed method outperforms comparative methods in both grasping performance and pick-and-place efficiency.
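The primitive-selection idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stand-in `fake_fcn` function, the score-map shapes, and the argmax selection rule are all assumptions; in the actual study each score map would come from a trained task-specific FCN.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_fcn(observation):
    """Stand-in for a task-specific FCN (hypothetical): maps an HxWxC
    visual observation to an HxW per-pixel score map."""
    h, w, _ = observation.shape
    return rng.random((h, w))

def select_primitive(observation):
    """Choose between the static and dynamic grasp primitives by comparing
    their best per-pixel scores, and read a moving-velocity correction
    at the chosen grasp pixel."""
    static_scores = fake_fcn(observation)   # static grasp primitive model
    dynamic_scores = fake_fcn(observation)  # dynamic grasp primitive model
    residual_v = fake_fcn(observation)      # moving-velocity residual model

    # Pick the primitive whose best pixel scores highest.
    if static_scores.max() >= dynamic_scores.max():
        primitive, scores = "static_grasp", static_scores
    else:
        primitive, scores = "dynamic_grasp", dynamic_scores

    # Grasp at the argmax pixel; apply the velocity correction predicted there.
    y, x = np.unravel_index(scores.argmax(), scores.shape)
    return primitive, (int(y), int(x)), float(residual_v[y, x])

obs = rng.random((64, 64, 4))  # e.g. an RGB-D observation
primitive, pixel, dv = select_primitive(obs)
```

The split into separate task-specific models, as the abstract notes, is what lets each primitive be learned independently for different object shapes and manipulator velocities.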