The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2024
Session ID : 1P1-H04

Self-Supervised Learning of Vision-Based Mobile Grasping
Eiki NAGATA*, Takuya KIYOKAWA, Yoshihisa TSURUMINE, Yuhwan KWON, Takamitsu MATSUBARA
Abstract

Improving the efficiency of mobile manipulation tasks remains an important goal, and one way to achieve it, pursued in this study, is to exploit mobile grasping. This study first simplifies mobile grasping into two types of grasp action primitives and a moving action primitive, and then develops three fully convolutional network (FCN) models that predict a static grasp primitive, a dynamic grasp primitive, and the residual error of the moving velocity from visual observation. Developing multiple task-specific FCN models enables learning mobile grasping for objects of various shapes at different mobile manipulator velocities. Our experiments on mobile grasping of variously shaped household objects with the mobile manipulator HSR at different moving velocities demonstrated that the proposed method outperforms comparative methods in grasping performance and pick-and-place efficiency.
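The abstract describes the architecture only at a high level: three task-specific FCNs that each map a visual observation to a prediction for one primitive. The sketch below is a minimal illustration of that decomposition, not the authors' implementation; the input format (RGB-D observation), image size, channel counts, and layer depths are assumptions for demonstration.

```python
# Minimal sketch (assumed details, not the paper's network): one task-specific
# FCN mapping a visual observation to a dense, pixel-wise score map for a
# single primitive; three such models cover the static grasp primitive, the
# dynamic grasp primitive, and the moving-velocity residual.
import torch
import torch.nn as nn


class PrimitiveFCN(nn.Module):
    def __init__(self, in_channels: int = 4, out_channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output keeps the input's spatial resolution; each pixel's score
        # rates executing the primitive at that image location.
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    # Three independent task-specific models, mirroring the decomposition
    # stated in the abstract (names here are illustrative).
    models = {
        "static_grasp": PrimitiveFCN(),
        "dynamic_grasp": PrimitiveFCN(),
        "velocity_residual": PrimitiveFCN(),
    }
    obs = torch.randn(1, 4, 224, 224)  # assumed RGB-D observation shape
    outputs = {name: m(obs) for name, m in models.items()}
    print({name: tuple(o.shape) for name, o in outputs.items()})
```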

© 2024 The Japan Society of Mechanical Engineers