The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2021
Session ID : 1A1-F09

Self-Supervised Tableware Grasp Learning from Visual Information and Grasp Pose Selection Considering Grasp Redundancy
*Shumpei WAKABAYASHI, Shingo KITAGAWA, Kento KAWAHARAZUKA, Takayuki MUROOKA, Kei OKADA, Masayuki INABA
Abstract

In research on object grasping, systems that produce consistent results from recognition through to grasp motion have been actively studied. Typically, a single grasp point is determined even though an object such as tableware offers redundant feasible grasps. In addition, it is difficult for such systems to reflect constraints imposed by the robot's hardware or the surrounding environment. In this study, we propose a neural network that modifies the grasp pose around an initial pose based on visual information and actual grasp trials. Our system can autonomously collect supervised data, so the robot can learn by itself. Because the search is narrowed to the object's edge points, the real robot acquires grasping ability efficiently in fewer trials. As a result, the robot can grasp unknown objects and flexibly change its grasp position, since constraints on the input are easy to impose.
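The abstract's key efficiency idea is restricting the grasp-point search to the object's edge points. The paper's implementation is not available here, so the following is only a minimal illustrative sketch, assuming the object is given as a binary mask image; the function name and the 4-neighbour edge test are our own choices, not the authors' method.

```python
import numpy as np

def edge_grasp_candidates(mask: np.ndarray) -> np.ndarray:
    """Return (row, col) pixels on the boundary of a binary object mask.

    Hypothetical illustration: restricting the grasp-point search to
    edge pixels shrinks the candidate set the robot must evaluate.
    """
    mask = mask.astype(bool)
    # A pixel is interior if all four of its 4-neighbours are also
    # inside the object; edge pixels are object pixels that are not interior.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    edges = mask & ~interior
    return np.argwhere(edges)

# Toy example: a 5x5 square object has 25 pixels but only 16 boundary
# pixels, so the candidate set shrinks from 25 to 16.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
print(len(edge_grasp_candidates(mask)))  # 16
```

In practice such candidates would seed the initial grasp pose that the proposed network then refines from visual input and trial outcomes.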

© 2021 The Japan Society of Mechanical Engineers