Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 3O1-OS-16b-03

Improving Accuracy of Flexible Object Manipulation through Depth-Aware Motion Generation Model Handling Multimodal Information
*Sachiya FUJITA, Hiroshi ITO, Hideyuki ICHIWARA, Namiko SAITO, Ayuna KUBO, Tetsuya OGATA, Shigeki SUGANO
Abstract

In this study, we achieve recognition and motion planning for flexible objects that take depth information into account. We propose a motion generation model that incorporates a mechanism expressing disparity as the difference in the positions of attention points between the left and right stereo images into a model that suppresses learning bias across modalities, and we add tactile information to resolve occlusion and improve motion accuracy. To validate the effectiveness of the proposed approach, we adopt the task of hanging a suit on a hanger. Because the depth positions of the suit's shape and of the hem to be grasped change, accurate motion generation with depth awareness is crucial. We conducted experiments with the dual-armed, multi-degree-of-freedom robot Dry-AIREC, comparing several model variants (monocular/stereo vision, with/without tactile sensing), and confirmed that tactile and disparity information contribute to depth understanding and to improved motion accuracy.
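As a rough illustration of the disparity idea described in the abstract, the sketch below shows how per-channel spatial attention points can be extracted from left and right camera features and how their horizontal offset can serve as a disparity cue that is concatenated with tactile and joint-state features before a recurrent motion generator. This is not the authors' implementation; the module names, dimensions, and the use of PyTorch with an LSTM cell are all assumptions made for illustration.

```python
# Hypothetical sketch of a stereo attention-point / disparity feature extractor
# fused with tactile input for motion generation. Module names, sizes, and the
# LSTM-based generator are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def spatial_softmax_points(feat):
    """Return (B, C, 2) expected (x, y) attention points, one per feature channel."""
    b, c, h, w = feat.shape
    probs = F.softmax(feat.view(b, c, h * w), dim=-1).view(b, c, h, w)
    xs = torch.linspace(-1.0, 1.0, w, device=feat.device)
    ys = torch.linspace(-1.0, 1.0, h, device=feat.device)
    ex = (probs.sum(dim=2) * xs).sum(dim=-1)  # expected x coordinate per channel
    ey = (probs.sum(dim=3) * ys).sum(dim=-1)  # expected y coordinate per channel
    return torch.stack([ex, ey], dim=-1)


class StereoTactileMotionModel(nn.Module):
    def __init__(self, n_points=16, tactile_dim=32, joint_dim=14, hidden=128):
        super().__init__()
        # Shared CNN backbone applied to both stereo images.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, n_points, 5, stride=2, padding=2),
        )
        feat_dim = n_points * 2 * 2 + n_points + tactile_dim + joint_dim
        self.rnn = nn.LSTMCell(feat_dim, hidden)
        self.head = nn.Linear(hidden, joint_dim)

    def forward(self, img_l, img_r, tactile, joints, state=None):
        pts_l = spatial_softmax_points(self.backbone(img_l))  # (B, N, 2)
        pts_r = spatial_softmax_points(self.backbone(img_r))  # (B, N, 2)
        # Horizontal offset between corresponding attention points acts as a
        # disparity cue, i.e. depth is expressed as a left/right position shift.
        disparity = pts_l[..., 0] - pts_r[..., 0]              # (B, N)
        x = torch.cat([pts_l.flatten(1), pts_r.flatten(1),
                       disparity, tactile, joints], dim=-1)
        h, c = self.rnn(x, state)
        return self.head(h), (h, c)  # predicted next joint command and RNN state
```

In this reading, tactile input enters the fused feature vector alongside the visual attention points, so the recurrent generator can keep producing accurate commands even when the grasped hem is occluded from one or both cameras.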

© 2024 The Japanese Society for Artificial Intelligence