The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2023
Session ID : 2A1-H05

Motion Generation of Multiple Tasks by Sharing the Object Features
*Ayuna Kubo, Namiko Saito, Kanata Suzuki, Hiroshi Ito, Tetsuya Ogata, Shigeki Sugano
Abstract

To conduct daily chores consisting of multiple tasks, it is effective to recognize the target objects and share their feature information among the tasks being executed. Previous research on robot motion generation for sequential tasks required identifying object features anew for each task. We propose deep learning models whose latent spaces, which represent object features, can be shared and carried over between tasks. With our models, a robot acquires the features during the first task and then utilizes that information in later tasks, which eliminates re-training and enables efficient motion generation. We evaluated the models on cooking tasks: pouring and stirring pasta and soup. We verified that the models could acquire ingredient features and that the robot could generate both pouring and stirring motions.
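As a minimal sketch of the idea (not the authors' actual architecture): a single encoder maps sensory input to a latent object-feature vector, and per-task motion generators consume that shared latent, so features extracted during the pouring task can be handed directly to the stirring task without re-identification. The PyTorch setup, layer sizes, LSTM-based generators, and variable names below are illustrative assumptions.

    # Illustrative sketch only: two task models sharing a latent
    # object-feature space, so features acquired in the first task
    # (pouring) can be reused in the second (stirring).
    import torch
    import torch.nn as nn

    class ObjectFeatureEncoder(nn.Module):
        """Maps a sensory embedding to a shared latent object feature."""
        def __init__(self, input_dim=128, latent_dim=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(input_dim, 64), nn.ReLU(),
                nn.Linear(64, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class MotionGenerator(nn.Module):
        """Predicts the next joint state from the current state plus the shared latent."""
        def __init__(self, state_dim=8, latent_dim=16):
            super().__init__()
            self.rnn = nn.LSTMCell(state_dim + latent_dim, 64)
            self.out = nn.Linear(64, state_dim)

        def forward(self, state, latent, hc=None):
            h, c = self.rnn(torch.cat([state, latent], dim=-1), hc)
            return self.out(h), (h, c)

    # One shared encoder; one motion generator per task.
    encoder = ObjectFeatureEncoder()
    pour_model = MotionGenerator()
    stir_model = MotionGenerator()

    # Task 1 (pouring): encode the ingredient observation once.
    obs = torch.randn(1, 128)    # placeholder sensory embedding
    state = torch.zeros(1, 8)    # placeholder joint state
    latent = encoder(obs)        # object features acquired during pouring
    next_state, hc = pour_model(state, latent)

    # Task 2 (stirring): reuse the same latent instead of re-identifying the object.
    next_state, hc = stir_model(state, latent.detach())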

© 2023 The Japan Society of Mechanical Engineers