Organizer: The Japan Society of Mechanical Engineers (JSME)
Conference: ROBOMECH 2023 (Robotics and Mechatronics Conference 2023)
Dates: 2023/06/28 - 2023/07/01
To perform daily chores that consist of multiple tasks, it is effective for a robot to recognize the target objects and share their feature information across the tasks being executed. In previous research on robot motion generation for sequential tasks, object features had to be identified anew for each task. We propose deep learning models whose latent spaces, which represent object features, can be shared and carried over between tasks. With our models, a robot acquires the features during the first task and reuses them in subsequent tasks, which eliminates re-training and enables efficient motion generation. We evaluated the models on cooking tasks: pouring and stirring pasta and soup. We verified that the models could acquire ingredient features and that the robot could generate both pouring and stirring motions.
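The core idea of sharing and carrying over a latent feature representation between tasks can be illustrated with a minimal sketch. Everything below is hypothetical: the class names, dimensions, and linear maps are stand-ins for the paper's trained deep learning models, chosen only to show the data flow in which the latent acquired during the first task is reused by a later task without re-encoding or re-training.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureEncoder:
    """Stand-in for the learned feature extractor (hypothetical, untrained).

    In the paper's setting this would be a deep model whose latent space
    represents ingredient features; here a fixed linear map suffices to
    illustrate the interface.
    """
    def __init__(self, obs_dim: int, latent_dim: int):
        self.W = rng.standard_normal((latent_dim, obs_dim)) * 0.1

    def __call__(self, obs: np.ndarray) -> np.ndarray:
        return np.tanh(self.W @ obs)

class MotionGenerator:
    """Stand-in for a task-specific motion model conditioned on the latent."""
    def __init__(self, latent_dim: int, action_dim: int):
        self.W = rng.standard_normal((action_dim, latent_dim)) * 0.1

    def __call__(self, z: np.ndarray) -> np.ndarray:
        return self.W @ z

obs_dim, latent_dim, action_dim = 16, 4, 6
encoder = FeatureEncoder(obs_dim, latent_dim)
pouring = MotionGenerator(latent_dim, action_dim)
stirring = MotionGenerator(latent_dim, action_dim)

obs = rng.standard_normal(obs_dim)  # observation of the ingredient
z = encoder(obs)                    # latent acquired during the first task (pouring)
a_pour = pouring(z)                 # pouring motion conditioned on z
a_stir = stirring(z)                # stirring reuses the same z; no re-encoding needed

print(z.shape, a_pour.shape, a_stir.shape)
```

The key design point mirrored here is that only the motion generators are task-specific; the latent `z` is computed once and handed from the first task to the later one.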