Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 28, 2023 - July 01, 2023
When performing daily chores that consist of multiple tasks, it is effective to recognize the target objects and share their feature information across the tasks being executed. In previous research on robot motion generation for sequential tasks, object features had to be identified separately for each task. We propose deep learning models whose latent spaces, which represent object features, can be shared and carried over between tasks. With our models, a robot can acquire object features during the first task and then utilize that information in subsequent tasks, which eliminates re-training and enables efficient motion generation. We evaluated the models on cooking tasks: pouring and stirring pasta and soup. We verified that the models could acquire ingredient features and that the robot could generate both pouring and stirring motions.
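The following is a minimal sketch (in PyTorch, with hypothetical module and parameter names not taken from the paper) of the idea described above: an encoder extracts an object-feature latent from observations during the first task, and task-specific motion decoders reuse that same latent, so the later task does not need to re-identify the ingredient features.

```python
# Hypothetical sketch of a shared object-feature latent reused across tasks.
# Module names, dimensions, and architecture choices are assumptions for
# illustration only, not the models used in the paper.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Encodes visual observations into a shared object-feature latent z."""
    def __init__(self, img_dim=64, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, img_feat):           # img_feat: (batch, img_dim)
        return self.net(img_feat)          # z: (batch, latent_dim)

class MotionDecoder(nn.Module):
    """Predicts the next joint command from the current joints and the shared latent z."""
    def __init__(self, joint_dim=7, latent_dim=8, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTMCell(joint_dim + latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, joint_dim)

    def forward(self, joints, z, state=None):
        h, c = self.rnn(torch.cat([joints, z], dim=-1), state)
        return self.out(h), (h, c)

# Usage: acquire z during the first task (pouring), then carry it over to stirring.
encoder = FeatureEncoder()
pouring, stirring = MotionDecoder(), MotionDecoder()

img_feat = torch.randn(1, 64)              # visual features observed while pouring
joints = torch.zeros(1, 7)                 # current joint angles
z = encoder(img_feat)                       # ingredient-feature latent, acquired once

cmd, state = pouring(joints, z)             # first task uses the latent
cmd, state = stirring(joints, z.detach())   # later task takes over the same latent
```

Under this assumed structure, only the encoder needs to observe the ingredient; the second decoder consumes the stored latent directly, which is what allows the re-training step to be omitted.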