Organizer: The Japan Society of Mechanical Engineers (JSME)
Conference: Robotics and Mechatronics Conference 2023 (ROBOMECH 2023)
Dates: 2023/06/28 - 2023/07/01
The variety of daily-life support tasks that robots can perform is increasing. For a robot to perform life-support actions spontaneously, it must be able to recognize the situations present in its operating environment and decide which tasks to perform accordingly. In this paper, building on the situation classification method using a large-scale vision-language model proposed in a previous study, we describe a system in which a person instructs the robot, through a chat interface that can send and receive images and text, to perform a task suited to the situation, after which the robot executes the task automatically. Experimental results show that the system enables the robot to judge the situation and execute the corresponding task without further intervention.
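The overall flow described above (classify the situation from an image, look up the task a person has instructed for that situation, then execute it) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the keyword-based classifier standing in for the vision-language model, and the situation-to-task registry are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    image_id: str   # stand-in for the camera image data
    caption: str    # stand-in for a VLM-generated description of the scene

def classify_situation(obs: Observation) -> str:
    """Hypothetical stand-in for the VLM-based situation classifier:
    here we simply keyword-match the caption a real VLM would produce."""
    keywords = {
        "dishes": "table_not_cleared",
        "trash": "trash_full",
        "laundry": "laundry_unfolded",
    }
    for word, situation in keywords.items():
        if word in obs.caption:
            return situation
    return "nothing_to_do"

# Situation -> task instructions, as a person might register them
# through the chat interface (illustrative entries only).
TASK_REGISTRY = {
    "table_not_cleared": "carry the dishes to the sink",
    "trash_full": "take out the trash",
}

def decide_task(situation: str) -> Optional[str]:
    """Return the instructed task for a situation, or None when no
    instruction has been registered for it."""
    return TASK_REGISTRY.get(situation)

if __name__ == "__main__":
    obs = Observation("img_001", "a table with dirty dishes on it")
    situation = classify_situation(obs)
    print(situation, "->", decide_task(situation))
    # -> table_not_cleared -> carry the dishes to the sink
```

In a full system, `classify_situation` would call the vision-language model on the live camera image, and the chat interface would populate `TASK_REGISTRY` from the images and text exchanged with the user before the robot executes the selected task.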