Proceedings of the JSME Annual Conference on Robotics and Mechatronics (ROBOMECH)
Online ISSN: 2424-3124
Session ID: 1P1-D06

Classification of Living Environments and Robot Task Mapping System Using a Large-Scale Visual-Language Model and a Chat Interface
*Keiki Obinata, Kento Kawaharazuka, Naoaki Kanazawa, Naoya Yamaguchi, Naoto Tsukamoto, Iori Yanokura, Shingo Kitagawa, Kei Okada, Masayuki Inaba

Abstract

The range of daily life-support tasks that robots can perform is increasing. To perform life-support actions spontaneously, a robot must be able to recognize the situations present in the environment it moves through and decide on tasks accordingly. In this paper, building on the situation classification method using a large-scale visual-language model proposed in a previous study, we describe a system in which a person uses a chat interface that can send and receive images and text to instruct the robot which task to perform in each situation, and the robot then executes the task automatically. Experimental results show that the system enables the robot to judge situations and execute the corresponding tasks automatically.
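The system outlined above has three stages: a large-scale visual-language model labels the situation from a camera image, a chat interface lets a person attach a task to each situation label, and the robot executes the mapped task when it recognizes that situation again. The Python sketch below illustrates this flow under stated assumptions; every identifier in it (Observation, classify_situation, TaskMapper, and so on) is hypothetical, and the VLM query is stubbed out, since the abstract does not disclose the authors' implementation.

# Minimal sketch of the situation-classification / task-mapping loop
# described in the abstract. All identifiers are hypothetical
# illustrations, not the authors' implementation; the visual-language
# model query is stubbed with a fixed label.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Observation:
    """A camera frame captured while the robot moves around."""
    image: bytes
    location: str


def classify_situation(obs: Observation) -> str:
    """Stub for the large-scale visual-language model.

    A real system would send obs.image to a VLM and receive a
    free-form situation label such as "dishes left on the table".
    """
    return "dishes left on the table"  # placeholder label


@dataclass
class TaskMapper:
    """Situation -> task table built from instructions sent over chat."""
    table: dict[str, str] = field(default_factory=dict)

    def register_instruction(self, situation: str, task: str) -> None:
        # Called when the user replies to a situation report in the
        # chat interface, e.g. "when you see this, clear the table".
        self.table[situation] = task

    def decide_task(self, obs: Observation) -> Optional[str]:
        # Classify the current scene and look up the mapped task;
        # returns None when no instruction covers this situation.
        return self.table.get(classify_situation(obs))


if __name__ == "__main__":
    mapper = TaskMapper()
    mapper.register_instruction("dishes left on the table", "clear the table")
    obs = Observation(image=b"", location="kitchen")
    task = mapper.decide_task(obs)
    if task is not None:
        print(f"Executing task: {task}")  # robot-side execution goes here

In the paper's setting, the person would first receive a situation report (image plus text) through the chat interface and reply with the desired task; a lookup like decide_task then lets the robot act autonomously the next time the same situation is recognized.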

© 2023 The Japan Society of Mechanical Engineers