Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
M3IL: Multi-Modal Meta-Imitation Learning
Xin Zhang, Tatsuya Matsushima, Yutaka Matsuo, Yusuke Iwasawa

2023 Volume 38 Issue 2 Pages A-LB3_1-10

Abstract

Imitation Learning (IL) is a promising route to intelligent robots, since it allows users to teach a robot various tasks easily. In particular, Few-Shot Imitation Learning (FSIL) aims to infer and adapt quickly to unseen tasks from a small amount of data. Although FSIL needs only a few demonstrations, the high cost of collecting demonstrations remains a critical problem in IL: every time we want to teach the robot a new task, we must execute the task ourselves to specify it. Inspired by the fact that humans specify tasks with language instructions without executing them, we propose a multi-modal FSIL setting in this work. The model leverages image and language information during training, and uses either both modalities or language alone at test time. We also propose Multi-Modal Meta-Imitation Learning (M3IL), which can infer the task from image or language information alone. M3IL outperforms the baseline in both the standard and the proposed settings. These results show the effectiveness of M3IL and the importance of language instructions in the FSIL setting.
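The abstract does not give implementation details, so the following is only a rough illustrative sketch of the multi-modal task-embedding idea it describes: encode a demonstration image and a language instruction into a shared task-embedding space, so that a policy conditioned on that embedding can be driven by either modality at test time. All module names (ImageEncoder, LangEncoder, Policy), dimensions, and the PyTorch framing are assumptions for illustration, not the authors' published architecture.

# Illustrative sketch only; not the architecture from the paper.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Maps a demonstration image to a task embedding (hypothetical module)."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )
    def forward(self, img):
        return self.net(img)

class LangEncoder(nn.Module):
    """Maps a tokenized instruction into the same task-embedding space."""
    def __init__(self, vocab_size=1000, emb_dim=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, 32)
        self.rnn = nn.GRU(32, emb_dim, batch_first=True)
    def forward(self, tokens):
        _, h = self.rnn(self.tok(tokens))
        return h[-1]  # final hidden state as the task embedding

class Policy(nn.Module):
    """Predicts an action from the current observation and a task embedding."""
    def __init__(self, obs_dim=10, emb_dim=64, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + emb_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )
    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

# At test time the task embedding z can come from either modality, so a new
# task can be specified by language alone, with no demonstration executed.
img_enc, lang_enc, policy = ImageEncoder(), LangEncoder(), Policy()
obs = torch.randn(1, 10)
z_img = img_enc(torch.randn(1, 3, 64, 64))          # embedding from a demo image
z_lang = lang_enc(torch.randint(0, 1000, (1, 8)))   # embedding from an instruction
action = policy(obs, z_lang)                        # language-only inference

For the two embeddings to be interchangeable at test time, training would need to align them, for example by a regression or contrastive loss pulling z_img and z_lang for the same task together; the abstract does not specify which mechanism the paper uses.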

© The Japanese Society for Artificial Intelligence 2023