Proceedings of the Annual Conference of JSAI, 37th (2023)
Online ISSN: 2758-7347
Session ID: 2Q1-OS-27a-02

Offline Model-Based Imitation Learning with Entropy Regularization of Model and Policy
*Eiji UCHIBE
Abstract

Model-Based Entropy-Regularized Imitation Learning (MB-ERIL) is an online model-based generative adversarial imitation learning method that introduces entropy regularization of the policy and the state transition model. Online-MB-ERIL learns the policy and the model from expert data, the learner's data, and generated data. Costly interactions with the actual environment are needed to obtain the first two datasets, while the policy and the model quickly generate the last one. This report considers an offline learning setting that does not use the second dataset, which would otherwise be obtained from interactions between the policy and the actual environment. We then propose Offline-MB-ERIL, which introduces the idea of learning from positive and unlabeled (PU) data. Given suboptimal demonstrations, Offline-MB-ERIL can efficiently recover the policy and the model by treating them as unlabeled data. Through a vision-based arm-reaching task, we show that Offline-MB-ERIL makes better use of suboptimal data than Online-MB-ERIL.
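The abstract mentions applying positive and unlabeled (PU) data learning to suboptimal demonstrations. As a rough illustration only, not the paper's implementation, the sketch below trains a discriminator with a non-negative PU risk estimator, treating expert samples as positive and suboptimal samples as unlabeled; the class prior pi_p, the logistic loss, and the toy linear discriminator are assumptions made for this example.

```python
# Illustrative sketch (assumed form, not the method from the paper):
# a discriminator trained with a non-negative PU risk estimator, where
# expert data are positives and suboptimal data are unlabeled.
import torch
import torch.nn as nn

def nn_pu_loss(d_pos, d_unl, pi_p=0.5):
    """Non-negative PU risk on discriminator logits.

    d_pos: logits on expert (positive) samples
    d_unl: logits on unlabeled samples (e.g., suboptimal demonstrations)
    pi_p:  assumed prior probability that an unlabeled sample is expert-like
    """
    softplus = nn.functional.softplus
    # logistic loss for the positive class on positive samples: log(1 + exp(-z))
    r_pos = softplus(-d_pos).mean()
    # negative-class risk estimated from unlabeled data minus positive data
    r_neg = softplus(d_unl).mean() - pi_p * softplus(d_pos).mean()
    # clamp the estimated negative risk at zero (non-negative PU correction)
    return pi_p * r_pos + torch.clamp(r_neg, min=0.0)

# toy usage with a linear discriminator on 8-dimensional features
disc = nn.Linear(8, 1)
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
x_expert, x_unlabeled = torch.randn(64, 8), torch.randn(64, 8)
loss = nn_pu_loss(disc(x_expert).squeeze(-1), disc(x_unlabeled).squeeze(-1))
opt.zero_grad()
loss.backward()
opt.step()
```

Clamping the estimated negative risk at zero is the standard non-negative PU correction; it keeps the risk estimate from going negative when the unlabeled set is small or when the assumed class prior is inaccurate.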

© 2023 The Japanese Society for Artificial Intelligence