Proceedings of the 33rd Annual Conference of JSAI (2019)
Online ISSN: 2758-7347
Session ID: 1I3-J-2-03

Imitation learning based on entropy-regularized reinforcement learning
*Eiji UCHIBE
Abstract

This paper proposes Entropy-Regularized Imitation Learning (ERIL), which combines forward and inverse reinforcement learning (RL). ERIL is built on the soft Bellman optimality equation, in which the reward function is augmented by the entropy of the learning policy and the Kullback-Leibler (KL) divergence between the learning policy and a baseline policy. We show that inverse RL can be interpreted as estimating the log-ratio between two policies, and that this log-ratio is efficiently estimated by binary logistic regression. Forward RL is implemented as a variant of Dynamic Policy Programming, and the overall algorithm is interpreted as minimizing the KL divergence between the learning policy and the estimated expert policy. Experimental results on MuJoCo-simulated environments show that ERIL is more sample-efficient than previous methods such as GAIL and AIRL because the forward RL step of ERIL is off-policy.
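
Written in assumed notation (alpha and beta as the entropy and KL weights, b as the baseline policy, gamma as the discount factor; the symbols are not taken verbatim from the paper), the augmented soft Bellman optimality equation described above has the form

V(s) = \max_{\pi} \; \mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\left[ r(s,a) - \alpha \ln \pi(a \mid s) - \beta \ln \frac{\pi(a \mid s)}{b(a \mid s)} + \gamma \, \mathbb{E}_{s' \sim p(\cdot \mid s,a)}\left[ V(s') \right] \right].

The log-ratio estimation by binary logistic regression admits a minimal sketch (an illustration of the standard density-ratio trick, not the authors' implementation): a classifier trained to separate expert from learner state-action samples has a logit that approximates the log density ratio when the two sample sets are balanced.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_log_ratio(expert_sa, learner_sa):
    # expert_sa, learner_sa: (N, d) arrays of concatenated (state, action) features.
    # Label 1 = expert sample, 0 = learner sample.
    X = np.vstack([expert_sa, learner_sa])
    y = np.concatenate([np.ones(len(expert_sa)), np.zeros(len(learner_sa))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # The decision function is the logit, i.e. the estimated log-ratio
    # log p_expert(s, a) / p_learner(s, a), up to the class prior (zero when balanced).
    return lambda sa: clf.decision_function(np.atleast_2d(sa))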

© 2019 The Japanese Society for Artificial Intelligence