Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
Session ID : 2K6-ES-2-05

Combining Local and Global Exploration via Intrinsic Rewards
*Nicolas BOUGIE, Ryutaro ICHISE
Abstract

Reinforcement learning methods rely on well-designed rewards provided by the environment. However, rewards are often sparse in the real world, which entails that exploration remains one of the key challenges of reinforcement learning. While prior work on intrinsic motivation holds promise for better local exploration, discovering global exploration strategies is beyond the reach of current methods. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our technique decomposes the exploration bonus into a fast reward that deals with local exploration and a slow reward that incentivizes long-time-horizon exploration. We formulate curiosity as the error in an agent's ability to reconstruct the observations given their contexts. We further propose to balance local and high-level strategies by estimating state diversity. Experimental results show that this long-time-horizon exploration bonus enables our agents to outperform prior work in most tasks, including Minigrid and Atari games.
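
The fast/slow decomposition described in the abstract can be illustrated with a small sketch. The Python code below is not the authors' released implementation; the class name DualIntrinsicReward, the learning rates, and the distance-based diversity proxy are all assumptions made for illustration. It shows how a quickly adapting reconstruction-error bonus (local curiosity) and a slowly adapting one (long-horizon curiosity) might be combined, with a state-diversity estimate deciding how much weight the long-horizon term receives.

import numpy as np

class DualIntrinsicReward:
    # Hypothetical sketch: a fast (local) and a slow (long-horizon) curiosity
    # bonus, mixed according to a crude state-diversity estimate.

    def __init__(self, obs_dim, fast_lr=1e-2, slow_lr=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        # Two linear models that reconstruct the observation from its context
        # (here simply the previous observation). The fast model adapts
        # quickly, so its error tracks local novelty; the slow model adapts
        # slowly, so its error keeps rewarding regions the agent has not
        # mastered over a long horizon.
        self.fast_W = rng.normal(scale=0.1, size=(obs_dim, obs_dim))
        self.slow_W = rng.normal(scale=0.1, size=(obs_dim, obs_dim))
        self.fast_lr, self.slow_lr = fast_lr, slow_lr
        self.visited = []  # recent observations, used by the diversity proxy

    def _step(self, W, lr, context, obs):
        # One SGD step on the squared reconstruction error; the error itself
        # serves as the curiosity signal.
        err = obs - W @ context
        W += lr * np.outer(err, context)
        return float(err @ err)

    def _diversity(self, obs, k=50):
        # State-diversity proxy: mean distance from obs to recently visited
        # observations, squashed to (0, 1). Higher diversity shifts weight
        # toward the slow, long-horizon bonus.
        if not self.visited:
            return 1.0
        recent = np.stack(self.visited[-k:])
        return float(np.tanh(np.linalg.norm(recent - obs, axis=1).mean()))

    def bonus(self, context, obs):
        r_fast = self._step(self.fast_W, self.fast_lr, context, obs)
        r_slow = self._step(self.slow_W, self.slow_lr, context, obs)
        beta = self._diversity(obs)
        self.visited.append(obs.copy())
        return (1.0 - beta) * r_fast + beta * r_slow

# Usage: the intrinsic bonus is added to the extrinsic reward at each step.
model = DualIntrinsicReward(obs_dim=8)
prev_obs, obs = np.zeros(8), np.ones(8)
r_intrinsic = model.bonus(prev_obs, obs)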

© 2020 The Japanese Society for Artificial Intelligence