Host: The Japanese Society for Artificial Intelligence
Name: 34th Annual Conference, 2020
Number: 34
Location: Online
Date: June 09, 2020 - June 12, 2020
Reinforcement learning methods rely on well-designed rewards provided by the environment. However, rewards are often sparse in the real world, so exploration remains one of the key challenges of reinforcement learning. While prior work on intrinsic motivation holds promise for better local exploration, discovering global exploration strategies is beyond the reach of current methods. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our technique decomposes the exploration bonus into a fast reward that deals with local exploration and a slow reward that incentivizes long time-horizon exploration. We formulate curiosity as the error in an agent's ability to reconstruct observations given their contexts. We further propose to balance local and high-level strategies by estimating state diversity. Experimental results show that this long time-horizon exploration bonus enables our agents to outperform prior work on most tasks, including Minigrid and Atari games.
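The abstract does not give implementation details, but the following minimal sketch illustrates the general idea of a two-timescale intrinsic bonus: a fast reward driven by reconstruction error over a short context, a slow reward over a long context, and a state-diversity estimate that balances the two. All class and function names, the random-projection "reconstruction" stand-ins, and the mixing rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a fast/slow intrinsic reward (assumed structure, not the paper's code).
import numpy as np


class TwoTimescaleBonus:
    """Combines a fast (local) and a slow (long-horizon) curiosity signal."""

    def __init__(self, obs_dim, ctx_len_fast=4, ctx_len_slow=64, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projections stand in for the learned, context-conditioned
        # reconstruction models described in the abstract (an assumption).
        self.fast_proj = rng.normal(size=(obs_dim, obs_dim)) / np.sqrt(obs_dim)
        self.slow_proj = rng.normal(size=(obs_dim, obs_dim)) / np.sqrt(obs_dim)
        self.fast_ctx = []   # short context window -> local exploration
        self.slow_ctx = []   # long context window  -> high-level exploration
        self.ctx_len_fast = ctx_len_fast
        self.ctx_len_slow = ctx_len_slow

    def _recon_error(self, obs, ctx, proj):
        # Curiosity as the error in reconstructing the observation from its
        # context: here, a fixed linear map applied to the context mean.
        if not ctx:
            return 0.0
        recon = np.mean(ctx, axis=0) @ proj
        return float(np.linalg.norm(obs - recon))

    def _diversity(self, ctx):
        # Crude state-diversity estimate: average spread of states in the
        # long context window (an assumed proxy for the balancing mechanism).
        if len(ctx) < 2:
            return 0.0
        stack = np.stack(ctx)
        return float(np.mean(np.linalg.norm(stack - stack.mean(axis=0), axis=1)))

    def bonus(self, obs):
        r_fast = self._recon_error(obs, self.fast_ctx, self.fast_proj)
        r_slow = self._recon_error(obs, self.slow_ctx, self.slow_proj)
        # Higher diversity shifts weight toward the slow, long-horizon bonus.
        w = self._diversity(self.slow_ctx)
        w = w / (1.0 + w)  # squash to [0, 1)
        self.fast_ctx = (self.fast_ctx + [obs])[-self.ctx_len_fast:]
        self.slow_ctx = (self.slow_ctx + [obs])[-self.ctx_len_slow:]
        return (1.0 - w) * r_fast + w * r_slow
```

In a training loop, the returned bonus would typically be added to the environment reward before the policy update, so that the agent optimizes the extrinsic return plus the combined local and long-horizon exploration signal.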