Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
Session ID: 2K6-ES-2-05

Combining Local and Global Exploration via Intrinsic Rewards
*Nicolas BOUGIE, Ryutaro ICHISE

Abstract

Reinforcement learning methods rely on well-designed rewards provided by the environment. However, rewards are often sparse in the real world, which means that exploration remains one of the key challenges of reinforcement learning. While prior work on intrinsic motivation holds promise for better local exploration, discovering global exploration strategies is beyond the reach of current methods. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our technique decomposes the exploration bonus into a fast reward that deals with local exploration and a slow reward that incentivizes long-time-horizon exploration. We formulate curiosity as the error in an agent's ability to reconstruct the observations given their contexts. We further propose to balance local and high-level strategies by estimating state diversity. Experimental results show that this long-time-horizon exploration bonus enables our agents to outperform prior work on most tasks, including Minigrid and Atari games.
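The abstract only sketches the mechanism, so below is a minimal, illustrative Python sketch of one way a reconstruction-error curiosity signal and a diversity-weighted blend of fast (local) and slow (long-horizon) bonuses could be combined; the function names, the weighting scheme, and the parameter values are assumptions for illustration, not the paper's implementation.

import numpy as np

def reconstruction_curiosity(obs, reconstructed_obs):
    # Curiosity as the error in reconstructing an observation from its
    # context; the reconstruction itself would come from a learned model
    # (not shown here).
    return float(np.mean((obs - reconstructed_obs) ** 2))

def combined_intrinsic_reward(fast_bonus, slow_bonus, state_diversity,
                              beta_fast=1.0, beta_slow=1.0):
    # Blend the fast (local) and slow (long-horizon) exploration bonuses.
    # Weighting by estimated state diversity is an assumed convention: the
    # abstract only states that local and high-level strategies are
    # balanced using a state-diversity estimate.
    w = float(np.clip(state_diversity, 0.0, 1.0))
    return beta_fast * (1.0 - w) * fast_bonus + beta_slow * w * slow_bonus

# Illustrative single-step total reward fed to the agent.
extrinsic = 0.0                                            # sparse environment reward
fast = reconstruction_curiosity(np.ones(4), np.zeros(4))   # local novelty signal
slow = 0.3                                                 # assumed long-horizon bonus
r_total = extrinsic + combined_intrinsic_reward(fast, slow, state_diversity=0.7)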

© 2020 The Japanese Society for Artificial Intelligence