Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
Session ID : 4G2-GS-7-03

Autonomous adjustment of exploration in weakly supervised reinforcement learning
*Kuniaki SATORI, Takumi KAMIYA, Tatsuji TAKAHASHI
Abstract

Optimization in vast search spaces may be intractable, especially in reinforcement learning and when the environment is real. Humans, on the other hand, seem to balance exploration and exploitation well in many tasks, and one reason is that they satisfice rather than optimize: they stop exploring once a certain (aspiration) level is satisfied. Takahashi and others have introduced the risk-sensitive satisficing (RS) model, which realizes efficient satisficing in bandit problems. To enable the application of RS to general reinforcement learning tasks, the global reference conversion (GRC) was introduced. GRC allocates local aspiration levels to individual states from the global aspiration level, based on the difference between the global goal and the actual returns. However, its performance depends sensitively on a scale parameter. In this paper, we propose a new algorithm that autonomously adjusts the allocation and evaluates the current degree of satisfaction accurately.
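The RS action-selection rule referenced in the abstract can be sketched for a K-armed bandit as follows. This is a minimal illustration based on published descriptions of the RS model, not the authors' implementation: each action i is scored as RS_i = (n_i / N)(V_i − ℵ), where V_i is the empirical mean reward of action i, n_i its trial count, N the total count, and ℵ the aspiration level. The class name, variable names, and the small initialization constant are assumptions made for the sketch.

```python
class RSBandit:
    """Risk-sensitive satisficing (RS) action selection for a K-armed bandit.

    Scores each arm as RS_i = (n_i / N) * (V_i - aleph) and picks the
    argmax. When an arm's mean exceeds the aspiration level aleph, more
    trials raise its score (exploitation); when all arms fall below aleph,
    heavily tried arms are penalized, pushing trials toward less-explored
    arms (exploration). Illustrative sketch only.
    """

    def __init__(self, n_arms, aleph):
        self.aleph = aleph                 # aspiration level (global reference)
        self.counts = [1e-6] * n_arms      # trial counts; tiny init avoids division by zero
        self.values = [0.0] * n_arms       # empirical mean reward per arm

    def select(self):
        total = sum(self.counts)
        rs = [(n / total) * (v - self.aleph)
              for n, v in zip(self.counts, self.values)]
        return max(range(len(rs)), key=rs.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean update of the arm's empirical value.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

With deterministic rewards 0.2 and 0.8 and an aspiration level of 0.5, the agent quickly settles on the satisfying arm and stops exploring the unsatisfying one, which is the satisficing behavior the abstract describes.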

© 2020 The Japanese Society for Artificial Intelligence