Host: The Japanese Society for Artificial Intelligence
Name: 34th Annual Conference, 2020
Number: 34
Location: Online
Date: June 09, 2020 - June 12, 2020
Optimization over vast search spaces can be intractable, especially in reinforcement learning and in real-world environments. Humans, on the other hand, seem to balance exploration and exploitation well in many tasks, one reason being that they satisfice rather than optimize: they stop exploring once a certain aspiration level is satisfied. Takahashi et al. introduced the risk-sensitive satisficing (RS) model, which realizes efficient satisficing in bandit problems. To enable the application of RS to general reinforcement learning tasks, the global reference conversion (GRC) was introduced. GRC allocates local aspiration levels to individual states from a global aspiration level, based on the difference between the global goal and the returns actually obtained. However, its performance depends sensitively on a scale parameter. In this paper, we propose a new algorithm that autonomously adjusts this allocation and evaluates the current degree of satisfaction accurately.
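To illustrate the satisficing behavior the abstract describes, the following is a minimal sketch of the RS model on a Bernoulli bandit. It assumes the commonly cited form RS(a) = (n_a / N) * (Q_a - aleph), where aleph is the aspiration level, n_a is the pull count of arm a, N the total count, and Q_a the empirical value estimate; the function name and initialization scheme are illustrative, not taken from the paper.

```python
import random

def rs_bandit(probs, aleph, steps, seed=0):
    """Sketch of risk-sensitive satisficing (RS) on a Bernoulli bandit.

    Assumed rule: choose argmax_a RS(a) = (n_a / N) * (Q_a - aleph).
    Arms whose estimate exceeds aleph are exploited (RS grows with n_a);
    arms below aleph are explored in proportion to how rarely they were tried.
    """
    rng = random.Random(seed)
    k = len(probs)
    n = [1] * k      # one optimistic pseudo-pull per arm (illustrative choice)
    q = [1.0] * k    # optimistic initial value estimates
    for _ in range(steps):
        total = sum(n)
        rs = [(n[a] / total) * (q[a] - aleph) for a in range(k)]
        a = max(range(k), key=lambda i: rs[i])
        r = 1.0 if rng.random() < probs[a] else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return n, q
```

With an aspiration level set between the two arms' success rates (e.g. `rs_bandit([0.3, 0.7], aleph=0.6, steps=2000)`), the agent should concentrate its pulls on the satisfying arm once its estimate stays above aleph, rather than continuing to optimize over both.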