Host: The Japanese Society for Artificial Intelligence
Name : The 35th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 35
Location : [in Japanese]
Date : June 08, 2021 - June 11, 2021
Multi-armed bandit problems, among the most fundamental tasks in reinforcement learning, have been widely applied to problems such as online advertisement delivery and game-tree search. In contrast to traditional bandit problems, which require rewards to be quantifiable in absolute terms, dueling bandit problems (DBP) handle relative rewards obtained through pairwise comparisons. One of the most effective solutions to DBP is Double Thompson Sampling (D-TS). However, because of the pairwise comparisons, solving DBP requires many trials, which makes D-TS computationally expensive. In this paper, we focus on the fact that “satisficing” action selection quickly finds an action that satisfies a given target level, and we propose an algorithm based on the Risk-sensitive Satisficing (RS) model. Our results showed that its performance was inferior to D-TS’s on some datasets. We therefore also propose a new method combining RS and D-TS that improves performance with respect to weak regret in DBP.
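As a minimal sketch of satisficing action selection, the RS model is commonly stated as assigning each arm i the value RS_i = (n_i / N)(E_i − ℵ), where n_i is the number of times arm i has been chosen, N the total number of choices, E_i the empirical mean reward, and ℵ the target (aspiration) level; the arm maximizing RS_i is selected. The function name and surrounding structure below are illustrative assumptions, not the paper's implementation:

```python
def rs_select(counts, means, aleph):
    """Select an arm by the Risk-sensitive Satisficing (RS) value.

    RS_i = (n_i / N) * (E_i - aleph), where:
      counts[i] = n_i, times arm i has been chosen
      means[i]  = E_i, empirical mean reward of arm i
      aleph     = target (aspiration) level
    Untried arms have RS_i = 0, so when every tried arm falls below
    the target level (negative RS), untried arms are preferred,
    which drives exploration; once some arm meets the target,
    selection concentrates on it.
    """
    total = sum(counts)
    rs_values = [
        (n / total) * (e - aleph) if total > 0 and n > 0 else 0.0
        for n, e in zip(counts, means)
    ]
    # Choose the arm with the largest RS value (ties go to the lowest index).
    return max(range(len(counts)), key=lambda i: rs_values[i])
```

For example, with a target level ℵ = 0.7, an arm whose empirical mean exceeds 0.7 accumulates a positive RS value proportional to how often it has been played, so it keeps being selected without further exploration; this is the quick, target-driven search the abstract refers to.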