Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
35th (2021)
Session ID : 1G2-GS-2a-02

Cognitive Satisficing Exploration in Dueling Bandit Problems
*Kuratomo OYO, Takuma WADA, Takumi KAMIYA, Tatsuji TAKAHASHI
Abstract

Multi-armed bandit problems, the most fundamental tasks in reinforcement learning, have been widely applied to problems such as online advertisement delivery and game-tree search. Whereas these traditional bandit problems require absolute rewards to be quantifiable, dueling bandit problems (DBP) deal with relative rewards obtained through pairwise comparisons. One of the most effective solutions to DBP is Double Thompson Sampling (D-TS). However, because of the pairwise comparisons, solving DBP requires many trials, which makes D-TS computationally expensive. In this paper, we focus on the fact that “satisficing” action selection quickly finds an action that satisfies a given target level, and we propose an algorithm based on the Risk-sensitive Satisficing (RS) model. The results showed that on some datasets its performance was inferior to that of D-TS. However, we propose a new method combining RS and D-TS that improves performance in terms of weak regret in DBP.
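As a rough illustration of the satisficing idea the abstract refers to, the RS model is often presented as scoring each arm by its reliability-weighted deviation from a target (aspiration) level and choosing the arm with the highest score. The sketch below shows this for an ordinary (non-dueling) bandit; the function name, the variable `aleph` for the target level, and the specific form `RS_i = (n_i / N) * (E_i - aleph)` are assumptions for illustration, not the paper's dueling-bandit algorithm, which additionally combines RS with D-TS.

```python
def rs_select(counts, means, aleph):
    """Pick an arm by the Risk-sensitive Satisficing (RS) score.

    counts[i]: number of times arm i has been tried (n_i), all > 0
    means[i]:  current estimate of arm i's value (E_i)
    aleph:     the target ("satisficing") level

    Each arm's score is (n_i / N) * (E_i - aleph): arms above the
    target are weighted up by how often they have been tried, arms
    below it are weighted down, which drives exploration until some
    arm reliably meets the target.
    """
    N = sum(counts)
    scores = [(n / N) * (e - aleph) for n, e in zip(counts, means)]
    return max(range(len(scores)), key=lambda i: scores[i])
```

For example, with two equally tried arms whose estimated values straddle the target (`means=[0.9, 0.5]`, `aleph=0.7`), the arm above the target is selected.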

© 2021 The Japanese Society for Artificial Intelligence