Computer Software
Print ISSN: 0289-6540
How does Q-learning Maintain Cooperation in Prisoner's Dilemma Games?
Koichi MORIYAMA

2008 Volume 25 Issue 4 Pages 4_145-4_153

Abstract

This work deals with Q-learning in a multiagent environment. Many multiagent Q-learning methods exist, and most of them aim to converge to a Nash equilibrium, which is not desirable in games like the Prisoner's Dilemma (PD). However, ordinary Q-learning agents that choose actions stochastically to avoid local optima may bring about mutual cooperation in PD. Although such mutual cooperation usually occurs only as an isolated, one-off event, it can be maintained if the Q-function of cooperation becomes larger than that of defection after the cooperation. This work derives a theorem on how many consecutive mutual cooperations are needed to make the Q-function of cooperation larger than that of defection. In addition, from the perspective of the author's previous works, which distinguish utilities from rewards and use utilities for learning in PD, this work also derives a corollary on how much utility is necessary to make the Q-function of cooperation larger after a single mutual cooperation.
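
Though not part of the paper itself, the setting of the theorem is easy to illustrate. The following is a minimal sketch, assuming single-state Q-learning with the update Q(a) <- Q(a) + alpha * (r + gamma * max_a' Q(a') - Q(a)), the conventional PD payoffs R = 3, T = 5, P = 1, S = 0, and hypothetical hyperparameters alpha, gamma, epsilon; none of these values are taken from the paper. The script first drives an agent to the mutual-defection value P / (1 - gamma), then counts how many repeated mutual cooperations (each yielding reward R) it takes for the Q-function of cooperation to overtake that of defection.

import random

# Conventional PD payoff values (assumed here, not taken from the paper):
# R = reward for mutual cooperation, T = temptation to defect,
# P = punishment for mutual defection, S = sucker's payoff.
R, T, P, S = 3.0, 5.0, 1.0, 0.0

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # hypothetical hyperparameters


class QAgent:
    """Single-state Q-learner with epsilon-greedy action choice."""

    def __init__(self):
        self.q = {'C': 0.0, 'D': 0.0}

    def act(self):
        # The stochastic choice the abstract mentions: with probability
        # EPSILON explore a random action, otherwise exploit the best one.
        if random.random() < EPSILON:
            return random.choice(['C', 'D'])
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Q(a) <- Q(a) + ALPHA * (r + GAMMA * max_a' Q(a') - Q(a)),
        # bootstrapping on the same (single) state.
        target = reward + GAMMA * max(self.q.values())
        self.q[action] += ALPHA * (target - self.q[action])


def cooperations_needed():
    """Count the mutual cooperations required for Q(C) to overtake Q(D)
    once the agent has settled into mutual defection."""
    agent = QAgent()
    for _ in range(1000):          # converge toward Q(D) = P / (1 - GAMMA)
        agent.update('D', P)
    n = 0
    while agent.q['C'] <= agent.q['D']:
        agent.update('C', R)       # one more round of mutual cooperation
        n += 1
    return n


if __name__ == '__main__':
    print('mutual cooperations needed:', cooperations_needed())

With these assumed values the count works out to eighteen updates; the paper's theorem gives the general condition for arbitrary payoffs and learning parameters, and its corollary the utility level under which a single mutual cooperation suffices.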

© Japan Society for Software Science and Technology 2008