Transactions of the Society of Instrument and Control Engineers
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
Paper
Conflict Avoidance for Multi-agent Q-learning Based on Learning Progress
Yoshihiro ICHIKAWA, Keiki TAKADAMA

2012 Volume 48 Issue 11 Pages 764-772

Abstract

This paper focuses on the conflicts that arise even in cooperative tasks in the multi-agent reinforcement learning domain, and explores methods for avoiding them from the viewpoint of the agents' learning progress. For this purpose, we propose a conflict avoidance method for multi-agent Q-learning that enables agents to learn according to the learning progress of other agents, and we investigate its effectiveness in the lane-crossing road problem, in which agents aim to travel in their own directions but cannot pass each other because the lane is narrow. Intensive simulations on this problem yield the following implications: (1) temporarily decreasing the discount factor of a Q-learning agent with high learning progress (i.e., an agent that has learned well) avoids conflicts more successfully than increasing the randomness of agent behavior; (2) increasing the discount factor of a Q-learning agent with low learning progress (i.e., an agent that has not yet learned well) also avoids conflicts more successfully; and (3) a balance of learning progress among the agents is critical to resolving conflict situations.
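The abstract's findings (1) and (2) suggest adjusting each agent's discount factor according to its learning progress. The sketch below is a minimal, hypothetical illustration of that idea: the progress measure (a moving average of absolute TD error) and the adjustment rule (`adjust_gammas`, with its `threshold` and `delta` parameters) are assumptions for illustration, not the paper's exact formulation.

```python
import random

class QLearningAgent:
    """Minimal tabular Q-learning agent with an adjustable discount factor.
    The learning-progress measure below is an illustrative assumption."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}                 # (state, action) -> estimated value
        self.actions = actions
        self.alpha = alpha          # learning rate
        self.base_gamma = gamma     # nominal discount factor
        self.gamma = gamma          # current (possibly adjusted) discount factor
        self.epsilon = epsilon
        self.td_errors = []         # recent |TD error| as a crude progress signal

    def choose(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, r, s_next):
        # Standard Q-learning update; record the TD error magnitude.
        best_next = max(self.q.get((s_next, b), 0.0) for b in self.actions)
        td = r + self.gamma * best_next - self.q.get((s, a), 0.0)
        self.q[(s, a)] = self.q.get((s, a), 0.0) + self.alpha * td
        self.td_errors.append(abs(td))
        if len(self.td_errors) > 50:
            self.td_errors.pop(0)

    def learning_progress(self):
        # Lower recent TD error is taken here to mean higher learning progress.
        if not self.td_errors:
            return 0.0
        return 1.0 / (1.0 + sum(self.td_errors) / len(self.td_errors))

def adjust_gammas(agents, threshold=0.5, delta=0.3):
    """Temporarily lower gamma for agents that have learned well (finding 1)
    and raise it for agents that have not (finding 2). Parameters are
    hypothetical."""
    for ag in agents:
        if ag.learning_progress() > threshold:
            ag.gamma = max(0.0, ag.base_gamma - delta)   # more myopic: yields
        else:
            ag.gamma = min(0.99, ag.base_gamma + delta)  # more far-sighted: persists
```

Lowering the discount factor makes a well-learned agent temporarily myopic, so it is more willing to yield in a conflict, while raising it makes a poorly-learned agent weigh future reward more heavily and persist toward its goal.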

© 2012 The Society of Instrument and Control Engineers