Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Model-Based Reinforcement Learning using Model Mediator in Dynamic Multi-Agent Environment
Shota Imai, Yusuke Iwasawa, Yutaka Matsuo

2023 Volume 38 Issue 5 Pages A-MB1_1-14

Abstract

Centralised training and decentralised execution (CTDE) is one of the most effective approaches in multi-agent reinforcement learning (MARL). However, CTDE methods still require large amounts of interaction with the environment, even to reach the same performance as very simple heuristic-based algorithms. Although model-based RL is a prominent approach to improving sample efficiency, its adaptation to multi-agent settings in combination with existing CTDE methods has not been well studied in the literature. The few existing studies consider only settings with relaxed restrictions on the number of agents and the observable range. In this paper, we consider CTDE settings where some information about each agent's observations (e.g., each agent's visibility or the number of agents) changes dynamically. In such a setting, the fundamental challenge is how to train models that accurately generate each agent's observations, with their complex transitions, in addition to the central state, and how to use these models for sample-efficient policy learning. We propose a multi-agent model-based RL algorithm built on a novel model architecture consisting of global and local prediction models connected by a model mediator. We evaluate our model-based RL approach, applied to an existing CTDE method, on challenging StarCraft II micromanagement tasks and show that it learns an effective policy with fewer interactions with the environment.
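The abstract describes the architecture only at a high level. As a rough illustration, the following is a minimal PyTorch sketch of how a global prediction model, per-agent local prediction models, and a model mediator could be wired together for one step of imagined rollout. All class names, network shapes, and the message-passing scheme here are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GlobalModel(nn.Module):
    """Predicts the next central state from the current state and joint action."""
    def __init__(self, state_dim, joint_action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + joint_action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, joint_action):
        return self.net(torch.cat([state, joint_action], dim=-1))


class Mediator(nn.Module):
    """Hypothetical mediator: maps the predicted central state to one message
    per agent, so local models can reflect globally determined changes such as
    shifting visibility or agent counts."""
    def __init__(self, state_dim, msg_dim, n_agents):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(state_dim, msg_dim) for _ in range(n_agents)
        )

    def forward(self, next_state):
        return [head(next_state) for head in self.heads]


class LocalModel(nn.Module):
    """Predicts one agent's next observation from its current observation,
    its action, and the mediator's message."""
    def __init__(self, obs_dim, action_dim, msg_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim + msg_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs, action, message):
        return self.net(torch.cat([obs, action, message], dim=-1))


def rollout_step(global_model, mediator, local_models, state, obs_list, actions):
    """One imagined transition: global prediction -> mediation -> local predictions."""
    joint_action = torch.cat(actions, dim=-1)
    next_state = global_model(state, joint_action)
    messages = mediator(next_state)
    next_obs = [
        m(o, a, msg)
        for m, o, a, msg in zip(local_models, obs_list, actions, messages)
    ]
    return next_state, next_obs
```

In a sketch like this, imagined trajectories produced by rollout_step would feed policy learning in place of real environment steps, which is the source of the sample-efficiency gain the abstract claims; how the actual mediator conditions the local models is specified in the paper itself.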

© The Japanese Society for Artificial Intelligence 2023