2023 Volume 38 Issue 5 Pages A-MB1_1-14
Centralised training and decentralised execution (CTDE) is one of the most effective approaches in multi-agent reinforcement learning (MARL). However, CTDE methods still require large amounts of interaction with the environment, even to match the performance of very simple heuristic-based algorithms. Although model-based RL is a prominent approach to improving sample efficiency, its adaptation to multi-agent settings in combination with existing CTDE methods has not been well studied in the literature. The few existing studies only consider settings with relaxed restrictions on the number of agents and observable range. In this paper, we consider CTDE settings where some information about each agent’s observations changes dynamically (e.g. each agent’s visibility and the number of agents). In such a setting, the fundamental challenge is how to train models that accurately generate each agent’s observations, with their complex transitions, in addition to the central state, and how to use them for sample-efficient policy learning. We propose a multi-agent model-based RL algorithm built on a novel model architecture consisting of global and local prediction models coordinated by a model mediator. We evaluate our model-based RL approach applied to an existing CTDE method on challenging StarCraft II micromanagement tasks and show that it can learn an effective policy with fewer interactions with the environment.
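The architecture named in the abstract can be sketched at a very high level as follows. This is a hypothetical illustration, not the authors' implementation: the class names, the placeholder arithmetic standing in for learned networks, and the visibility-masking rule in the mediator are all assumptions made for exposition. The idea it conveys is the stated division of labour: a global model predicts the central state, per-agent local models predict each agent's observation, and a mediator reconciles the two when an agent's observable range changes.

```python
class GlobalModel:
    """Predicts the next central state from the current state and joint action.
    (Placeholder dynamics; in practice this would be a learned model.)"""
    def predict(self, state, joint_action):
        return [s + a for s, a in zip(state, joint_action)]


class LocalModel:
    """Predicts one agent's next observation from its current observation
    and action. (Also a placeholder for a learned per-agent model.)"""
    def predict(self, obs, action):
        return [o + action for o in obs]


class Mediator:
    """Illustrative mediation rule (an assumption): keep the local model's
    prediction for entries the agent can currently see, and fall back to
    the corresponding entries of the predicted central state elsewhere,
    so dynamically changing visibility is handled explicitly."""
    def mediate(self, pred_state, pred_obs, visible_idx):
        return [pred_obs[i] if i in visible_idx else pred_state[i]
                for i in range(len(pred_state))]


# One imagined rollout step for a single agent:
global_model, local_model, mediator = GlobalModel(), LocalModel(), Mediator()
pred_state = global_model.predict([0.0, 1.0, 2.0], [1.0, 1.0, 1.0])
pred_obs = local_model.predict([10.0, 10.0, 10.0], 1.0)
merged = mediator.mediate(pred_state, pred_obs, visible_idx={1})
```

In a model-based loop, such merged observations would feed the CTDE policy during imagined rollouts, reducing the number of real environment interactions needed.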