Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Model-Based Reinforcement Learning with a Model Mediator in Dynamic Multi-Agent Environments
Shota Imai, Yusuke Iwasawa, Yutaka Matsuo

2023, Volume 38, Issue 5, Pages A-MB1_1-14

Abstract

Centralised training and decentralised execution (CTDE) is one of the most effective approaches in multi-agent reinforcement learning (MARL). However, CTDE methods still require large amounts of interaction with the environment, even to match the performance of very simple heuristic-based algorithms. Although model-based RL is a prominent approach to improving sample efficiency, its adaptation to multi-agent settings in combination with existing CTDE methods has not been well studied in the literature. The few existing studies only consider settings with relaxed restrictions on the number of agents and the observable range. In this paper, we consider CTDE settings where some information about each agent's observations (e.g., each agent's visibility or the number of agents) changes dynamically. In such a setting, the fundamental challenge is how to train models that accurately generate each agent's observations, with their complex transitions, in addition to the central state, and how to use them for sample-efficient policy learning. We propose a multi-agent model-based RL algorithm built on a novel model architecture consisting of global and local prediction models connected by a model mediator. We evaluate our model-based RL approach applied to an existing CTDE method on challenging StarCraft II micromanagement tasks and show that it can learn an effective policy with fewer interactions with the environment.
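The page does not include the authors' implementation; the following is a minimal, hypothetical PyTorch sketch of how the architecture described in the abstract might be arranged: a global model predicts the next central state, per-agent local models predict each agent's next observation, and a mediator passes information from the global prediction to the local models. All class names, parameter names, and dimensions (GlobalModel, LocalModel, ModelMediator, state_dim, msg_dim, etc.) are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch (not the authors' code): a global model over the central
# state, local models over per-agent observations, and a mediator that
# conditions the local predictions on the global prediction so that
# dynamically changing information (visibility, number of agents) is shared.
import torch
import torch.nn as nn


class GlobalModel(nn.Module):
    """Predicts the next central state from the current state and joint action."""
    def __init__(self, state_dim: int, joint_action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + joint_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, joint_action):
        return self.net(torch.cat([state, joint_action], dim=-1))


class ModelMediator(nn.Module):
    """Maps the predicted central state to a message that conditions local models."""
    def __init__(self, state_dim: int, msg_dim: int):
        super().__init__()
        self.net = nn.Linear(state_dim, msg_dim)

    def forward(self, predicted_state):
        return self.net(predicted_state)


class LocalModel(nn.Module):
    """Predicts one agent's next observation, conditioned on the mediator message."""
    def __init__(self, obs_dim: int, action_dim: int, msg_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim + msg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, action, message):
        return self.net(torch.cat([obs, action, message], dim=-1))


if __name__ == "__main__":
    # One imagined rollout step for n_agents agents (all shapes illustrative).
    state_dim, obs_dim, act_dim, msg_dim, n_agents = 32, 16, 8, 8, 3
    global_model = GlobalModel(state_dim, n_agents * act_dim)
    mediator = ModelMediator(state_dim, msg_dim)
    local_models = [LocalModel(obs_dim, act_dim, msg_dim) for _ in range(n_agents)]

    state = torch.zeros(1, state_dim)
    obs = [torch.zeros(1, obs_dim) for _ in range(n_agents)]
    actions = [torch.zeros(1, act_dim) for _ in range(n_agents)]

    next_state = global_model(state, torch.cat(actions, dim=-1))
    message = mediator(next_state)
    next_obs = [local_models[i](obs[i], actions[i], message) for i in range(n_agents)]
```

In a sketch like this, the imagined next states and observations would be rolled out to generate synthetic trajectories on which the CTDE policy is trained, reducing the number of real environment interactions; the mediator is what lets each local model stay consistent with the central state when visibility or the number of agents changes.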

© 2023 The Japanese Society for Artificial Intelligence