Proceedings of the Annual Conference of the Japanese Society for Artificial Intelligence (JSAI)
Online ISSN : 2758-7347
35th (2021)
Session ID: 3N3-IS-2e-02

Improving Exploration and Convergence Speed with Multi-Actor Control DDPG
*David John Lucien FELICES, Mitsuhiko KIMOTO, Shoya MATSUMORI, Michita IMAI
Abstract

In Reinforcement Learning, the Deep Deterministic Policy Gradient (DDPG) algorithm is considered a powerful tool for continuous control tasks. However, in complex environments, DDPG does not always perform well due to its inefficient exploration mechanism. To address this issue, several studies increased the number of actors, but without considering whether there is an optimal number of actors for an agent. We propose MAC-DDPG, a DDPG architecture with a variable number of actor networks. We also compare the computational cost and learning curves obtained with different numbers of actor networks on various OpenAI Gym environments. The main goal of this research is to keep the computational cost as low as possible while improving deep exploration, so that increasing the number of actors is not detrimental to solving less complex environments quickly. Current results show a potential increase in scores on some environments (around +10%) compared with classic DDPG, but the additional actors greatly increase the time needed to run the same number of epochs (time increases linearly with the number of actors).
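The abstract does not detail how the multiple actor networks are coordinated. As a rough illustration only, the sketch below shows one plausible way a DDPG-style agent might manage several actors that share a single critic: each actor proposes an action for the current state and the critic's Q-estimate selects which proposal to execute. All class names, network sizes, noise settings, and the selection rule are assumptions made for this sketch, not the authors' published implementation.

```python
# Hypothetical sketch (not the authors' code): a DDPG-style agent that keeps
# several actor networks and one shared critic.  At each step every actor
# proposes an action, the critic scores the proposals, and the highest-scoring
# proposal (plus exploration noise) is executed.
import torch
import torch.nn as nn


class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, act_limit):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )
        self.act_limit = act_limit

    def forward(self, obs):
        # Deterministic policy, scaled to the action bounds.
        return self.act_limit * self.net(obs)


class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, act):
        # Q(s, a) estimate from the concatenated state-action input.
        return self.net(torch.cat([obs, act], dim=-1))


class MultiActorAgent:
    """Keeps n_actors actors and executes the proposal the shared critic rates highest."""

    def __init__(self, obs_dim, act_dim, act_limit, n_actors=3):
        self.actors = [Actor(obs_dim, act_dim, act_limit) for _ in range(n_actors)]
        self.critic = Critic(obs_dim, act_dim)

    @torch.no_grad()
    def select_action(self, obs, noise_std=0.1):
        obs = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        proposals = [actor(obs) for actor in self.actors]        # one candidate action per actor
        q_values = [self.critic(obs, a) for a in proposals]      # critic scores each candidate
        best = max(range(len(proposals)), key=lambda i: q_values[i].item())
        action = proposals[best] + noise_std * torch.randn_like(proposals[best])
        return action.squeeze(0).numpy()


if __name__ == "__main__":
    # Toy usage with made-up dimensions (e.g. a Pendulum-like 3-dim observation).
    agent = MultiActorAgent(obs_dim=3, act_dim=1, act_limit=2.0, n_actors=3)
    print(agent.select_action([0.1, -0.2, 0.05]))
```

Under this reading, the per-step cost grows roughly linearly with the number of actors (one forward pass per actor plus one critic evaluation per proposal), which is consistent with the linear increase in wall-clock time reported in the abstract.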

© 2021 The Japanese Society for Artificial Intelligence