Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
35th (2021)
Session ID : 3N3-IS-2e-02

Improving Exploration and Convergence Speed with Multi-Actor Control DDPG
*David John Lucien FELICES, Mitsuhiko KIMOTO, Shoya MATSUMORI, Michita IMAI
Abstract

In Reinforcement Learning, the Deep Deterministic Policy Gradient (DDPG) algorithm is considered a powerful tool for continuous control tasks. However, in complex environments, DDPG does not always perform well because of its inefficient exploration mechanism. To address this issue, several studies have increased the number of actors, but without considering whether there is an optimal number of actors for an agent. We propose MAC-DDPG, a DDPG architecture with a variable number of actor networks. We also compare the computational cost and learning curves obtained with different numbers of actor networks on various OpenAI Gym environments. The main goal of this research is to keep the computational cost as low as possible while improving deep exploration, so that increasing the number of actors is not detrimental to solving less complex environments quickly. Current results show a potential increase in scores on some environments (around +10%) compared with classic DDPG, but also a large increase in the time needed to run the same number of epochs (time increases linearly with the number of actors).
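The abstract only outlines the architecture, so the following is a minimal, hypothetical sketch of what a multi-actor DDPG agent could look like: several independently initialized actor networks share a single critic, and at action time the agent queries all actors and follows the one the critic currently values most. It is not the authors' implementation; it assumes PyTorch and a Gym-style continuous action space, and all names (MultiActorDDPG, n_actors, noise_std, etc.) are illustrative. Critic and target-network updates, which follow standard DDPG, are omitted.

```python
# Hypothetical multi-actor DDPG sketch (not the paper's code); assumes PyTorch.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    """Small fully connected network used for both actors and the critic."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class MultiActorDDPG:
    """DDPG variant with several actor networks sharing one critic."""

    def __init__(self, obs_dim, act_dim, n_actors=3, lr=1e-3):
        # One shared critic Q(s, a).
        self.critic = mlp(obs_dim + act_dim, 1)
        # Several independently initialized deterministic actors pi_i(s).
        self.actors = [nn.Sequential(mlp(obs_dim, act_dim), nn.Tanh())
                       for _ in range(n_actors)]
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=lr)
        self.actor_opts = [torch.optim.Adam(a.parameters(), lr=lr)
                           for a in self.actors]

    def act(self, obs, noise_std=0.1):
        # Query every actor, let the shared critic pick the most promising
        # action, then add Gaussian exploration noise.
        obs = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            actions = [actor(obs) for actor in self.actors]
            q_values = [self.critic(torch.cat([obs, a], dim=-1))
                        for a in actions]
        best = max(range(len(actions)), key=lambda i: q_values[i].item())
        action = actions[best] + noise_std * torch.randn_like(actions[best])
        return action.clamp(-1.0, 1.0).squeeze(0).numpy()

    def update_actors(self, obs_batch):
        # Each actor is trained, as in standard DDPG, to maximize the
        # shared critic's value of its own actions.
        for actor, opt in zip(self.actors, self.actor_opts):
            q = self.critic(torch.cat([obs_batch, actor(obs_batch)], dim=-1))
            loss = -q.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Under these assumptions, the linear growth in wall-clock time reported in the abstract is what one would expect: every environment step evaluates all n_actors actor networks, and every update step runs one policy-gradient pass per actor.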

© 2021 The Japanese Society for Artificial Intelligence