The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Online ISSN : 2424-3124
2018
Session ID : 1A1-C15

Efficient exploration with parameter noise in deep reinforcement learning
*Izumi KARINO, Kazutoshi TANAKA, Ryuma NIIYAMA, Yasuo KUNIYOSHI
Abstract

In recent years, reinforcement learning combined with deep learning has developed rapidly and achieved strong performance not only in game playing but also in the continuous control of robots. Reinforcement learning requires exploratory behavior, and action noise is widely used to realize it. Recent studies have tackled the exploration problem in deep reinforcement learning by using parameter noise, and it has been shown experimentally that parameter noise yields better exploration than the commonly used action noise. However, existing methods either take a long time to update the noise distribution or explore a huge parameter space uniformly with an isotropic noise distribution. This paper proposes a method that improves the update of the noise distribution for faster learning.
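To make the distinction concrete, the following is a minimal NumPy sketch contrasting the two exploration schemes the abstract compares. The linear policy, the isotropic Gaussian noise, and the noise scale `sigma` are illustrative assumptions for this sketch, not the authors' proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(params, obs):
    # Illustrative linear policy: action = W @ obs
    return params @ obs

def act_with_action_noise(params, obs, sigma=0.1):
    # Action noise: perturb the policy's OUTPUT at every step.
    return policy(params, obs) + sigma * rng.normal(size=params.shape[0])

def perturb_params(params, sigma=0.1):
    # Parameter noise: perturb the policy's WEIGHTS (here with an
    # isotropic Gaussian), typically once per episode, then act
    # deterministically with the perturbed weights.
    return params + sigma * rng.normal(size=params.shape)

params = rng.normal(size=(2, 3))   # 2 actions, 3-dimensional observation
obs = np.ones(3)

a_action_noise = act_with_action_noise(params, obs)
noisy_params = perturb_params(params)
a_param_noise = policy(noisy_params, obs)
```

Under parameter noise the perturbation is consistent across states within an episode, which is one reason it can explore more systematically than per-step action noise; the cost, as the abstract notes, is choosing and updating a noise distribution over a very high-dimensional parameter space.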

© 2018 The Japan Society of Mechanical Engineers