Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
32nd (2018)
Session ID : 3Pin1-35

Action Acquisition by Memory Reinforcement Learning Using Prior Knowledge
*Yuna INAMORI, Tsubasa HIRAKAWA, Takayoshi YAMASHITA, Hironobu FUJIYOSHI, Ryota KASHIHAWA, Masaki INABA, Naoki NITANDA
Abstract

Obtaining human-level control through reinforcement learning (RL) requires massive training. Furthermore, deep learning-based RL methods such as the deep Q-network (DQN) have difficulty obtaining stable control. In this paper, we propose a novel deep reinforcement learning method that learns stable controls efficiently. Our approach leverages the technique of experience replay and its replay buffer architecture. We manually create a desirable transition sequence and store these transitions in the replay buffer at the beginning of training. This hand-crafted transition sequence enables us to avoid random action selection and locally optimal policies. Experimental results on a lane-changing task in autonomous driving show that the proposed method efficiently acquires stable control.
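The core idea of seeding the replay buffer with a hand-crafted transition sequence before training can be sketched as follows. This is a minimal illustration, not the authors' implementation: the buffer class, the tuple layout, and the toy lane-change demonstration transitions are all assumptions for the sake of the example.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal FIFO experience replay buffer."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling, as in standard experience replay.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Hypothetical hand-crafted demonstration: a short desirable transition
# sequence for a lane change, written by a designer rather than collected
# from the agent's own (initially random) exploration.
demo_transitions = [
    ((0, 0), "keep", 0.0, (1, 0), False),
    ((1, 0), "left", 0.1, (2, 1), False),
    ((2, 1), "keep", 1.0, (3, 1), True),
]

buffer = ReplayBuffer(capacity=10000)

# Seed the buffer with the demonstration before training begins, so early
# minibatches contain informative experience instead of only transitions
# produced by random action selection.
for transition in demo_transitions:
    buffer.add(*transition)

batch = buffer.sample(2)
```

During subsequent training, the agent's own transitions would be added to the same buffer, so minibatches gradually mix the hand-crafted sequence with self-collected experience.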

© 2018 The Japanese Society for Artificial Intelligence