Transactions of the Society of Instrument and Control Engineers (計測自動制御学会論文集)
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
A Study on Architecture, Algorithms and Internal Representation for Reinforcement Learning with Recurrent Neural Networks
Ahmet ONAT, Hajime KITA, Yoshikazu NISHIKAWA
Free Access

1999, Volume 35, Issue 12, pp. 1599-1608

Abstract
Most algorithms for reinforcement learning face difficulty in achieving optimal performance when the state of the environment is not completely known. The authors have proposed a method for overcoming this problem by using recurrent neural networks in a learning agent. In this paper, we discuss the implementation of the proposed method using several types of network architecture and supervised learning algorithms. Further, the internal representation of the environment acquired in the learning agent is examined using a technique of cluster analysis. The results show that the learning agent achieves optimal performance in reinforcement learning tasks by constructing an accurate internal model, despite incomplete perception of the state of the environment.
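The abstract describes equipping the learning agent with a recurrent neural network so that its hidden state can carry information about past observations, compensating for incomplete perception of the environment. The paper itself does not appear in this excerpt, so the following is only a minimal illustrative sketch, not the authors' architecture or algorithm: an Elman-style recurrent network that maps an observation and its own previous hidden state to action-value estimates. All names (`RecurrentQNetwork`, layer sizes) are hypothetical.

```python
import numpy as np

class RecurrentQNetwork:
    """Sketch of a recurrent value network for a partially observable task.

    The hidden state h summarizes the observation history, so two
    identical observations reached via different histories can yield
    different value estimates. This is an illustration of the general
    idea only, not the method proposed in the paper.
    """

    def __init__(self, n_obs, n_hidden, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_obs))    # observation -> hidden
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden (memory)
        self.W_out = rng.normal(0.0, 0.1, (n_actions, n_hidden))  # hidden -> Q-values
        self.h = np.zeros(n_hidden)

    def reset(self):
        # Clear the memory at the start of an episode.
        self.h = np.zeros_like(self.h)

    def step(self, obs):
        # Update the hidden state from the current observation and
        # the previous hidden state, then read out one value per action.
        self.h = np.tanh(self.W_in @ obs + self.W_rec @ self.h)
        return self.W_out @ self.h
```

Because the readout depends on `h`, an agent using such a network can in principle disambiguate aliased observations by their history, which is the property the abstract attributes to the proposed method.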
© The Society of Instrument and Control Engineers (SICE)