Transactions of the Society of Instrument and Control Engineers
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
Systems and Information
A State Space Filter for Reinforcement Learning in POMDPs
Masato Nagayoshi, Hajime Murao, Hisashi Tamaki

2009, Volume 45, Issue 1, Pages 41-50

Abstract

This paper presents a reinforcement learning technique that handles both discrete and continuous state space systems in POMDPs while keeping the state space of an agent compact. First, in our computational model for MDP environments, the concept of “state space filtering” is introduced and constructed to make the agent's state space appropriately smaller by referring to the “entropy” calculated from the state-action mapping. The model is then extended to POMDP environments by introducing a mechanism that makes effective use of history information. The extended model can deal with a continuous state space as well as a discrete one. A mechanism for adjusting the amount of history information is also introduced so that the agent's state space remains compact. Moreover, computational experiments on a robot navigation problem with a continuous state space have been carried out. The potential and the effectiveness of the extended approach have been confirmed through these experiments.
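To make the “state space filtering” idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes the “entropy” of a state is the Shannon entropy of a Boltzmann (softmax) action-selection distribution derived from that state's Q-values, and it uses that entropy to flag states for coarsening or refinement. The function names, thresholds, and merge/split decision rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def action_entropy(q_values, tau=1.0):
    """Shannon entropy of the Boltzmann (softmax) action-selection
    distribution derived from one state's Q-values."""
    prefs = np.asarray(q_values, dtype=float) / tau
    prefs -= prefs.max()                        # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return float(-(probs * np.log(probs + 1e-12)).sum())

def filter_state(q_values, tau=1.0, low=0.3, high=0.9):
    """Classify a state by its action-selection entropy, expressed as
    a fraction of the maximum entropy ln|A|.  Low entropy: the action
    choice is settled, so the state is a candidate for merging into a
    coarser state.  High entropy: the state is still ambiguous
    (possibly perceptually aliased), so it is a candidate for
    refinement, e.g. by attaching more history in a POMDP."""
    h = action_entropy(q_values, tau)
    h_max = np.log(len(q_values))
    if h < low * h_max:
        return "merge"
    if h > high * h_max:
        return "split"
    return "keep"

# One action clearly dominates -> low entropy -> coarsen.
print(filter_state([5.0, 0.1, 0.2]))   # "merge"
# Nearly flat Q-values -> high entropy -> refine (add history).
print(filter_state([1.0, 1.1, 0.9]))   # "split"
```

Under these assumptions, a “split” on an ambiguous state corresponds to the paper's idea of spending more history information only where the observation alone does not determine a good action, which is what keeps the agent's state space compact.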

© 2009 The Society of Instrument and Control Engineers