IEEJ Transactions on Electronics, Information and Systems (IEEJ Transactions C)
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Intelligence, Robotics>
Proposal of a Q-learning Algorithm Utilizing an Agent's Action History
齋藤 雅矩, 瀬古沢 照治

2016, Volume 136, Issue 8, pp. 1209-1217

Abstract

Q-learning learns the optimal policy by updating the action-state value function (Q-value) through trial-and-error search so as to maximize the expected reward. However, a major issue is its slow learning speed. We therefore add a technique in which the agent memorizes environmental information and uses it to update the Q-values of many states. Updating the Q-values of a larger number of states gives the agent more information and reduces the learning time. Furthermore, by incorporating the stored environmental information into the action-selection method so that actions leading to failures, such as stagnation of learning, are avoided, the learning speed in the initial stage of learning is improved. In addition, we design a new action-area value function in order to explore many more states from the initial stage of learning. Finally, numerical examples solving a maze problem show the usefulness of the proposed method.
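The abstract refers to the standard Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)], and to an extension that reuses stored environmental information to update the Q-values of many states. Below is a minimal sketch in Python on a toy maze: the tabular update rule is ordinary Q-learning, while the GridMaze environment and the episode-history replay loop are illustrative assumptions standing in for the paper's actual history-based multi-state update and action-selection scheme, which are not specified in the abstract.

```python
import random
from collections import defaultdict

class GridMaze:
    """Toy 5x5 maze: start at (0, 0), goal at (4, 4). Illustrative only."""
    def __init__(self, size=5):
        self.size = size
        self.goal = (size - 1, size - 1)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # Actions: 0 = up, 1 = down, 2 = left, 3 = right.
        dx, dy = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        reward = 1.0 if done else -0.01
        return self.pos, reward, done

def q_learning(env, episodes=300, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state = env.reset()
        done = False
        history = []  # stored (state, action, reward, next_state) transitions
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.randrange(4)
            else:
                action = max(range(4), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            history.append((state, action, reward, next_state))
            state = next_state
        # Standard Q-learning updates one (state, action) pair per step; here,
        # as a rough stand-in for "updating the Q-value in many states" from
        # stored environmental information, the whole episode history is
        # replayed backwards so reward information propagates to earlier states.
        for s, a, r, s2 in reversed(history):
            best_next = max(Q[(s2, a2)] for a2 in range(4))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

if __name__ == "__main__":
    Q = q_learning(GridMaze())
    print("Greedy Q-value at the start state:",
          max(Q[((0, 0), a)] for a in range(4)))
```

Replaying the stored history backwards is only one simple way to spread information over many states per episode; the paper additionally uses the stored information in action selection (to avoid failure behaviors) and introduces an action-area value function, neither of which is reproduced in this sketch.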

© 2016 The Institute of Electrical Engineers of Japan