Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Bayesian Inverse Reinforcement Learning Using Expert Trajectories in Multiple Environments
An Approach toward Estimating Transferable Rewards
Yusuke Nakata, Sachiyo Arai

2020, Vol. 35, No. 1, p. G-J73_1-10

Abstract

Although the reinforcement learning framework has achieved numerous successes, it requires careful design of a reward function that represents the objective of a task. There is a class of tasks in which an expert can demonstrate the optimal behavior but for which it is difficult to design a proper reward function. For such tasks, an inverse reinforcement learning approach is useful because it makes it possible to estimate a reward function from the expert's demonstrations. Most existing inverse reinforcement learning algorithms assume that the expert gives demonstrations in a single environment. However, an expert could also provide demonstrations in multiple environments that share the same underlying objective. For example, although the objective of a driving task is hard to represent explicitly, a driver can give demonstrations in a variety of situations. In such cases, it is natural to use these demonstrations from multiple environments to estimate the expert's reward function. We formulate this problem as a Bayesian inverse reinforcement learning problem and propose a Markov chain Monte Carlo method to solve it. Experimental results show that the proposed method quantitatively outperforms existing methods.
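The abstract states only that a reward shared across environments is inferred with Markov chain Monte Carlo under a Bayesian inverse reinforcement learning formulation. As a rough illustration of that setting, and not the paper's exact algorithm, the sketch below pools demonstrations from several tabular MDPs that share a linear reward r = Phi w, scores them with the Boltzmann likelihood over optimal Q-values standard in Bayesian IRL (Ramachandran and Amir, 2007), and samples the shared weights w with random-walk Metropolis-Hastings. All names here (q_values, mh_sampler, the feature matrices Phi, the temperature beta) are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.special import logsumexp

def q_values(P, r, gamma=0.95, tol=1e-8):
    """Optimal Q-values of a tabular MDP by value iteration.
    P: (A, S, S) transition tensor, r: (S,) state rewards."""
    V = np.zeros(r.shape[0])
    while True:
        # Q[s, a] = r[s] + gamma * sum_t P[a, s, t] * V[t]
        Q = r[:, None] + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q
        V = V_new

def log_likelihood(w, envs, beta=2.0):
    """Boltzmann likelihood of expert (state, action) pairs, summed over
    environments. Each env is (P, Phi, demos); the reward Phi @ w is shared."""
    ll = 0.0
    for P, Phi, demos in envs:
        Q = q_values(P, Phi @ w)
        for s, a in demos:
            ll += beta * Q[s, a] - logsumexp(beta * Q[s])
    return ll

def mh_sampler(envs, d, n_samples=2000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings over reward weights w,
    with a standard Gaussian prior (log prior = -0.5 * w @ w + const)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    log_post = log_likelihood(w, envs) - 0.5 * w @ w
    samples = []
    for _ in range(n_samples):
        w_prop = w + step * rng.standard_normal(d)
        lp_prop = log_likelihood(w_prop, envs) - 0.5 * w_prop @ w_prop
        if np.log(rng.random()) < lp_prop - log_post:  # MH accept/reject
            w, log_post = w_prop, lp_prop
        samples.append(w.copy())
    return np.array(samples)

# Toy usage: two 3-state environments with identity features and a few
# demonstrations each; the posterior over w is estimated from both at once.
S, A = 3, 2
P = np.zeros((A, S, S))
P[0] = np.eye(S)                        # action 0: stay
P[1] = np.roll(np.eye(S), 1, axis=1)    # action 1: move right (wraps around)
Phi = np.eye(S)
envs = [(P, Phi, [(0, 1), (1, 1)]), (P, Phi, [(1, 1), (2, 0)])]
samples = mh_sampler(envs, d=S)
print(samples[len(samples) // 2:].mean(axis=0))  # posterior mean after burn-in

Pooling the per-environment log-likelihoods into one posterior is what makes the inferred reward transferable in this sketch: weights that explain the demonstrations in only one environment are penalized by the others.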

© 2020 The Japanese Society for Artificial Intelligence