Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌)
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Cognitive Satisficing
Utility of Bounded Rationality in Reinforcement Learning
高橋 達二 (Tatsuji Takahashi), 甲野 佑, 浦上 大輔

2016, Vol. 31, No. 6, p. AI30-M_1-11

Abstract

As the scope of reinforcement learning broadens, the number of possible states, the number of executable actions, and hence the size of their product explode. Often there are more feasible options than allowed trials, because of the physical and computational constraints imposed on agents. In such situations, optimization procedures that require first trying every option once do not work. This is exactly the situation that the theory of bounded rationality was proposed to deal with. We formalize the central heuristic of bounded rationality theory, named satisficing. Instead of the traditional formulation of satisficing at the policy level, we introduce, in reinforcement-learning terms, a value function that implements the asymmetric risk attitudes characteristic of human cognition. Operated under a simple greedy policy, the RS (reference satisficing) value function enables efficient satisficing in K-armed bandit problems, and when the reference level for satisficing is set to an appropriate value, it leads to effective optimization. RS is also tested on a robotic motion-learning task in which a robot learns to perform giant swings (Acrobot). While standard algorithms fail because of the coarse-grained state space, RS shows stable performance and autonomous exploration, without the randomized exploration and gradual annealing necessary for the standard methods.
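The abstract does not give the RS value function explicitly. As a rough, hypothetical illustration of a reference-based satisficing value used with a greedy policy, the sketch below scores each arm of a Bernoulli bandit by n_i·(μ_i − ℵ), where n_i is the pull count, μ_i the empirical mean, and ℵ (`aleph`) the reference level; the exact form, the function name `rs_bandit`, and all parameter values are assumptions, not the authors' published formulation.

```python
import random

def rs_bandit(arms, aleph, horizon, seed=0):
    """Greedy satisficing sketch on a K-armed Bernoulli bandit.

    Each arm's value is counts[i] * (means[i] - aleph): arms whose
    empirical mean exceeds the reference level `aleph` gain value as
    they are repeatedly tried, while unsatisfactory arms lose value
    with every pull, so the greedy choice drifts toward less-tried
    alternatives without any randomized exploration.
    """
    rng = random.Random(seed)
    k = len(arms)
    counts = [0] * k
    means = [0.0] * k
    total = 0.0
    for _ in range(horizon):
        # Greedy over the RS value; untried arms start at value 0.
        values = [counts[i] * (means[i] - aleph) for i in range(k)]
        i = max(range(k), key=lambda j: values[j])
        reward = 1.0 if rng.random() < arms[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # incremental mean
        total += reward
    return total, counts
```

With the reference level set between the two arms' success rates (e.g. `rs_bandit([0.2, 0.8], aleph=0.5, horizon=2000)`), the satisfactory arm accumulates almost all the pulls, illustrating how an appropriately chosen aspiration level turns satisficing into effective optimization.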

© The Japanese Society for Artificial Intelligence 2016