Journal of Japan Society for Fuzzy Theory and Intelligent Informatics
Online ISSN : 1881-7203
Print ISSN : 1347-7986
ISSN-L : 1347-7986
Original Papers
Hierarchical Reinforcement Learning Using Stepwise Subgoal Generation Based on Fuzzy ART and Experience Acquisition
Hiromichi KUROSU, Yuya KURODO, Yusuke MANABE
2023 Volume 35 Issue 4 Pages 759-768

Abstract

Hierarchical reinforcement learning (HRL) is an approach that incorporates intrinsic motivation mechanisms into reinforcement learning. HRL divides the agent’s internal mechanism into two components for problem solving: a higher-level policy, which decides the order in which subgoals are pursued, and a lower-level policy, which produces the action sequence that reaches each subgoal. It has been demonstrated that, provided the subgoals are defined appropriately, HRL can solve problems that are difficult for conventional reinforcement learning, such as environments with sparse rewards and environments that require learning long action sequences. However, existing HRL methods assume that the subgoals necessary for problem solving are predefined, and they do not provide an algorithm for achieving autonomous reinforcement learning. In this study, we propose stepwise unified hierarchical reinforcement learning (SUHRL), a new reinforcement learning algorithm that introduces a mechanism for gradually generating the experiences and appropriate subgoals needed for problem solving. SUHRL generates suitable subgoals incrementally through stepwise clustering with Fuzzy ART combined with an experience-acquisition process. Evaluation experiments on MiniGrid environments and Montezuma’s Revenge demonstrate that the proposed method can generate the required subgoals incrementally and achieve autonomous problem solving.
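As a rough illustration of the clustering component the abstract mentions, the sketch below implements standard Fuzzy ART (complement coding, choice function, vigilance test, fast learning) in Python. It is a minimal generic version, not the paper's implementation; all parameter names and values are assumptions.

```python
def complement_code(x):
    # Complement coding: input x in [0,1]^d becomes [x, 1 - x],
    # so every coded input has the same L1 norm (= d).
    return x + [1.0 - v for v in x]

class FuzzyART:
    """Minimal Fuzzy ART sketch (illustrative only)."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: higher -> finer, more categories
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.weights = []   # one weight vector per committed category

    @staticmethod
    def _fuzzy_and(a, b):
        # Fuzzy AND is the element-wise minimum.
        return [min(x, y) for x, y in zip(a, b)]

    def train(self, x):
        i = complement_code(x)
        norm_i = sum(i)
        # Rank existing categories by the choice function
        # T_j = |I ^ w_j| / (alpha + |w_j|).
        scores = []
        for j, w in enumerate(self.weights):
            match = sum(self._fuzzy_and(i, w))
            scores.append((match / (self.alpha + sum(w)), match, j))
        for _, match, j in sorted(scores, reverse=True):
            # Vigilance test: accept only if |I ^ w_j| / |I| >= rho.
            if match / norm_i >= self.rho:
                w = self.weights[j]
                # Update: w_j <- beta * (I ^ w_j) + (1 - beta) * w_j.
                self.weights[j] = [
                    self.beta * a + (1 - self.beta) * b
                    for a, b in zip(self._fuzzy_and(i, w), w)
                ]
                return j
        # No category passed vigilance: commit a new one.
        self.weights.append(i)
        return len(self.weights) - 1
```

With a vigilance of 0.8, two nearby points fall into one category while a distant point commits a new one, e.g. `art.train([0.1, 0.1])` and `art.train([0.12, 0.1])` return category 0, and `art.train([0.9, 0.9])` returns category 1. In the paper's setting, such categories would serve as candidate subgoal states discovered incrementally from experience.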

© 2023 Japan Society for Fuzzy Theory and Intelligent Informatics