Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 06, 2021 - June 08, 2021
In this study, we explore a systematic debugging method for model-based reinforcement learning in which a library of skills is introduced. When the performance (learning speed, quality of the obtained behavior) of model-based reinforcement learning is insufficient, identifying the reason is difficult, especially when the dynamics are complicated, as in liquid pouring. In our previous work, we introduced a library of skills into reinforcement learning for such complicated tasks. We think that a skill library is also beneficial for investigating performance issues, since each subset of skills can be tested separately. Our goal is to establish a systematic debugging method for reinforcement learning based on this idea. This paper reports preliminary development toward this goal, in which we repeatedly increase and decrease the complexity of a subtask, as in curriculum learning, to make debugging easier until sufficient results are obtained on the original task. We conducted simulation experiments of liquid pouring to investigate this approach. The results show a performance improvement.
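The sketch below illustrates the kind of debugging loop described above: each subset of the skill library is trained and evaluated on the current subtask, the subtask is simplified when no subset reaches the target score, and complexity is raised back toward the original task once learning succeeds. This is a minimal illustration only; the function and parameter names (train_and_evaluate, skill_subsets, target_score) are hypothetical and not taken from the paper.

```python
from itertools import combinations

def skill_subsets(skills):
    """All non-empty subsets of the skill library, smallest first."""
    for r in range(1, len(skills) + 1):
        yield from combinations(skills, r)

def debug_by_complexity(train_and_evaluate, skills, complexities, target_score):
    """Walk down a ladder of subtask complexities until some skill subset
    reaches target_score, then walk back up toward the original task
    (curriculum-learning style), recording scores for every subset tested.

    train_and_evaluate(task, subset) is a user-supplied callable that trains
    model-based RL with the given skill subset and returns a scalar score.
    """
    level = len(complexities) - 1          # start from the original (hardest) task
    report = {}
    while 0 <= level < len(complexities):
        task = complexities[level]
        scores = {s: train_and_evaluate(task, s) for s in skill_subsets(skills)}
        report[task] = scores              # per-subset scores localize the issue
        if max(scores.values()) >= target_score:
            if level == len(complexities) - 1:
                return report              # original task solved: debugging done
            level += 1                     # success: raise the complexity again
        else:
            level -= 1                     # failure: simplify the subtask further
    return report                          # simplest subtask still fails
```

A usage example would supply, for instance, complexities = ["pour_fixed_cup", "pour_varied_cup", "pour_varied_liquid"] and a training routine for each skill subset; comparing the per-subset scores across complexity levels then indicates which skill or dynamics component is responsible for the performance issue.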