IIEEJ Transactions on Image Electronics and Visual Computing
Online ISSN : 2188-1901
Print ISSN : 2188-1898
ISSN-L : 2188-191X
Contributed Papers
Adaptive Selection of Auxiliary Tasks Using Deep Reinforcement Learning for Video Game Strategy
Hidenori ITAYA, Tsubasa HIRAKAWA, Takayoshi YAMASHITA, Hironobu FUJIYOSHI

2024 Volume 12 Issue 1 Pages 23-31

Abstract

Multitask learning can be utilized to efficiently acquire common factors and useful features shared among several different tasks. Because it solves related tasks with a single model, this learning method can improve model performance and has been applied in various fields. One type of multitask learning utilizes auxiliary tasks: by learning auxiliary tasks simultaneously with the target task, it improves performance on the target task. In the video game strategy task, unsupervised reinforcement and auxiliary learning (UNREAL) achieved high performance in a maze game by introducing auxiliary tasks. However, in this method, the auxiliary task must be appropriate for the target task, which is very difficult to determine in advance because the most effective auxiliary task changes dynamically in accordance with the learning status of the target task. We therefore propose an adaptive selection mechanism for auxiliary tasks, called auxiliary selection, based on deep reinforcement learning. We applied our method to UNREAL and experimentally confirmed its effectiveness on a variety of video games.
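To make the idea of adaptive auxiliary-task selection concrete, the following is a minimal illustrative sketch, not the paper's actual method: it frames the selection problem as a simple epsilon-greedy bandit whose reward is the observed improvement of the target task after training with a chosen auxiliary task. The class name, parameters, and the use of UNREAL's auxiliary-task names (pixel control, reward prediction, value function replay) are assumptions made for illustration; the paper itself selects tasks with a deep reinforcement learning agent.

```python
import random

# Hypothetical sketch: auxiliary-task selection as an epsilon-greedy bandit.
# The value of each auxiliary task estimates how much it currently helps the
# target task, which can drift as training progresses.
class AuxiliarySelector:
    def __init__(self, aux_tasks, epsilon=0.1, lr=0.1):
        self.aux_tasks = list(aux_tasks)
        self.epsilon = epsilon  # exploration rate
        self.lr = lr            # step size for value updates
        self.values = {t: 0.0 for t in self.aux_tasks}  # estimated usefulness

    def select(self):
        # Usually pick the currently most useful auxiliary task, but
        # occasionally explore, since usefulness changes during training.
        if random.random() < self.epsilon:
            return random.choice(self.aux_tasks)
        return max(self.aux_tasks, key=self.values.get)

    def update(self, task, target_improvement):
        # Move the value estimate toward the observed improvement of the
        # target task (e.g., change in episode return) after using `task`.
        self.values[task] += self.lr * (target_improvement - self.values[task])

# Example with UNREAL-style auxiliary task names (illustrative only).
selector = AuxiliarySelector(
    ["pixel_control", "reward_prediction", "value_replay"], epsilon=0.0)
selector.update("reward_prediction", 1.0)
chosen = selector.select()  # picks the task with the highest estimated value
```

With `epsilon=0.0` the selector is purely greedy; in practice a nonzero exploration rate lets it track the dynamically changing usefulness of each auxiliary task that the abstract describes.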

© 2024 The Institute of Image Electronics Engineers of Japan