Transactions of the Institute of Systems, Control and Information Engineers
Online ISSN : 2185-811X
Print ISSN : 1342-5668
ISSN-L : 1342-5668
Paper
Architecture for Hierarchical System with Each Learning Structure
Ayafumi Kikuya, Tomonori Sadamoto

2022 Volume 35 Issue 12 Pages 289-299

Abstract

In this paper, we propose an architecture for realizing distributed reinforcement learning of distributed controllers for a class of unknown hierarchical systems in which homogeneous subsystems are interconnected through a complete graph. Each of these controllers consists of two sub-controllers, one for the average dynamics and one for the difference dynamics of the system. First, we show that the optimal sub-controllers can be trained individually by a reinforcement learning (RL) method using average/difference data. Owing to the smaller scale of these data, the learning time of the proposed method can be drastically reduced compared to existing RL methods. However, computing the average data requires all-to-all communication among subsystems, which is undesirable in terms of communication cost and security. Hence, by exploiting a distributed consensus observer, we propose an architecture that enables distributed optimal controllers to be learned in a distributed manner. The control performance of the trained controller is shown to be ideally optimal. Moreover, the proposed architecture is completely scalable, i.e., its computational cost is independent of the number of subsystems. The effectiveness of the proposed method is shown through numerical simulations.
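To illustrate the two ingredients mentioned above, the sketch below shows (i) a consensus-based estimation of the network-average state that avoids all-to-all communication, and (ii) the split of each local state into an average part and a difference part, on which the two sub-controllers would be trained. This is a minimal illustration under simplifying assumptions (scalar states, a static ring communication graph, a basic synchronous consensus update); it is not the paper's actual observer or learning algorithm, and the names `consensus_step` and `decompose` are hypothetical.

```python
import numpy as np

def consensus_step(z, neighbours, eps=0.3):
    """One synchronous consensus update: z_i <- z_i + eps * sum_j (z_j - z_i).

    Each subsystem exchanges values only with its communication neighbours,
    so no all-to-all communication is required.
    """
    z_new = z.copy()
    for i, nbrs in enumerate(neighbours):
        z_new[i] = z[i] + eps * sum(z[j] - z[i] for j in nbrs)
    return z_new

def decompose(x, z):
    """Split each local state into an (estimated) average part and a difference part."""
    return z, x - z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 6
    x = rng.normal(size=N)                                        # local scalar subsystem states
    neighbours = [[(i - 1) % N, (i + 1) % N] for i in range(N)]   # ring graph (assumption)

    z = x.copy()                                                  # initialise estimates with local states
    for _ in range(200):
        z = consensus_step(z, neighbours, eps=0.3)

    avg_part, diff_part = decompose(x, z)
    print("true average      :", x.mean())
    print("consensus estimate:", z)                               # each entry converges to the true average
    print("difference part   :", diff_part)
```

For static initial values on a connected graph, this update converges so that every local estimate approaches the true average; the resulting average and difference signals are the kind of reduced-scale data to which the two sub-controllers' RL updates could then be applied.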

© 2022 The Institute of Systems, Control and Information Engineers