Proceedings of the Annual Conference of the Institute of Systems, Control and Information Engineers
The 47th Annual Conference of the Institute of Systems, Control and Information Engineers
Incremental Learning in Dynamic Environments Using Neural Network with Long-term Memory
Kenji Tsumori, Seiichi Ozawa, Shigeo Abe
Pages 5506

Abstract
When the environment changes dynamically, knowledge that an agent acquired in one environment may become useless in future environments. Agents should therefore not only acquire new knowledge but also modify or delete old knowledge. However, such modification and deletion are not always efficient for learning, because knowledge acquired in the past can become useful again when the same environment reappears. To learn efficiently in this situation, agents should have a memory in which old knowledge can be stored. In this paper, we propose an agent architecture that consists of four modules: a resource allocating network (RAN), a long-term memory (LTM), an association buffer (A-Buffer), and an environmental change detector (ECD). To evaluate its adaptability in a class of dynamic environments, we apply this model to a simple problem in which several target functions to be approximated are switched in turn.
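
The abstract only names the four modules, so the following is a minimal sketch of how such an agent could be organized, assuming RBF-style unit allocation for the RAN, an error-jump heuristic for the ECD, and a nearest-fit recall rule over stored networks. All class names, thresholds, and the recall logic are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

class RAN:
    """Resource allocating network: adds an RBF unit when the error is large."""
    def __init__(self, error_threshold=0.5, width=0.5):
        self.centers, self.weights = [], []
        self.error_threshold, self.width = error_threshold, width

    def predict(self, x):
        if not self.centers:
            return 0.0
        acts = [np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]
        return float(np.dot(self.weights, acts))

    def train(self, x, y):
        err = y - self.predict(x)
        if abs(err) > self.error_threshold:
            # Allocate a new hidden unit centered at the poorly fitted input.
            self.centers.append(np.asarray(x, dtype=float))
            self.weights.append(err)
        return err


class EnvironmentalChangeDetector:
    """Flags an environmental change when the recent prediction error jumps."""
    def __init__(self, window=20, jump_factor=3.0):
        self.errors, self.window, self.jump_factor = [], window, jump_factor

    def update(self, err):
        self.errors.append(abs(err))
        if len(self.errors) < 2 * self.window:
            return False
        recent = np.mean(self.errors[-self.window:])
        past = np.mean(self.errors[-2 * self.window:-self.window]) + 1e-8
        return recent > self.jump_factor * past


class Agent:
    """RAN + long-term memory (LTM) + association buffer (A-Buffer) + ECD."""
    def __init__(self):
        self.ran, self.ecd = RAN(), EnvironmentalChangeDetector()
        self.ltm = []          # snapshots of previously acquired networks
        self.a_buffer = []     # recent (x, y) pairs used to match stored knowledge

    def step(self, x, y):
        err = self.ran.train(x, y)
        self.a_buffer.append((x, y))
        if self.ecd.update(err):
            # Store the current network, then try to recall a stored network
            # that already fits the samples buffered from the new environment.
            self.ltm.append(self.ran)
            self.ran = self._recall() or RAN()
            self.a_buffer = []
            self.ecd = EnvironmentalChangeDetector()

    def _recall(self):
        best, best_err = None, np.inf
        for net in self.ltm:
            e = np.mean([abs(y - net.predict(x)) for x, y in self.a_buffer[-10:]])
            if e < best_err:
                best, best_err = net, e
        return best if best_err < 0.5 else None
```

In this reading, the ECD decides when to consult the LTM, and the A-Buffer supplies the recent samples used to decide which stored network, if any, matches the reappearing environment; the paper itself evaluates this idea on target functions that are swapped in turn.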
© 2003 The Institute of Systems, Control and Information Engineers