Transactions of the Society of Instrument and Control Engineers
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
Incremental Learning Algorithm for Feedforward Neural Network with Long-Term Memory
Masataka KOBAYASHI, Seiichi OZAWA, Shigeo ABE

2002 Volume 38 Issue 9 Pages 792-799

Abstract

When neural networks are trained incrementally, input-output relations acquired earlier tend to be destroyed by the learning of new data. This phenomenon is often called interference. To suppress this interference efficiently, we propose an incremental learning model in which Long-Term Memory (LTM) is introduced into the Resource Allocating Network (RAN) proposed by Platt. This memory stores useful training data (called LTM data) that are generated adaptively during the learning phase. When a new training datum is given, the proposed system retrieves several LTM data that are useful for suppressing the interference, and these retrieved LTM data are trained in RAN simultaneously with the new training datum. In the simulations, the proposed model is applied to various incremental learning problems to evaluate its function approximation accuracy and learning speed. The simulation results confirm that the proposed model attains good approximation accuracy at small computational cost.
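The mechanism described above, allocating hidden units for novel inputs, keeping representative pairs as LTM data, and rehearsing retrieved LTM data together with each new datum, can be illustrated with a rough sketch. The class below is a hypothetical NumPy illustration, not the paper's implementation: the thresholds, Gaussian width, nearest-neighbour retrieval rule, and the choice to create an LTM datum whenever a hidden unit is allocated are assumptions made only for this example.

import numpy as np

class RanLtmSketch:
    """Minimal sketch of incremental RBF learning with a long-term memory
    of stored training pairs rehearsed alongside each new datum.
    All hyperparameters and rules here are illustrative assumptions."""

    def __init__(self, dist_thresh=0.5, err_thresh=0.1,
                 lr=0.05, width=0.5, k_retrieve=3):
        self.centers = []   # RBF centers
        self.weights = []   # output weights (scalar output assumed)
        self.ltm = []       # long-term memory: list of (x, y) pairs
        self.dist_thresh, self.err_thresh = dist_thresh, err_thresh
        self.lr, self.width, self.k = lr, width, k_retrieve

    def _phi(self, x):
        # Gaussian RBF activations for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return float(np.dot(self.weights, self._phi(x))) if self.centers else 0.0

    def _lms_step(self, x, y):
        # LMS-style update of the output weights on one training pair
        phi = self._phi(x)
        err = y - float(np.dot(self.weights, phi))
        self.weights = [w + self.lr * err * p for w, p in zip(self.weights, phi)]

    def learn(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        novel = (not self.centers or
                 min(np.linalg.norm(x - c) for c in self.centers) > self.dist_thresh)
        if novel and abs(err) > self.err_thresh:
            # Novel and poorly predicted: allocate a hidden unit and keep
            # the pair as an LTM datum for later rehearsal.
            self.centers.append(x)
            self.weights.append(err)
            self.ltm.append((x, float(y)))
        else:
            # Retrieve the nearest LTM data and train them together with
            # the new datum to suppress interference with old mappings.
            recalled = sorted(self.ltm,
                              key=lambda p: np.linalg.norm(x - p[0]))[:self.k]
            for xi, yi in [(x, float(y))] + recalled:
                self._lms_step(xi, yi)

Such a model would be driven one pair at a time, e.g. calling net.learn(x_t, y_t) for each incoming datum and net.predict(x) for evaluation; rehearsing only a few nearby LTM data per update is what keeps the computational cost small.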

© The Society of Instrument and Control Engineers (SICE)