2002, Vol. 38, No. 9, pp. 792-799
When a neural network is trained incrementally, input-output relations learned earlier tend to be destroyed by the learning of new data. This phenomenon is often called interference. To suppress this interference efficiently, we propose an incremental learning model in which a Long-Term Memory (LTM) is introduced into the Resource Allocating Network (RAN) proposed by Platt. This memory stores useful training data (called LTM data) that are generated adaptively during the learning phase. When a new training datum is given, the proposed system retrieves several LTM data that are useful for suppressing the interference, and the retrieved LTM data are trained in RAN simultaneously with the new training datum. In the simulations, the proposed model is applied to various incremental learning problems to evaluate its function approximation accuracy and learning speed. The simulation results confirm that the proposed model attains good approximation accuracy at low computational cost.
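The learning scheme described above can be sketched in code. The following is a minimal, illustrative sketch only, not the paper's implementation: the class and parameter names (`RanLtm`, `eps`, `delta`, `k_retrieve`) are hypothetical, a single fixed RBF width is assumed, gradient updates are applied to the output weights only, and LTM data are taken (as one simple choice) to be the input-output pairs at which hidden units were allocated.

```python
import numpy as np

class RanLtm:
    """Illustrative sketch: RAN (Platt) with a Long-Term Memory (LTM).

    Assumptions not in the paper: shared fixed RBF width, LMS updates on
    output weights only, LTM data = pairs where hidden units were allocated.
    """

    def __init__(self, eps=0.3, delta=0.5, lr=0.05, k_retrieve=2):
        self.centers = []    # RBF centers (hidden units)
        self.weights = []    # output-layer weights
        self.width = 1.0     # shared RBF width (simplification)
        self.eps = eps       # novelty threshold on the prediction error
        self.delta = delta   # novelty threshold on the distance to centers
        self.lr = lr         # learning rate for gradient updates
        self.k = k_retrieve  # number of LTM data retrieved per new datum
        self.ltm = []        # Long-Term Memory: list of (x, y) pairs

    def predict(self, x):
        if not self.centers:
            return 0.0
        d2 = np.sum((np.array(self.centers) - x) ** 2, axis=1)
        phi = np.exp(-d2 / self.width ** 2)
        return float(np.dot(self.weights, phi))

    def _grad_step(self, x, y):
        # One LMS-style update of the output weights for a single datum.
        if not self.centers:
            return
        d2 = np.sum((np.array(self.centers) - x) ** 2, axis=1)
        phi = np.exp(-d2 / self.width ** 2)
        err = y - float(np.dot(self.weights, phi))
        self.weights = list(np.array(self.weights) + self.lr * err * phi)

    def train_one(self, x, y):
        x = np.asarray(x, dtype=float)
        pred = self.predict(x)
        dist = (min(np.linalg.norm(c - x) for c in self.centers)
                if self.centers else np.inf)
        if abs(y - pred) > self.eps and dist > self.delta:
            # Platt's allocation rule: the datum is novel, so add a
            # hidden unit at it; here we also keep it as an LTM datum.
            self.centers.append(x.copy())
            self.weights.append(y - pred)
            self.ltm.append((x.copy(), y))
        else:
            # Retrieve the k nearest LTM data and train them together
            # with the new datum to suppress interference.
            near = sorted(self.ltm,
                          key=lambda d: np.linalg.norm(d[0] - x))[:self.k]
            for xm, ym in [(x, y)] + near:
                self._grad_step(xm, ym)

# Illustrative use: learn sin(x) from data presented one region at a time.
model = RanLtm()
for x in [0.0, 1.0, 2.0, 3.0]:
    for _ in range(20):
        model.train_one([x], float(np.sin(x)))
```

Because earlier LTM data are replayed whenever a new datum is trained, input-output relations acquired in earlier regions are refreshed instead of being overwritten, which is the interference-suppression effect the abstract describes.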