計測自動制御学会論文集 (Transactions of the Society of Instrument and Control Engineers)
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
長期記憶を導入した階層型ニューラルネットの追加学習アルゴリズム (An Incremental Learning Algorithm for Multilayer Neural Networks with Long-Term Memory)
小林 正宜, 小澤 誠一, 阿部 重夫

Vol. 38 (2002), No. 9, pp. 792-799

Abstract

When neural networks are trained incrementally, previously learned input-output relations tend to be destroyed by the learning of new data. This phenomenon is often called interference. To suppress the interference efficiently, we propose an incremental learning model in which a Long-Term Memory (LTM) is introduced into the Resource Allocating Network (RAN) proposed by Platt. This memory stores useful training data (called LTM data) that are generated adaptively in the learning phase. When a new training datum is given, the proposed system retrieves several LTM data that are useful for suppressing the interference, and these retrieved LTM data are trained in RAN simultaneously with the new training datum. In the simulations, the proposed model is applied to various incremental learning problems to evaluate its function approximation accuracy and learning speed. The simulation results confirm that the proposed model attains good approximation accuracy at a small computational cost.
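The abstract does not give the concrete allocation, storage, or retrieval rules, so the following is only a minimal Python sketch of the learning loop it describes: a RAN-style RBF network that keeps an LTM of training pairs and, for each non-novel datum, co-trains a few retrieved LTM pairs with the new one. The class and method names (RANWithLTM, train_one), the rule of storing an LTM pair whenever a hidden unit is allocated, and the nearest-neighbour retrieval are assumptions for illustration, not the authors' actual criteria.

import numpy as np


class RANWithLTM:
    """Illustrative RAN-style RBF network with a Long-Term Memory of training pairs."""

    def __init__(self, eps=0.5, delta=0.1, kappa=0.8, lr=0.02, n_retrieve=3):
        self.eps = eps                # distance threshold for allocating a new unit
        self.delta = delta            # error threshold for allocating a new unit
        self.kappa = kappa            # width factor for newly allocated units
        self.lr = lr                  # learning rate of the LMS update
        self.n_retrieve = n_retrieve  # number of LTM pairs co-trained per step (assumed)
        self.centers, self.widths, self.weights = [], [], []
        self.bias = 0.0
        self.ltm = []                 # long-term memory: list of (x, t) pairs

    def _phi(self, x):
        # RBF activations of all hidden units for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / s ** 2)
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            return self.bias
        return float(np.dot(self.weights, self._phi(x)) + self.bias)

    def _lms_step(self, x, t):
        # One LMS update of the output weights and bias for a single pair (x, t)
        err = t - self.predict(x)
        if self.centers:
            phi = self._phi(x)
            for k, p in enumerate(phi):
                self.weights[k] += self.lr * err * p
        self.bias += self.lr * err

    def _retrieve_ltm(self, x):
        # Retrieve the LTM pairs closest to x in input space (assumed retrieval rule)
        ranked = sorted(self.ltm, key=lambda d: np.linalg.norm(x - d[0]))
        return ranked[: self.n_retrieve]

    def train_one(self, x, t):
        x = np.asarray(x, dtype=float)
        err = abs(t - self.predict(x))
        d_near = (min(np.linalg.norm(x - c) for c in self.centers)
                  if self.centers else np.inf)

        if err > self.delta and d_near > self.eps:
            # Novel datum: allocate a new hidden unit (Platt's RAN criterion)
            # and keep the pair as an LTM datum (storage rule assumed here).
            self.weights.append(t - self.predict(x))
            self.centers.append(x.copy())
            self.widths.append(self.kappa * d_near if np.isfinite(d_near) else 1.0)
            self.ltm.append((x.copy(), t))
        else:
            # Otherwise co-train the new datum with retrieved LTM data so that
            # earlier input-output relations are rehearsed, suppressing interference.
            for xi, ti in [(x, t)] + self._retrieve_ltm(x):
                self._lms_step(xi, ti)


# Usage: learn y = sin(x) incrementally, one input region at a time.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = RANWithLTM()
    for lo, hi in [(0.0, 3.0), (3.0, 6.0)]:          # two learning stages
        for _ in range(500):
            xi = rng.uniform(lo, hi, size=1)
            net.train_one(xi, float(np.sin(xi[0])))
    xs = np.linspace(0.0, 6.0, 7)
    print([round(net.predict(np.array([v])) - np.sin(v), 2) for v in xs])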

© The Society of Instrument and Control Engineers (SICE)