Transactions of the Society of Instrument and Control Engineers
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
A Fast Incremental Learning Algorithm for Radial Basis Function Networks
Keisuke Okamoto, Seiichi Ozawa, Shigeo Abe

2004, Vol. 40, No. 12, pp. 1227-1235

Abstract

When training samples are given incrementally, neural networks often suffer from catastrophic interference, which results in forgetting the input-output relationships acquired in the past. To avoid catastrophic interference, we have proposed the Resource Allocating Network with Long-Term Memory (RAN-LTM). In RAN-LTM, not only a new training sample but also memory items stored in long-term memory are used for training based on a gradient descent algorithm. However, gradient descent is generally slow and can easily fall into local minima. In this paper, to alleviate these problems, we introduce a linear regression approach into the learning of RAN-LTM, in which the centers are not trained but are selected incrementally based on output errors. In this approach, the regression is carried out not only over a training sample and memory items but also over pseudodata selected around the centers of hidden units according to the complexity of the approximated function. This selection reduces the total number of pseudodata at each learning step; as a result, fast incremental learning is realized in RAN-LTM. Since only memory items are stored, the proposed RAN-LTM does not require much memory capacity during incremental learning. This property is especially useful for small-scale systems. To verify these characteristics of RAN-LTM, we apply it to several function approximation problems, in which the approximation accuracy, learning time, and required memory capacity are investigated in comparison with several conventional models. Moreover, the growth in learning time and required memory capacity is investigated when the learning domain is extended over time. The experimental results verify that the proposed model learns quickly and accurately, and that it requires a rather small memory capacity as long as the learning domain is not too large.
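To make the two ideas in the abstract concrete, below is a minimal Python sketch of error-based center allocation combined with a least-squares (linear regression) weight update over the new sample and stored memory items. It assumes Gaussian basis functions with a single shared width; the class name, hyperparameters, and the plain least-squares step are illustrative assumptions rather than the paper's specification, and the pseudodata-selection step around hidden-unit centers is omitted for brevity.

import numpy as np

class RanLtmSketch:
    """Illustrative sketch only, not the paper's algorithm in full:
    centers are allocated from poorly approximated inputs (no gradient
    descent on centers), and output weights are refit by least squares
    over the new sample plus long-term-memory items."""

    def __init__(self, sigma=1.0, error_threshold=0.5):
        self.sigma = sigma                  # shared RBF width (assumed)
        self.error_threshold = error_threshold
        self.centers = []                   # hidden-unit centers
        self.weights = None                 # output-layer weights
        self.memory = []                    # long-term memory items (x, y)

    def _phi(self, X):
        # Gaussian activations of inputs X against the stored centers.
        C = np.asarray(self.centers)
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, X):
        X = np.atleast_2d(X)
        if not self.centers:
            return np.zeros(len(X))
        return self._phi(X) @ self.weights

    def learn_one(self, x, y):
        x = np.asarray(x, dtype=float)
        err = abs(y - self.predict(x[None, :])[0])
        if err > self.error_threshold or not self.centers:
            # Allocate a new hidden unit at the sample: centers are
            # selected where the output error is large, not trained.
            self.centers.append(x)
            self.memory.append((x, y))      # keep it as a memory item
        # Regression set: the new sample plus all memory items.  (The
        # paper additionally regresses over pseudodata chosen around
        # the centers by the local complexity of the approximated
        # function; that step is omitted in this sketch.)
        data = self.memory + [(x, y)]
        Xs = np.array([d[0] for d in data])
        ys = np.array([d[1] for d in data])
        Phi = self._phi(Xs)
        # Least-squares weight update instead of gradient descent.
        self.weights, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

# Usage on a toy 1-D problem (illustrative):
rng = np.random.default_rng(0)
net = RanLtmSketch(sigma=0.5, error_threshold=0.1)
for _ in range(200):
    x = rng.uniform(0.0, 2.0 * np.pi, size=1)
    net.learn_one(x, float(np.sin(x[0])))
print(net.predict(np.array([[0.5 * np.pi]])))  # approaches sin(pi/2) = 1

Because the weights are refit over stored memory items at every step, earlier input-output relationships keep constraining the solution, which is how this scheme sidesteps catastrophic interference without retraining on the full past data.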

© The Society of Instrument and Control Engineers