Behaviormetrika
Online ISSN : 1349-6964
Print ISSN : 0385-7417
ISSN-L : 0385-7417
AN EFFICIENT ALGORITHM FOR FEED-FORWARD NEURAL NETWORK REGRESSION ANALYSIS
Shin-ichi Mayekawa

1995, Volume 22, Issue 1, pp. 67-90

Abstract

The feed-forward neural network model can be considered a very powerful nonlinear regression analytic tool. However, the popular back-propagation algorithm based on the steepest descent method converges very slowly, prohibiting everyday use of the neural network regression model. In this regard, a fast-converging algorithm for estimating the weights in feed-forward neural network models was developed using the alternating least squares (ALS), or conditional Gauss-Newton, method. In essence, the algorithm alternates the minimization of the residual sum of squares (RSS) with respect to the weights in each layer until the reduction of the RSS becomes negligible. With this approach, neither the calculation of a complex second-derivative matrix nor the inversion of a large matrix is necessary. In order to avoid inflation of the weight values, a ridge method and a quasi-Bayesian method were also investigated. The methods were evaluated on several problems and found to be very fast compared to the steepest descent method. With a fast-converging algorithm at hand, it is hoped that the statistical nature of the neural network model as a nonlinear regression model will be clearly revealed.
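The alternating scheme described above can be sketched for a single-hidden-layer network with a sigmoid hidden layer and a linear output. This is an illustrative NumPy reconstruction under my own assumptions (function names, network size, and the small ridge penalty are not from the paper): the output weights are linear in the model, so their conditional minimizer is an exact least-squares solve, while the hidden weights get one conditional Gauss-Newton step per sweep.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def als_fit(X, y, m=4, iters=50, lam=1e-6, seed=0):
    """Fit y ~ sigmoid(X @ W1) @ w2 by alternating least squares:
    (a) solve the output weights w2 exactly (the model is linear in w2),
    (b) take one conditional Gauss-Newton step on the hidden weights W1.
    lam is a small ridge penalty that keeps the weights from inflating
    (a stand-in for the ridge method mentioned in the abstract)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, m))
    w2 = np.zeros(m)
    for _ in range(iters):
        H = sigmoid(X @ W1)                       # hidden activations, (n, m)
        # (a) output layer: exact ridge least squares for w2
        w2 = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ y)
        # (b) hidden layer: linearize the residual around the current W1
        r = y - H @ w2
        S = H * (1.0 - H) * w2                    # sigma'(X W1) scaled by w2
        # Jacobian of predictions w.r.t. vec(W1): J[i, j*m+k] = X[i,j] * S[i,k]
        J = (X[:, :, None] * S[:, None, :]).reshape(n, d * m)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(d * m), J.T @ r)
        W1 = W1 + delta.reshape(d, m)
    H = sigmoid(X @ W1)
    rss = float(np.sum((y - H @ w2) ** 2))
    return W1, w2, rss
```

Note how neither full second derivatives nor one large joint inversion is needed: each sweep solves two small linear systems, one of size m and one of size d*m.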

© The Behaviormetric Society of Japan