Abstract
This paper presents an interpretation of the processes in the hidden and output layers of a feedforward neural network trained by the backpropagation algorithm. We consider a network applied to a class discrimination task. Based on this interpretation, we propose a new method for determining the initial values of the backpropagation algorithm. The basic idea is as follows: the initial weight vectors of the hidden layer are determined by linear discriminant analysis on the original input data, and those of the output layer are set equal to the coefficients of the linear discriminant function derived from the hidden-layer outputs. The proposed method significantly improves the rate of learning, as verified by handwritten numeral recognition experiments.
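
The following is a minimal sketch of the initialization idea described above, not the authors' exact procedure: hidden-layer weights are taken from a linear discriminant analysis of the raw inputs, and output-layer weights from a second discriminant analysis of the resulting hidden activations. It assumes NumPy and scikit-learn are available; the function name `init_weights_with_lda` and the use of a sigmoid activation are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def init_weights_with_lda(X, y):
    """Return (W_hidden, b_hidden, W_output, b_output) initialized via LDA.

    X : (n_samples, n_features) input data
    y : (n_samples,) integer class labels
    """
    # Hidden layer: discriminant directions obtained from LDA on the raw inputs.
    lda_in = LinearDiscriminantAnalysis()
    lda_in.fit(X, y)
    W_hidden = lda_in.coef_        # one weight vector per discriminant
    b_hidden = lda_in.intercept_

    # Hidden-layer outputs for the training data.
    H = sigmoid(X @ W_hidden.T + b_hidden)

    # Output layer: coefficients of an LDA fitted on the hidden outputs.
    lda_hid = LinearDiscriminantAnalysis()
    lda_hid.fit(H, y)
    W_output = lda_hid.coef_
    b_output = lda_hid.intercept_
    return W_hidden, b_hidden, W_output, b_output
```

Backpropagation would then start from these weights instead of random values, which is the source of the claimed improvement in learning rate.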