1996 Volume 9 Issue 5 Pages 201-209
The back-propagation method is well known as a supervised learning rule for neural networks.
In this paper, a new learning rule is proposed in which the output error vector is driven to zero by correcting two kinds of quantities: the weight vectors (matrix) of a layer and the input vector of that layer. The corrected input vector serves as a tentative teacher for the following layer. In this way, the output error is propagated backward and is partly absorbed by each weight matrix.
A computational method is also presented for the matrix inversion required by the proposed rule, and the nonsingularity of the matrix is discussed.
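The abstract does not give the exact correction formulas, but the idea of splitting a layer's output error between a weight correction and an input correction (the latter becoming a tentative teacher) can be sketched for a single linear layer. The split ratio `alpha`, the use of the Moore-Penrose pseudoinverse, and the function name `correct_layer` are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def correct_layer(W, x, t, alpha=0.5):
    """Illustrative sketch: split the output error of a linear layer
    y = W @ x between a weight correction and an input correction.
    (Assumed formulation; the paper's exact rule is not in the abstract.)

    alpha sets the fraction of the error absorbed by the weights;
    the remainder defines a corrected input that can act as a
    tentative teacher for the preceding layer in a backward pass."""
    y = W @ x
    e = t - y
    # Weight correction absorbing a fraction alpha of the error:
    # dW @ x = alpha * e, solved by the minimum-norm (rank-1) update.
    dW = np.outer(alpha * e, x) / (x @ x)
    W_new = W + dW
    # Input correction absorbing the remaining error:
    # W_new @ (x + dx) = t, minimum-norm dx via the pseudoinverse.
    # This requires W_new to have full row rank, which is where a
    # nonsingularity condition like the one the paper discusses enters.
    dx = np.linalg.pinv(W_new) @ (t - W_new @ x)
    x_teacher = x + dx
    return W_new, x_teacher
```

After the call, `W_new @ x_teacher` reproduces the target exactly, so the corrected input can stand in as a teacher signal one layer back.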
Simulation results for the Exclusive-OR problem show the effectiveness of the proposed method.