1995 Volume 38 Issue 4 Pages 686-692
A theoretical formulation of a new fast learning method based on back propagation is presented in this paper. In contrast to the existing back-propagation algorithm, which modifies only the connecting weights between units (i.e., neurons) of adjacent layers of the neural network, the present method computes the optimum slope of the sigmoid function for each unit together with the variation of the connecting weights. The effectiveness and versatility of the present method are verified by system identification of (a) linear and (b) nonlinear (Duffing-type and fluid-type) single-degree-of-freedom mass-spring dynamic models. In all three cases, the present method surpasses the existing method with a fixed-slope sigmoid function in both speed and accuracy.
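The core idea of training per-unit sigmoid slopes alongside the weights can be illustrated with a minimal sketch. The code below is an assumption-laden reconstruction, not the paper's formulation: it uses a one-hidden-layer network, mean-squared-error gradient descent, and a trainable gain `lam[j]` in each hidden unit's sigmoid, with all learning rates and network sizes chosen arbitrarily.

```python
import numpy as np

# Hedged sketch: each hidden unit j applies sigmoid(lam[j] * a_j), and the
# gains lam are updated by gradient descent together with the weights.
# This is an illustrative stand-in for the paper's method, not its exact rule.

rng = np.random.default_rng(0)

def sigmoid(x, lam):
    # Sigmoid with per-unit slope (gain) parameter
    return 1.0 / (1.0 + np.exp(-lam * x))

# Toy regression task: fit y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
lam = np.ones(n_hidden)                  # trainable per-unit sigmoid slopes
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

# Initial error, for comparison after training
mse0 = float(np.mean((sigmoid(X @ W1 + b1, lam) @ W2 + b2 - Y) ** 2))

eta = 0.05
for epoch in range(2000):
    # Forward pass
    a = X @ W1 + b1                      # hidden pre-activations
    h = sigmoid(a, lam)                  # hidden activations with per-unit gain
    y_hat = h @ W2 + b2                  # linear output layer
    err = y_hat - Y

    # Backward pass (mean squared error)
    dy = 2.0 * err / len(X)
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = dy @ W2.T
    s1 = h * (1.0 - h)
    da = dh * lam * s1                   # d sigmoid / d a   = lam * s * (1 - s)
    dlam = (dh * a * s1).sum(axis=0)     # d sigmoid / d lam = a * s * (1 - s)
    dW1 = X.T @ da
    db1 = da.sum(axis=0)

    # Simultaneous update of weights and slopes
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2
    lam -= eta * dlam

mse = float(np.mean((y_hat - Y) ** 2))
```

Because `lam` enters the gradient of every hidden unit, adapting it effectively rescales each unit's learning dynamics, which is one plausible reading of why a variable-slope sigmoid can accelerate convergence over a fixed-slope one.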