Abstract
This paper summarizes intriguing results obtained
by our recently developed stagewise backpropagation algorithm, which evaluates the Hessian matrix of a given objective function explicitly in block-arrow matrix form. Its computational organization facilitates the exploitation of the layered structure embedded in a multi-stage neural-network model. Notably, in nonlinear least squares learning, our stagewise procedure evaluates the Hessian matrix of the squared-error function at essentially the same cost as the Gauss-Newton Hessian, faster than standard rank-update methods; this computational convenience is of immense practical significance.
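For context, the standard decomposition underlying this cost comparison is textbook background rather than part of the algorithm itself; the symbols $E$, $r_i$, $\mathbf{w}$, and $J$ are introduced here only for illustration. For the squared-error objective $E(\mathbf{w}) = \tfrac{1}{2}\sum_i r_i(\mathbf{w})^2$ with residual Jacobian $J$,

\[
\nabla^2 E(\mathbf{w}) \;=\; J^{\top} J \;+\; \sum_i r_i(\mathbf{w})\,\nabla^2 r_i(\mathbf{w}),
\]

where the Gauss-Newton Hessian retains only the first term $J^{\top} J$. Obtaining the full left-hand side at essentially the cost of the first term is what removes the usual trade-off between the exact Hessian and its Gauss-Newton approximation.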