Abstract
First, asynchronous and synchronous types of interconnected neural networks with discrete state transitions are proposed for solving continuous optimization problems with convex quadratic objective functions.
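The abstract does not give the network equations; the following minimal Python sketch illustrates one plausible reading, in which asynchronous discrete-time state transitions perform coordinate-wise minimization of a convex quadratic energy E(x) = (1/2)x'Qx + c'x. The function name async_minimize and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def async_minimize(Q, c, x0, sweeps=100):
    """Asynchronously update one state at a time so that the quadratic
    energy E(x) = 0.5*x@Q@x + c@x never increases (Q assumed symmetric
    positive definite)."""
    x = x0.astype(float).copy()
    n = len(x)
    for _ in range(sweeps):
        for i in np.random.permutation(n):
            # coordinate-wise minimizer of E with all other states held fixed
            x[i] = -(c[i] + Q[i] @ x - Q[i, i] * x[i]) / Q[i, i]
    return x

# usage: a small convex quadratic; the result approximates the solution of Q x = -c
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(async_minimize(Q, c, np.zeros(2)))  # ~ [1/11, 7/11]
```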
Next, a computing method is proposed to apply these neural networks to general nonlinear objective functions. The method adopts the idea of the quasi-Newton method: the Hessian matrix of the function to be minimized is approximated by a symmetric positive-definite matrix, and a subsidiary quadratic programming problem having this matrix as its quadratic coefficient matrix is solved by the proposed neural network. After a stationary solution is obtained, the approximating matrix is updated to a new appropriate matrix by the quasi-Newton formula; this updating procedure is equivalent to adjusting the connective coefficients between neurons. Furthermore, as a revision of the above method, the “Neural Pseudo-Quasi-Newton Method” is presented, in which the matrix is updated after a number of state transitions equal to the number of neurons.
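Continuing the sketch above, the loop below illustrates the described scheme under the assumption that the unspecified quasi-Newton formula is the widely used BFGS update, which preserves symmetry and positive definiteness of the approximation B. The name neural_quasi_newton, the iteration count, and the curvature safeguard are hypothetical details, not the paper's.

```python
def neural_quasi_newton(grad, x0, iters=50):
    """Sketch: the Hessian is approximated by a symmetric positive-definite
    matrix B; each subsidiary QP min_d 0.5*d@B@d + g@d is solved by the
    interconnected network (async_minimize above), then B is updated by a
    quasi-Newton formula (here: BFGS) -- the abstract's 'adjustment of the
    connective coefficients between neurons'."""
    x, n = x0.copy(), len(x0)
    B = np.eye(n)
    g = grad(x)
    for _ in range(iters):
        d = async_minimize(B, g, np.zeros(n))   # QP solved by the network
        x_new = x + d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                       # keep B positive definite
            B = B - np.outer(B @ s, B @ s) / (s @ B @ s) \
                  + np.outer(y, y) / (s @ y)
        x, g = x_new, g_new
    return x
```

In the pseudo-quasi-Newton revision, B would instead be updated after every n state transitions of the interconnected network, without waiting for the subsidiary QP to reach a stationary solution.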
Lastly, the neural pseudo-quasi-Newton method is applied to the learning problem of a three-layered neural network. This is the learning of three-layered neural networks by means of interconnected neural networks, i.e., “the learning of neural networks by neural networks.”
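As a rough illustration of this final application, the sketch below casts the weights of a three-layered network as the vector to be minimized; feeding mlp_error to a method like neural_quasi_newton above is the sense in which one network learns another. The tanh activations, layer sizes, XOR task, and all names are assumptions made for the sketch.

```python
def mlp_error(w, X, T, n_in, n_hid, n_out):
    """Squared error of a three-layered (one-hidden-layer) network whose
    weights are flattened into the vector w -- the nonlinear function the
    interconnected network is asked to minimize."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    H = np.tanh(X @ W1)          # hidden-layer outputs
    Y = np.tanh(H @ W2)          # output-layer outputs
    return 0.5 * np.sum((Y - T) ** 2)

def num_grad(E, w, eps=1e-6):
    """Finite-difference gradient, so the sketch needs no backpropagation."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (E(w + e) - E(w - e)) / (2 * eps)
    return g

# hypothetical usage: learn XOR with a 2-2-1 topology
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
E = lambda w: mlp_error(w, X, T, 2, 2, 1)
w0 = 0.1 * np.random.randn(2 * 2 + 2 * 1)
w_star = neural_quasi_newton(lambda w: num_grad(E, w), w0)
```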