Abstract
Back-propagation has been used successfully as a learning algorithm that adjusts the synaptic weights of a multilayer neural network so as to realize desired input-output relations. The organized network is required not only to reproduce the learned input-output relations but also to possess additional abilities, such as generalization to inputs that the network has not yet learned (non-training data). These abilities are generally in conflict, however, and balancing them remains an open issue in neural network learning.
In this paper, we formulate the learning problem with generalization as an inequality-constrained optimization problem: the magnitude of the synaptic weights is minimized, to obtain generalization ability for non-training data, subject to the constraints that the output errors in response to the training data remain below given permissible levels. Balancing the two abilities mentioned above is achieved by assigning the parameters of the permissible levels. The learning algorithm is then defined as an iterative search for the optimal solution of the formulated problem, starting from an infeasible initial state of the synaptic weights, i.e., a state in which the output-error constraints are not yet satisfied. As the learning algorithm, we adopt the augmented Lagrangian function method, which is known to be an effective constrained optimization method. The proposed method is tested on simple pattern recognition problems in order to verify the generalization ability of the network organized by our learning algorithm.
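As a rough sketch of this type of formulation (the exact objective and error measures in the paper may differ), assume hypothetical notation in which $w$ denotes the synaptic weight vector, $E_p(w)$ the output error for training pattern $p$, and $\varepsilon_p$ its permissible level:
\[
\min_{w} \; \tfrac{1}{2}\|w\|^2
\quad \text{subject to} \quad
E_p(w) \le \varepsilon_p, \qquad p = 1,\dots,P .
\]
A standard augmented Lagrangian for such inequality constraints (Rockafellar's form, with multipliers $\lambda_p \ge 0$ and penalty parameter $r > 0$) is
\[
L_r(w,\lambda) = \tfrac{1}{2}\|w\|^2
+ \frac{1}{2r}\sum_{p=1}^{P}\Bigl(\max\bigl\{0,\; \lambda_p + r\,(E_p(w)-\varepsilon_p)\bigr\}^2 - \lambda_p^2\Bigr),
\]
which is minimized over $w$ at each iteration, after which the multipliers are updated as $\lambda_p \leftarrow \max\{0,\; \lambda_p + r\,(E_p(w)-\varepsilon_p)\}$. Starting from weights that violate the error constraints, such an iteration drives the network toward constraint satisfaction while keeping the weight magnitude small.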