2012 Volume 16 Issue 5 Pages 451-458
Various techniques based on the gradient-descent method have been studied as training algorithms for neural networks. Neural-network training poses a data-driven optimization problem in which the objective function is a sum of loss terms over the set of data to be modeled. For a given training data set, a gradient-based algorithm operates in one of two modes: online (stochastic), in which the weights are updated after each sample, or batch, in which they are updated once per pass over the entire set. In this paper, a robust training algorithm that combines the online and batch modes is proposed. The validity of the proposed algorithm is demonstrated through computer simulations, in comparison with previous quasi-Newton-based training methods.
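The two modes mentioned above can be illustrated with a minimal sketch. The paper's own hybrid algorithm is not detailed in the abstract, so the following only contrasts a batch gradient step (one update from the gradient summed over all samples) with an online pass (one update per sample), using a toy least-squares loss; all function names and the toy problem are illustrative assumptions, not the authors' method.

```python
import numpy as np

def batch_gradient_step(w, X, y, lr):
    # Batch mode: a single update from the gradient of the
    # mean squared error summed over the whole data set.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def online_gradient_pass(w, X, y, lr):
    # Online (stochastic) mode: one update per sample,
    # applied in sequence over the data set.
    for xi, yi in zip(X, y):
        grad = xi * (xi @ w - yi)  # gradient on a single sample
        w = w - lr * grad
    return w

# Toy noiseless linear problem: y = X @ w_true (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w_batch = np.zeros(3)
w_online = np.zeros(3)
for _ in range(200):
    w_batch = batch_gradient_step(w_batch, X, y, lr=0.1)
    w_online = online_gradient_pass(w_online, X, y, lr=0.01)

print(np.allclose(w_batch, w_true, atol=1e-3))   # batch estimate near w_true
print(np.allclose(w_online, w_true, atol=1e-3))  # online estimate near w_true
```

On this simple problem both modes recover the true weights; the trade-off the abstract alludes to is that online updates are cheaper and noisier per step, while batch updates are smoother but require a full pass per step.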