2022, Volume 13, Issue 2, pp. 361-366
First-order methods such as SGD and Adam are widely used to train neural networks. Second-order methods, by contrast, have been shown to achieve better performance and faster convergence by incorporating curvature information, despite their high computational cost. Whereas second-order methods typically determine the step size by line search, first-order methods achieve efficient learning through rules that adapt the step size. In this paper, we propose a new learning algorithm for training neural networks that combines first-order and second-order methods. We investigate the effectiveness of the proposed method when combined with the popular first-order methods SGD, Adagrad, and Adam, through experiments on image classification problems.
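As one illustration of the ingredients the abstract describes, the sketch below pairs a first-order update direction (the plain gradient, as in SGD) with a step size chosen by backtracking (Armijo) line search, the step-size strategy the abstract associates with second-order methods. This is a minimal sketch under assumed details, not the authors' algorithm: the quadratic loss, the `armijo_step` helper, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, not the paper's method: a first-order (SGD-style)
# descent direction whose step size is set by a backtracking line search.

def loss(w, X, y):
    """Mean squared error of a linear model (illustrative objective)."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    """Gradient of the mean squared error with respect to w."""
    r = X @ w - y
    return X.T @ r / len(y)

def armijo_step(w, d, g, X, y, alpha0=1.0, beta=0.5, c=1e-4):
    """Backtracking line search: shrink alpha until the Armijo
    sufficient-decrease condition holds along direction d."""
    alpha, f0 = alpha0, loss(w, X, y)
    while loss(w + alpha * d, X, y) > f0 + c * alpha * (g @ d):
        alpha *= beta
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

w = np.zeros(5)
for _ in range(50):
    g = grad(w, X, y)
    d = -g                          # first-order descent direction
    alpha = armijo_step(w, d, g, X, y)  # line-search step size
    w += alpha * d

print(f"final loss: {loss(w, X, y):.4f}")
```

In this toy setting the line search replaces a hand-tuned learning rate; a method in the spirit of the abstract would apply an analogous step-size rule to the update directions produced by SGD, Adagrad, or Adam.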