Host: The Japanese Society for Artificial Intelligence
Name : The 33rd Annual Conference of the Japanese Society for Artificial Intelligence, 2019
Number : 33
Location : [in Japanese]
Date : June 04, 2019 - June 07, 2019
Gradient descent, which is used to search for minima of complex (high-dimensional) functions, is widely used in deep neural networks to minimize the total loss. The representative methods, stochastic gradient descent (SGD) and Adam (Kingma & Ba, 2014), are the dominant ways to train neural networks today. However, sensitive hyper-parameters such as the learning rate affect the descent speed and even whether the optimization converges. In previous work, these hyper-parameters are often fixed or tuned by hand based on feedback and experience. I propose using reinforcement learning (RL) to optimize the gradient descent process, taking feedback from the neural network as input and producing hyper-parameter actions as output to control these hyper-parameters. Experiments with the RL-based optimizer, from both fixed and random start points, show better performance than standard optimizers set with default hyper-parameters.
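As a rough illustration of this idea (not the implementation described in the paper), the sketch below lets a tabular Q-learning agent control the learning rate of plain gradient descent on a toy quadratic loss. The state (whether the loss improved, stayed flat, or got worse), the discrete actions (multiply the learning rate by 0.5, 1.0, or 2.0), and the reward (the decrease in loss) are all illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch: an RL agent observes feedback from the optimization process
# (the change in loss) and adjusts the learning rate of gradient descent.
# Toy loss, state discretization, and reward are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # Stand-in for a network's total loss: a simple quadratic bowl.
    return float(np.sum(x ** 2))

def grad(x):
    return 2.0 * x

# Discrete actions: multiply the current learning rate by one of these factors.
ACTIONS = [0.5, 1.0, 2.0]

def state_of(prev_loss, cur_loss):
    # Coarse feedback signal: did the loss go down, stay flat, or go up?
    if cur_loss < prev_loss * 0.999:
        return 0  # improving
    if cur_loss <= prev_loss * 1.001:
        return 1  # flat
    return 2      # getting worse

# Tabular Q-learning over (state, action) with epsilon-greedy exploration.
Q = np.zeros((3, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(200):
    x = rng.normal(size=5)           # random start point
    lr = 0.05                        # initial learning rate
    prev_loss = loss(x)
    state = 1
    for step in range(50):
        # Choose an action: explore with probability eps, otherwise greedy.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
        lr = float(np.clip(lr * ACTIONS[a], 1e-5, 1.0))
        x = x - lr * grad(x)         # one gradient-descent step with the RL-chosen lr
        cur_loss = loss(x)
        reward = prev_loss - cur_loss  # reward: improvement in loss (assumption)
        next_state = state_of(prev_loss, cur_loss)
        # Standard Q-learning update.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state, prev_loss = next_state, cur_loss

print("Learned Q-table (rows: improving/flat/worse, cols: x0.5/x1.0/x2.0):")
print(np.round(Q, 3))
```

In this toy setting the agent tends to learn to shrink the learning rate once the loss stops improving; replacing the quadratic with an actual network loss and the tabular agent with a policy network would bring the sketch closer to the setting the abstract describes.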