Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
The 33rd Annual Conference (2019)
Session ID: 2H4-E-2-04

Gradient Descent Optimization by Reinforcement Learning
*Zhu Yingda, Hayashi Teruaki, Ohsawa Yukio
Conference proceedings / abstracts: Free access

Abstract

Gradient descent, which searches for the global minimum of a complex (high-dimensional) function, is widely used in deep neural networks to minimize the total loss. The representative methods, stochastic gradient descent (SGD) and Adam (Kingma & Ba, 2014), dominate neural network training today. However, sensitive hyper-parameters such as the learning rate affect the descent speed and even convergence. In previous work, these hyper-parameters are often fixed or set by feedback and experience. We propose using reinforcement learning (RL) to optimize the gradient descent process, taking feedback from the neural network as input and emitting hyper-parameter actions as output to control these hyper-parameters. Experiments with the RL-based optimizer, starting from both fixed and random initial points, show better performance than standard optimizers with default hyper-parameters.
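
To make the idea concrete, the sketch below shows one minimal way such an RL-based optimizer could be wired up; it is not the authors' implementation. A tabular Q-learning agent observes a coarse training signal (whether the loss improved) and chooses a learning-rate multiplier before each gradient step. The test function, state encoding, action set, and reward are all illustrative assumptions made for this example.

```python
import numpy as np

def loss(x):
    # Ill-conditioned quadratic as a stand-in for a hard-to-optimize surface.
    return 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)

def grad(x):
    return np.array([x[0], 100.0 * x[1]])

ACTIONS = np.array([0.5, 1.0, 2.0])   # multiply the current learning rate
N_STATES = 2                          # 0: loss decreased last step, 1: it did not
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1     # Q-learning hyper-parameters (assumed values)
rng = np.random.default_rng(0)

x = np.array([5.0, 3.0])
lr, state, prev_loss = 1e-3, 0, loss(x)
for step in range(500):
    # Epsilon-greedy choice of the learning-rate action.
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    lr = float(np.clip(lr * ACTIONS[a], 1e-4, 1.5e-2))   # keep lr in a stable range
    x = x - lr * grad(x)

    new_loss = loss(x)
    reward = prev_loss - new_loss                 # reward: improvement in the loss
    next_state = 0 if new_loss < prev_loss else 1
    # Standard Q-learning update on the (state, action) value.
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state, prev_loss = next_state, new_loss

print("final loss:", loss(x), "final learning rate:", lr)
```

In this toy setting the agent quickly learns to favor larger learning-rate multipliers while the loss keeps improving and to back off otherwise; the abstract's proposal replaces the hand-crafted state and tabular agent with feedback from the neural network being trained and a learned policy over hyper-parameter actions.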

© 2019 The Japanese Society for Artificial Intelligence