Proceedings of the Annual Conference of JSAI
Online ISSN: 2758-7347
34th Annual Conference (2020)
Session ID: 1J3-OS-10-01

Convex Optimization Theory and Algorithms for Hyperparameter Optimization: Toward AutoML
*Ichiro TAKEUCHI
Keywords: AutoML

Abstract

Optimization problems in machine learning (ML) often contain several tunable parameters called hyper-parameters, and careful hyper-parameter tuning is indispensable for constructing good models. If we naively solve the optimization problem from scratch for each hyper-parameter candidate, the computational cost can be extremely large. In the field of convex optimization, there are several techniques for analyzing how the optimal solution changes as the hyper-parameters change, and these techniques can be exploited for efficient hyper-parameter tuning. However, most current state-of-the-art ML methods, including deep neural networks (DNNs), are formulated as non-convex optimization problems, so these convex-optimization techniques cannot be applied as they are. In this talk, we first present theories and algorithms for hyper-parameter tuning in the convex optimization setting, and then discuss how these techniques can be applied to non-convex optimization problems such as DNNs.
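
One well-known instance of the idea described above is warm-starting along a regularization path: because the optimal solution of a convex problem typically varies continuously with its hyper-parameter, the solution at one hyper-parameter value is a good initialization for the next. The following is a minimal sketch of this general technique using scikit-learn's Lasso; the data, hyper-parameter grid, and solver settings are illustrative assumptions, not the specific methods presented in the talk.

    import numpy as np
    from sklearn.linear_model import Lasso

    # Synthetic regression data: a sparse linear model plus noise.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

    # Sweep the regularization hyper-parameter from strong to weak.
    alphas = np.logspace(0, -3, 30)

    # warm_start=True reuses the previous optimal coefficients as the
    # initialization for the next fit; since the lasso solution path is
    # continuous in alpha, each fit typically converges in few iterations.
    model = Lasso(warm_start=True, max_iter=10_000)
    for alpha in alphas:
        model.set_params(alpha=alpha)
        model.fit(X, y)
        n_active = np.count_nonzero(model.coef_)
        print(f"alpha={alpha:.4f}  active features={n_active}")

Compared with refitting each candidate from scratch, this amortizes the optimization cost across the whole hyper-parameter grid, which is the kind of saving the convex-optimization techniques in the abstract aim for.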

© 2020 The Japanese Society for Artificial Intelligence