Journal of the Japan Statistical Society, Japanese Issue
Online ISSN : 2189-1478
Print ISSN : 0389-5602
ISSN-L : 0389-5602
Special Topic: The JSS Research Prize Lecture
Learning Theory of Multiple Kernel Learning
Taiji Suzuki

2018 Volume 47 Issue 2 Pages 141-157

Abstract

We review convergence rate analyses of multiple kernel learning (MKL) by regularization methods and by a Bayesian method. For the regularization methods, we show convergence rates for ℓ1 and elastic-net regularizations. The elastic-net regularization is a composition of an ℓ1-regularizer, which induces sparsity, and an ℓ2-regularizer, which controls smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and we show convergence rates sharper than those previously known for both ℓ1 and elastic-net regularizations. Moreover, we show that a Bayesian method with Gaussian process priors achieves a fast learning rate without strong conditions on the design.
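To illustrate the elastic-net penalty described in the abstract (this is not the paper's algorithm), the following is a minimal proximal-gradient sketch for a toy MKL objective. For simplicity it uses Euclidean norms of the kernel expansion coefficients in place of the RKHS norms, and all function names, step sizes, and regularization parameters are illustrative assumptions.

```python
import numpy as np

def prox_elastic_net(v, t, lam1, lam2):
    """Proximal operator of t * (lam1*||v|| + lam2*||v||^2) for a coefficient block.

    Block soft-thresholding (the l1 part zeroes out whole kernels)
    followed by shrinkage (the l2 part smooths the survivors).
    """
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return v
    return max(0.0, 1.0 - t * lam1 / nv) / (1.0 + 2.0 * t * lam2) * v

def mkl_elastic_net(Ks, y, lam1=0.5, lam2=0.1, lr=1e-4, n_iter=500):
    """Proximal gradient for
        min_a 0.5*||y - sum_m K_m a_m||^2
              + sum_m (lam1*||a_m|| + lam2*||a_m||^2),
    a Euclidean-norm surrogate of elastic-net MKL over kernels K_1..K_M.
    """
    A = [np.zeros(len(y)) for _ in Ks]
    for _ in range(n_iter):
        resid = y - sum(K @ a for K, a in zip(Ks, A))
        for m, K in enumerate(Ks):
            grad = -K @ resid                       # gradient of the squared loss w.r.t. a_m
            A[m] = prox_elastic_net(A[m] - lr * grad, lr, lam1, lam2)
    return A
```

In the sparse setting of the abstract, the ℓ1 part of the proximal operator drives the coefficient blocks of irrelevant kernels toward zero, while the ℓ2 part keeps the selected blocks well conditioned.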

© 2018 Japan Statistical Society