2018 Volume 47 Issue 2 Pages 141-157
We review convergence rate analyses of multiple kernel learning (MKL) by regularization methods and a Bayesian method. For the regularization methods, we show convergence rates for ℓ1 and elastic-net regularizations. The elastic-net regularization is a combination of an ℓ1-regularizer, which induces sparsity, and an ℓ2-regularizer, which controls smoothness. We focus on a sparse setting in which the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and show convergence rates sharper than those previously established for both ℓ1 and elastic-net regularizations. Moreover, we show that a Bayesian method with Gaussian process priors achieves a fast learning rate without strong conditions on the design.
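As a point of reference, the elastic-net MKL estimator mentioned above typically takes the following form (a sketch under assumed notation: the component RKHSs H_m, the number of kernels M, and the regularization parameters λ1, λ2 are not defined in this abstract):

\[
\hat{f} \;=\; \operatorname*{arg\,min}_{f_m \in \mathcal{H}_m}\;
\frac{1}{n}\sum_{i=1}^{n}\Big( y_i - \sum_{m=1}^{M} f_m(x_i) \Big)^{2}
\;+\; \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}
\;+\; \lambda_2 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}^{2},
\]

where the ℓ1-type term (weighted by λ1) drives whole components f_m to zero, and the ℓ2-type term (weighted by λ2) smooths the selected components; setting λ2 = 0 recovers plain ℓ1-regularized MKL.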