Abstract
Variational Bayesian (VB) learning is a promising approximation to Bayesian learning for many practical models, such as matrix factorization models, mixture models, and hidden Markov models, for which exact Bayesian learning is computationally hard. VB learning has been empirically demonstrated to perform excellently in many applications, which has stimulated theoretical analysis. Interesting properties have been revealed, including phase transition phenomena that induce sparsity. In this paper, we review recent advances in the theory of VB learning.