Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Scalable Inference of Topic Models by Stochastic Gradient MCMC
Soma Yokoi, Issei Sato, Hiroshi Nakagawa

2016 Volume 31 Issue 6 Pages AI30-C_1-9

Abstract

Topic models are generative models of documents that automatically cluster frequently co-occurring words (topics) from corpora. Because topics serve as stable features representing the substance of documents, topic models have been studied extensively as a technology for extracting latent information from large data. Unfortunately, the typical time complexity of topic model inference is proportional to the product of the data size and the number of topics, so traditional Markov chain Monte Carlo (MCMC) methods cannot estimate many topics on large corpora within a realistic time. The data size is a common concern in Bayesian learning, and general approaches exist to address it, such as variational Bayes and stochastic gradient MCMC. The number of topics, on the other hand, is a problem specific to topic models, and most solutions have been proposed for the traditional Gibbs sampler. It is natural to address both problems at once, because as the data size grows, so does the number of topics in a corpus. Accordingly, we propose new methods that cope with both data and topic scalability by applying fast computation techniques of the Gibbs sampler to stochastic gradient MCMC. Our experiments demonstrate that the proposed method outperforms state-of-the-art traditional MCMC in the mini-batch setting, showing a better mixing rate and faster updates.
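To make the mini-batch idea behind stochastic gradient MCMC concrete, the following is a minimal sketch of a stochastic gradient Langevin dynamics (SGLD) update on a toy Gaussian model. It only illustrates data subsampling in MCMC, not the samplers proposed in this paper; the model, step size, and mini-batch size are assumptions chosen for brevity.

# Minimal SGLD sketch: mini-batch gradient of the log posterior plus injected
# Gaussian noise. This is NOT the authors' proposed method for topic models;
# the toy model (Bayesian estimation of a Gaussian mean), the fixed step size,
# and the batch size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "large" data set: N observations from a Gaussian with unknown mean.
N = 100_000
true_mean = 2.0
data = rng.normal(true_mean, 1.0, size=N)

# Prior: theta ~ N(0, 10^2); likelihood: x_i ~ N(theta, 1).
prior_var = 10.0 ** 2
batch_size = 256          # only a mini-batch is touched per update
step_size = 1e-5          # fixed step size, chosen for stability on this toy model
theta = 0.0               # initial state of the chain

samples = []
for t in range(2_000):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Unbiased estimate of the log-posterior gradient:
    # grad log p(theta) + (N / |batch|) * sum_i grad log p(x_i | theta)
    grad_log_prior = -theta / prior_var
    grad_log_lik = (N / batch_size) * np.sum(batch - theta)
    grad = grad_log_prior + grad_log_lik
    # Langevin update: half a gradient step plus noise scaled to the step size.
    theta += 0.5 * step_size * grad + rng.normal(0.0, np.sqrt(step_size))
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[500:]))

Each update touches only a mini-batch, so the per-iteration cost does not depend on the corpus size; the methods proposed in the paper combine this data scalability with fast computation techniques of the Gibbs sampler so that inference also scales in the number of topics.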

© The Japanese Society for Artificial Intelligence 2016