IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Regular Section
Training Set Selection for Building Compact and Efficient Language Models
Keiji YASUDA, Hirofumi YAMAMOTO, Eiichiro SUMITA

2009 Volume E92.D Issue 3 Pages 506-511

Abstract

Statistical language model training requires corpora that match the target domain. However, training corpora sometimes include both sentences matched to the target domain and unmatched sentences. In such cases, training set selection is effective for both reducing model size and improving model performance. In this paper, a training set selection method for statistical language model training is described. The method offers two advantages for training a language model: it improves language model performance, and it reduces the computational load of the language model. The method has four steps. 1) Sentence clustering is applied to all available corpora. 2) A language model is trained on each cluster. 3) Perplexity on the development set is calculated using each of these language models. 4) The final language model is trained on the clusters whose language models yield low perplexities. The experimental results indicate that the language model trained on the data selected by our method gives lower perplexity on an open test set than a language model trained on all available corpora.

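The following is a minimal sketch of the four-step selection pipeline described in the abstract, not the authors' implementation. The clustering method, language model, and all function names and parameters (select_training_set, n_clusters, keep) are assumptions for illustration: k-means over TF-IDF vectors stands in for the paper's sentence clustering, and an add-one-smoothed unigram model stands in for the actual n-gram language models.

```python
import math
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def train_unigram(sentences):
    """Train an add-one-smoothed unigram model (stand-in for a real n-gram LM)."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one count for unseen tokens
    return lambda tok: (counts.get(tok, 0) + 1) / (total + vocab)


def perplexity(model, sentences):
    """Per-token perplexity of the development set under the given model."""
    log_prob, n_tokens = 0.0, 0
    for s in sentences:
        for tok in s.split():
            log_prob += math.log(model(tok))
            n_tokens += 1
    return math.exp(-log_prob / max(n_tokens, 1))


def select_training_set(corpus, dev_set, n_clusters=8, keep=4):
    # Step 1: cluster all available sentences.
    vectors = TfidfVectorizer().fit_transform(corpus)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    clusters = [[s for s, l in zip(corpus, labels) if l == c]
                for c in range(n_clusters)]

    # Steps 2-3: train a model per cluster and score it on the development set.
    scored = sorted(clusters, key=lambda c: perplexity(train_unigram(c), dev_set))

    # Step 4: keep the clusters whose models give the lowest dev-set perplexity;
    # their sentences form the training set for the final language model.
    return [s for cluster in scored[:keep] for s in cluster]
```

Under these assumptions, the returned sentence list would then be used to train the final, compact language model in place of the full corpus.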
© 2009 The Institute of Electronics, Information and Communication Engineers