Chinese word segmentation is an initial and important step in Chinese language processing. Recent advances in machine learning techniques have boosted the performance of Chinese word segmentation systems, yet the identification of out-of-vocabulary words remains a major problem in this field of study. Recent research has attempted to address this problem by exploiting characteristics of frequent substrings in unlabeled data. We propose a simple yet effective approach for extracting a specific type of frequent substring, called maximized substrings, which provide good estimates of unknown word boundaries. In the task of Chinese word segmentation, we use these substrings, extracted from large-scale unlabeled data, to improve segmentation accuracy. The effectiveness of this approach is demonstrated through experiments on data sets from several different domains. In the task of unknown word extraction, we apply post-processing techniques that effectively reduce the noise in the extracted substrings. We demonstrate the effectiveness and efficiency of our approach by comparing our results with those of a widely applied Chinese word recognition method from a previous study.
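One common way to make the notion of a "maximal" frequent substring concrete is to keep only those frequent substrings that cannot be extended by a single character without losing occurrences; such substrings tend to align with word boundaries. The sketch below illustrates this idea on toy data. It is an illustrative assumption about the extraction criterion, not necessarily the paper's exact definition of a maximized substring; the function names and thresholds are hypothetical.

```python
from collections import Counter

def substring_counts(corpus, max_len):
    """Count every character substring of length 1..max_len in the corpus."""
    counts = Counter()
    for sent in corpus:
        n = len(sent)
        for i in range(n):
            for j in range(i + 1, min(i + max_len, n) + 1):
                counts[sent[i:j]] += 1
    return counts

def maximized_substrings(corpus, max_len=4, min_freq=2):
    """Keep frequent substrings whose every one-character extension
    (to the left or right) is strictly rarer -- one plausible notion
    of a 'maximized' substring (an assumption for illustration)."""
    # Count one length further so extensions of max_len strings exist.
    counts = substring_counts(corpus, max_len + 1)
    result = set()
    for s, c in counts.items():
        if c < min_freq or len(s) > max_len:
            continue
        # If some extension occurs just as often, s is not maximal.
        if any(counts[t] == c
               for t in counts
               if len(t) == len(s) + 1
               and (t.startswith(s) or t.endswith(s))):
            continue
        result.add(s)
    return result
```

On a toy corpus such as `["xaby", "zabw", "ab"]`, the substring `ab` survives (it occurs three times and no extension occurs three times), while `a` and `b` are pruned because extending them to `ab` costs no occurrences.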
In this paper we describe a generalized dependency tree language model for machine translation. We consider in detail the question of how to define tree-based n-grams, or ‘t-treelets’, and thoroughly explore the strengths and weaknesses of our approach by evaluating its effect on translation quality for nine major languages. In addition, we show that a significant improvement in translation quality can be attained even for non-structured machine translation by reranking filtered parses of k-best string output.
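A simple way to picture tree-based n-grams is as vertical chains from a head down to a dependent, scored the way a string language model scores adjacent words. The sketch below extracts such head-to-dependent chains from a dependency tree given as a list of head indices. This is one illustrative notion of a tree n-gram, not necessarily the paper's exact t-treelet definition; the representation (head indices, `-1` for the root) is an assumption.

```python
def path_treelets(words, heads, n=2):
    """Extract length-n head-to-dependent chains ('treelets') from a
    dependency tree. heads[i] is the index of word i's head, or -1 for
    the root. An illustrative sketch, not the paper's exact definition."""
    treelets = []
    for i, w in enumerate(words):
        chain = [w]
        j = i
        for _ in range(n - 1):
            j = heads[j]
            if j < 0:          # walked past the root: no full chain
                break
            chain.append(words[j])
        else:
            # Emit head-first, mirroring history-then-word order in an n-gram.
            treelets.append(tuple(reversed(chain)))
    return treelets
```

For the tree of "I saw dogs" with "saw" as root and "I" and "dogs" as its dependents, the bigram treelets are `("saw", "I")` and `("saw", "dogs")`; relative-frequency estimates over such tuples give a tree-structured analogue of a bigram language model.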