2014, Vol. 18, No. 3, pp. 366-374
Most semi-supervised learning methods extend existing supervised or unsupervised techniques by incorporating additional information from unlabeled or labeled data. Unlabeled instances help in learning statistical models that describe the global properties of the data, whereas labeled instances make the learned knowledge more human-interpretable. In this paper we present a novel way of extending conventional non-negative matrix factorization (NMF) and probabilistic latent semantic analysis (pLSA) to semi-supervised versions by incorporating label information for learning semantics. The proposed algorithm consists of two steps: first, acquiring prior bases that represent certain classes from the labeled data; and second, using those prior bases to guide the learning of the final bases so that they are semantically interpretable.
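The two-step idea can be illustrated with a minimal NumPy sketch of semi-supervised NMF. This is an assumption-laden toy, not the paper's exact algorithm: it uses standard multiplicative updates for V ≈ WH and models the "guidance" of step two by freezing the columns of W that hold the prior bases, which here are simply random placeholders standing in for bases learned from labeled data.

```python
import numpy as np

def nmf_update(V, W, H, fixed_cols=0, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF for V ~= W @ H.
    The first `fixed_cols` columns of W (the prior bases) are frozen,
    so the factorization is guided toward those class-specific bases."""
    for _ in range(n_iter):
        # Standard Lee-Seung updates for the Frobenius objective.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W_new = W * (V @ H.T) / (W @ H @ H.T + eps)
        W_new[:, :fixed_cols] = W[:, :fixed_cols]  # keep prior bases fixed
        W = W_new
    return W, H

rng = np.random.default_rng(0)
V = rng.random((30, 40))  # non-negative data matrix (features x instances)

# Step 1 (placeholder): prior bases that would come from factorizing
# the labeled instances of each class; random here for illustration.
W_prior = rng.random((30, 3))

# Step 2: learn the final bases, guided by the frozen prior bases.
W0 = np.hstack([W_prior, rng.random((30, 4))])  # 3 prior + 4 free bases
H0 = rng.random((7, 40))
W, H = nmf_update(V, W0.copy(), H0.copy(), fixed_cols=3)

err_init = np.linalg.norm(V - W0 @ H0)
err_final = np.linalg.norm(V - W @ H)
```

In the learned factorization, the prior columns of W remain exactly the class bases from step one, while the free columns and H adapt to reduce the reconstruction error, which is the sense in which the labeled data guides the final, interpretable bases.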