Distributional similarity is a widely adopted approach to computing the lexical semantic relatedness of words. Although the calculation is based on the distributional hypothesis and utilizes contextual clues of words, little attention has been paid to what kind of contextual information is effective for this purpose. As one way to extend contextual information, we focus on the use of indirect dependency, in which two or more words are related via several contiguous dependency relations. We investigated the effect of indirect dependency on an automatic synonym acquisition task and showed that performance can be improved by using indirect dependency in addition to normal direct dependency. We also verified its effectiveness under various experimental settings, including weight functions, similarity measures, and context representations, and showed that context representations that incorporate richer syntactic information are more effective.
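To make the notion of indirect dependency concrete, the following is a minimal Python sketch, not the paper's implementation: it contrasts direct dependency contexts with indirect contexts formed by composing two contiguous dependency relations, then compares words by cosine similarity over their context vectors. The toy parses, relation labels, and the use of raw counts with cosine are all illustrative assumptions, standing in for the weight functions and similarity measures compared in the paper.

```python
from collections import Counter
from math import sqrt

# Toy dependency parses (assumed labels); each edge is (head, relation, dependent).
SENTENCES = [
    # "a cat chased the ball"
    [("chased", "nsubj", "cat"), ("chased", "dobj", "ball"),
     ("cat", "det", "a"), ("ball", "det", "the")],
    # "a dog chased the ball"
    [("chased", "nsubj", "dog"), ("chased", "dobj", "ball"),
     ("dog", "det", "a"), ("ball", "det", "the")],
]

def direct_contexts(edges):
    """One feature per direct dependency relation, seen from both endpoints."""
    ctx = Counter()
    for head, rel, dep in edges:
        ctx[(head, f"{rel}->", dep)] += 1
        ctx[(dep, f"<-{rel}", head)] += 1
    return ctx

def indirect_contexts(edges):
    """Compose two contiguous relations: word --r1--> x --r2--> other."""
    ctx = Counter()
    for h1, r1, d1 in edges:
        for h2, r2, d2 in edges:
            if d1 == h2:  # the two relations share an intermediate word
                ctx[(h1, f"{r1}->{r2}->", d2)] += 1
                ctx[(d2, f"<-{r2}<-{r1}", h1)] += 1
    return ctx

def vector_for(word, ctx):
    """Context vector of one word: feature -> count."""
    return Counter({(rel, other): n
                    for (w, rel, other), n in ctx.items() if w == word})

def cosine(u, v):
    """Cosine similarity; raw counts stand in for the paper's weight functions."""
    dot = sum(u[f] * v[f] for f in u.keys() & v.keys())
    nu = sqrt(sum(n * n for n in u.values()))
    nv = sqrt(sum(n * n for n in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Aggregate direct + indirect contexts per sentence, then per word type.
ctx = Counter()
for edges in SENTENCES:
    ctx += direct_contexts(edges) + indirect_contexts(edges)

print(cosine(vector_for("cat", ctx), vector_for("dog", ctx)))   # 1.0 in this toy corpus
print(cosine(vector_for("cat", ctx), vector_for("ball", ctx)))  # 0.0: no shared contexts
```

Extracting both feature sets per sentence and summing them mirrors the setting described above, where indirect dependency is used in addition to, not instead of, normal direct dependency.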