Host: The Japanese Society for Artificial Intelligence
Name: The 36th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 36
Location: [in Japanese]
Date: June 14, 2022 - June 17, 2022
Pre-trained models such as BERT have improved the accuracy of many Natural Language Processing (NLP) tasks. One such task is Word Sense Disambiguation (WSD), the problem of identifying the sense in which a word is used in a sentence. Supervised approaches to WSD achieve accuracies above 90%, whereas unsupervised approaches reach only about 60-70%. This is because unsupervised learning has no direct access to word-sense information. In this paper, we investigate which features are useful for unsupervised WSD. In our experiments, we focus on the "hypernym" and "hyponym" relations defined in WordNet, and the target words are Japanese common nouns. The results show that the relations defined in WordNet may be useful features for some words.
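The abstract does not specify the toolchain used to extract the hypernym and hyponym relations. As a hedged illustration only, the sketch below shows one way the candidate senses of a Japanese common noun and their hypernym/hyponym lemmas could be enumerated, assuming NLTK's Open Multilingual WordNet interface (which includes Japanese); the function name related_lemmas and the example word are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): enumerate candidate senses of a
# Japanese common noun and collect hypernym/hyponym lemmas as features.
# Assumes NLTK with the Open Multilingual WordNet data (lang="jpn").
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)  # multilingual WordNet data, incl. Japanese


def related_lemmas(word: str, lang: str = "jpn") -> dict:
    """For each candidate synset of `word`, list its hypernym and hyponym lemmas."""
    features = {}
    for synset in wn.synsets(word, lang=lang):
        hypernyms = [l for s in synset.hypernyms() for l in s.lemma_names(lang)]
        hyponyms = [l for s in synset.hyponyms() for l in s.lemma_names(lang)]
        features[synset.name()] = {"hypernyms": hypernyms, "hyponyms": hyponyms}
    return features


if __name__ == "__main__":
    # Example with a common noun ("犬", dog); each sense's related lemmas could
    # be matched against the sentence context to pick a sense without labels.
    for sense, rel in related_lemmas("犬").items():
        print(sense, rel)
```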