2023 Volume 30 Issue 2 Pages 275-303
The task of detecting words with semantic differences across corpora is typically addressed with word representations such as Word2Vec or BERT. However, abundant computing resources are rarely available in the real-world settings where linguists and sociologists apply these techniques. In this paper, we extend an existing CPU-trainable model that trains the vectors of all time periods simultaneously. Experimental results demonstrate that the extended models achieved results comparable or superior to strong baselines on English corpora, on SemEval-2020 Task 1, and on Japanese corpora. Furthermore, we compared the training time of each model and conducted a comprehensive analysis of the Japanese corpora.
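For readers unfamiliar with the embedding-based approach mentioned above, the sketch below illustrates one common baseline pipeline for detecting semantic differences across two corpora: train a separate Word2Vec model on each corpus, align the two vector spaces with orthogonal Procrustes, and rank shared words by the cosine distance between their aligned vectors. This is an illustrative assumption for exposition only, not the CPU-trainable model extended in this paper; the function detect_semantic_differences and its parameters are hypothetical, and the example assumes gensim, numpy, and scipy are available.

```python
# Illustrative baseline (NOT the model proposed in the paper):
# per-corpus Word2Vec + orthogonal Procrustes alignment + cosine distance.
import numpy as np
from gensim.models import Word2Vec
from scipy.linalg import orthogonal_procrustes

def detect_semantic_differences(corpus_a, corpus_b, top_k=10):
    """corpus_a, corpus_b: lists of tokenized sentences (lists of str)."""
    model_a = Word2Vec(corpus_a, vector_size=100, min_count=5, epochs=10)
    model_b = Word2Vec(corpus_b, vector_size=100, min_count=5, epochs=10)

    # Restrict to the shared vocabulary so the two spaces can be aligned.
    shared = [w for w in model_a.wv.index_to_key if w in model_b.wv.key_to_index]
    A = np.stack([model_a.wv[w] for w in shared])
    B = np.stack([model_b.wv[w] for w in shared])

    # Orthogonal Procrustes: rotation R minimizing ||A @ R - B||_F.
    R, _ = orthogonal_procrustes(A, B)
    A_aligned = A @ R

    # Cosine distance between each word's vectors in the two aligned spaces;
    # a larger distance suggests a larger semantic difference across corpora.
    cos = np.sum(A_aligned * B, axis=1) / (
        np.linalg.norm(A_aligned, axis=1) * np.linalg.norm(B, axis=1)
    )
    distances = 1.0 - cos
    order = np.argsort(-distances)
    return [(shared[i], float(distances[i])) for i in order[:top_k]]
```

Note that this sketch trains each period's vectors independently and aligns them afterwards, whereas the model extended in this paper trains the vectors of all time periods simultaneously, avoiding the separate alignment step.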