Journal of Natural Language Processing
Online ISSN: 2185-8314
Print ISSN: 1340-7619
ISSN-L: 1340-7619
General Paper (Peer-Reviewed)
Long-Tail Crisis in Nearest Neighbor Language Models
Yuto Nishida, Makoto Morishita, Hiroyuki Deguchi, Hidetaka Kamigaito, Taro Watanabe

2025 Volume 32 Issue 4 Pages 1272-1298

Abstract

The k-nearest-neighbor language model (kNN-LM), a retrieval-augmented language model, improves perplexity on a given text by directly accessing a large datastore built from any text data at inference time. A widely held hypothesis for the success of kNN-LM is that its explicit memory, i.e., the datastore, enhances predictions for long-tail phenomena. However, prior work has primarily shown its ability to retrieve long-tail contexts, leaving underexplored how well the model estimates the probabilities of long-tail target tokens during inference. In this paper, we investigate the behavior of kNN-LM on low-frequency tokens, examining prediction probability, retrieval accuracy, token distribution in the datastore, and the approximation error of product quantization. Our experimental results reveal that kNN-LM does not improve prediction performance for low-frequency tokens.
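To make the mechanism under study concrete: kNN-LM (Khandelwal et al., 2020) interpolates the base language model's distribution with a distribution built from nearest-neighbor retrieval, p(y|x) = λ p_kNN(y|x) + (1 − λ) p_LM(y|x). The following is a minimal sketch of that interpolation, not the paper's implementation; the toy datastore, vector dimensions, vocabulary size, and interpolation weight LAMBDA are illustrative assumptions.

```python
# Minimal sketch of kNN-LM interpolation with a toy random datastore.
# All sizes and the datastore contents are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, K, LAMBDA = 10, 8, 4, 0.25

# Datastore: one key (context representation) and one value (next-token id) per entry.
keys = rng.normal(size=(100, DIM))
values = rng.integers(0, VOCAB, size=100)

def knn_lm_probs(query, p_lm):
    """Interpolate the base LM distribution with the kNN distribution:
    p(y|x) = LAMBDA * p_kNN(y|x) + (1 - LAMBDA) * p_LM(y|x)."""
    d2 = ((keys - query) ** 2).sum(axis=1)   # squared L2 distance to every key
    nn = np.argsort(d2)[:K]                  # indices of the K nearest keys
    w = np.exp(-d2[nn])                      # softmax over negative distances
    w /= w.sum()
    p_knn = np.zeros(VOCAB)
    np.add.at(p_knn, values[nn], w)          # accumulate weight per target token
    return LAMBDA * p_knn + (1 - LAMBDA) * p_lm

query = rng.normal(size=DIM)
p_lm = np.full(VOCAB, 1.0 / VOCAB)           # uniform stand-in for the base LM
print(knn_lm_probs(query, p_lm))             # a valid distribution (sums to 1)
```

Note that the kNN distribution assigns probability only to tokens whose contexts were retrieved, which is why the datastore's coverage of low-frequency target tokens, one of the factors this paper examines, directly bounds what retrieval can contribute.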

© 2025 The Association for Natural Language Processing