Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
General Paper (Peer-Reviewed)
Cross-lingual Contextualized Phrase Retrieval
Huayang Li, Deng Cai, Zhi Qu, Qu Cui, Hidetaka Kamigaito, Lemao Liu, Taro Watanabe

2025 Volume 32 Issue 3 Pages 886-917

Abstract

Phrase-level dense retrieval has shown many appealing characteristics in downstream NLP tasks by leveraging the fine-grained information that phrases offer. In this work, we propose a new task formulation of dense retrieval, cross-lingual contextualized phrase retrieval, which aims to augment cross-lingual applications by resolving polysemy with context information. However, the lack of task-specific training data and models is the primary challenge to achieving this goal. To address it, we automatically extract pairs of cross-lingual phrases using word alignment information induced from parallel sentences. We then train our Cross-lingual Contextualized Phrase Retriever (CCPR) with contrastive learning, which encourages the hidden representations of phrases with similar contexts and semantics to align closely. Comprehensive experiments on both the cross-lingual phrase retrieval task and a downstream task, i.e., machine translation, demonstrate the effectiveness of CCPR. On the phrase retrieval task, CCPR surpasses the baselines by a significant margin, achieving a top-1 accuracy that is at least 13 points higher. When CCPR is used to augment a large-language-model-based translator, it achieves average BERTScore gains of 0.7 and 1.5 for X→En and En→X translations, respectively, where X is one of the six Indo-European languages in the WMT16 dataset. It also achieves gains of at least 0.7 BERTScore on translations into East Asian languages such as Japanese, Chinese, and Korean.
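A contrastive objective of the kind described in the abstract is commonly realized as an in-batch InfoNCE loss over aligned phrase pairs. The PyTorch sketch below illustrates that general idea only; the function name, the temperature value, the symmetric two-direction loss, and the use of in-batch negatives are illustrative assumptions, not necessarily the paper's exact formulation.

import torch
import torch.nn.functional as F

def info_nce_loss(src_phrase_reprs, tgt_phrase_reprs, temperature=0.05):
    """In-batch InfoNCE loss: row i of each tensor holds the contextualized
    representation of one aligned cross-lingual phrase pair, and every
    other row in the batch serves as a negative."""
    # Normalize so dot products become cosine similarities.
    src = F.normalize(src_phrase_reprs, dim=-1)
    tgt = F.normalize(tgt_phrase_reprs, dim=-1)
    # Similarity matrix: entry (i, j) compares source phrase i with
    # target phrase j; the diagonal holds the positive pairs.
    logits = src @ tgt.T / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    # Average the loss over both retrieval directions (src->tgt, tgt->src).
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2

# Example: a batch of 8 aligned phrase pairs with 768-dim encodings
# (random here; in practice these come from the phrase encoder).
src = torch.randn(8, 768)
tgt = torch.randn(8, 768)
loss = info_nce_loss(src, tgt)

Under this objective, minimizing the loss pulls the representations of each aligned pair together while pushing apart phrases from different pairs in the batch, which matches the alignment behavior the abstract attributes to CCPR's training.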

© 2025 The Association for Natural Language Processing