Journal of Natural Language Processing (自然言語処理)
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
Regular Paper (Refereed)
Knowledge Editing of Large Language Models Unconstrained by Word Order
Ryoma Ishigaki, Jundai Suzuki, Masaki Shuzo, Eisaku Maeda
Free access

2025, Volume 32, Issue 4, pp. 1062-1102

Abstract

Large Language Models (LLMs) possess potentially extensive knowledge; however, because their internal processing operates as a black box, directly editing the knowledge embedded within an LLM is difficult. To address this issue, a method known as local-modification-based knowledge editing has been developed. This method identifies the "knowledge neurons" that encode the target knowledge and adjusts the parameters associated with these neurons to update the stored information. Knowledge neurons are identified by masking the object (o) in a sentence expressing a relational triplet (s, r, o), having the LLM predict the masked element, and observing the model's internal activation patterns during the prediction. When the model is decoder-based, the predicted object (o) must appear at the end of the sentence. Previous local-modification-based knowledge-editing methods for decoder-based models have therefore assumed subject-verb-object languages and face difficulties when applied to subject-object-verb languages such as Japanese. In this study, we propose a knowledge-editing method that eliminates this word-order constraint by converting the input used to identify knowledge neurons into a question whose answer is the object (o). We conducted validation experiments on a known-facts dataset and confirmed that the proposed method is effective for Japanese, a language that does not follow subject-verb-object order.
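
The prompt conversion described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not from the paper; the function names and prompt templates are hypothetical, and English templates stand in for the Japanese prompts. It contrasts a cloze-style declarative prompt, which places the object (o) at the end only under subject-verb-object order, with a question-style prompt whose answer is generated last regardless of word order.

# Minimal sketch (hypothetical names/templates) of the prompt construction
# idea: phrase the triplet (s, r, o) as a question whose answer is o, so a
# decoder-based LM always predicts o at the end of the prompt, independent
# of the language's word order.

def declarative_prompt(subject: str, relation: str) -> str:
    """Cloze-style prompt; o is sentence-final only in SVO word order."""
    # e.g. "The capital of Japan is" -> the model continues with "Tokyo"
    return f"The {relation} of {subject} is"

def question_prompt(subject: str, relation: str) -> str:
    """Word-order-agnostic prompt: the answer (o) is always generated last."""
    # In Japanese this would be, e.g., "質問: 日本の首都はどこですか？ 答え:"
    return f"Question: What is the {relation} of {subject}? Answer:"

if __name__ == "__main__":
    s, r = "Japan", "capital"
    print(declarative_prompt(s, r))  # object expected at sentence end (SVO assumption)
    print(question_prompt(s, r))     # object expected at prompt end, any word order

Under this framing, the activations observed while the model generates the answer tokens can be used to locate knowledge neurons without requiring the object to be sentence-final in the original declarative sentence.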

© 2025 The Association for Natural Language Processing