認知科学 (Cognitive Studies)
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Special Issue: The Cognitive Science of Language: What Is the Foundation of Language?
Understanding Utterance Meaning with Subjective-BERT, a BERT Model That Takes Subjective Information as Input: An Analysis of Internal Representations Focusing on Self-Attention Chains
萬處 修平, 岡 夏樹, 松島 茜, 深田 智, 吉村 優子, 川原 功司, 田中 一晶

2024, Volume 31, Issue 1, pp. 205–224

Abstract

Japanese sentence-final particles such as “yo” and “ne” reflect our cognitive mechanism for manipulating and integrating linguistic and pragmatic/sociolinguistic information. Despite the importance of this mechanism in cognitive science, however, few constructive approaches to it have been presented so far. We therefore developed the Subjective-BERT model, which acquires the meanings of the two sentence-final particles “yo” and “ne,” together with those of content words. The proposed model was pre-trained on a dataset containing not only linguistic information but also sensory and mental information. In a previous study (Mandokoro et al., 2022a), we demonstrated that the model acquires the meanings of the particles by learning the relations not only between words but also between words and sensory/mental information. This paper further explores how the meaning of each word is represented and processed in the model by analyzing its internal representations. The main results are as follows: (i) an analysis of the attention chain between three tokens (a particle, a content word, and its referent) revealed that the information of the particle is first conveyed to the content word, which in turn controls the subsequent information flow from the content word to its referent; and (ii) a contrastive analysis of how the embeddings are transformed when each sentence-final particle is used identified the mechanism that generates the attention chains, suggesting that the residual connections and feedforward neural network layers, as well as self-attention, contribute to the understanding of utterance meaning.
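
The attention-chain analysis in (i) can be illustrated with a minimal sketch. The code below extracts per-layer attention weights between two token positions using the Hugging Face transformers API; since Subjective-BERT and its multimodal inputs are not reproduced here, the model name (a generic Japanese BERT as a stand-in), the example utterance, and the token indices are all illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch of attention-chain inspection between token positions.
# Assumptions: a public Japanese BERT stands in for Subjective-BERT; the
# utterance and the token indices below are hypothetical.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # stand-in, not Subjective-BERT

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

sentence = "りんごだよ"  # hypothetical utterance: content word + particle "yo"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); rows are attending (query) tokens,
# columns are attended-to (key) tokens.
particle_pos, content_pos = 4, 1  # assumed positions in the tokenized input

for layer_idx, attn in enumerate(outputs.attentions):
    # Average over heads, then read how strongly the content word attends
    # to the particle in this layer (the first link of the chain).
    weight = attn[0].mean(dim=0)[content_pos, particle_pos].item()
    print(f"layer {layer_idx}: content word -> particle = {weight:.3f}")
```

The same indexing, applied to the referent's position, would trace the second link of the chain (content word to referent); the sensory/mental input tokens that Subjective-BERT additionally consumes are omitted from this sketch.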

© 2024 Japanese Cognitive Science Society