認知科学 (Cognitive Studies)
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Special Issue: The Cognitive Science of Language: What Are the Foundations of Language?
Does AI Acquire the Foundations of Language? From the Perspective of the Systematicity of Inference
谷中 瞳 (Hitomi Yanaka), 峯島 宏次 (Koji Mineshima)
Journal: free access

2024, Volume 31, Issue 1, pp. 27-45

Abstract

In recent years, artificial intelligence based on deep neural networks (DNNs) has made remarkable progress. In natural language processing (NLP) in particular, a variety of DNN-based language models have emerged, built on Transformer architectures and pre-trained on large-scale text data. These pre-trained large language models (LLMs) have achieved high accuracy across a range of NLP tasks, leading to claims that they surpass human capabilities in language understanding. However, because of the black-box nature of LLMs, it remains unclear whether they realize human-like language understanding. The question of whether neural networks can acquire compositionality has long been debated between two approaches to modeling human cognition: connectionism and classical computationalism. In 1988, Fodor and Pylyshyn distinguished two types of systematicity related to compositionality: the systematicity of thought and the systematicity of inference. This study revisits Fodor and Pylyshyn's discussion of systematicity and presents a method for evaluating whether LLMs achieve compositional language understanding, focusing specifically on inferential systematicity as probed by natural language inference tasks. The findings indicate that there is still room for debate over whether current LLMs achieve the systematicity that underlies human language understanding and reasoning. Consequently, the paper emphasizes the need for further complementary research bridging NLP and cognitive science to pursue this question in greater depth.
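
To make the evaluation method concrete, below is a minimal sketch, not the authors' code, of probing a model's inferential systematicity with natural language inference (NLI) pairs. It assumes the publicly available roberta-large-mnli model from Hugging Face Transformers as a stand-in for the LLMs evaluated in the paper, and the premise-hypothesis pairs are hypothetical illustrations of a shared monotonicity inference pattern, not items from the paper's dataset.

    # A minimal sketch (not the authors' code): if a model labels one
    # instance of a monotonicity inference pattern correctly, does it
    # generalize to the same pattern with a different quantifier?
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # roberta-large-mnli is a standard public NLI model, used here only
    # as a stand-in for the LLMs evaluated in the paper.
    MODEL = "roberta-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)
    model.eval()

    def nli_label(premise: str, hypothesis: str) -> str:
        """Return the model's label (CONTRADICTION / NEUTRAL / ENTAILMENT)."""
        inputs = tokenizer(premise, hypothesis, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        return model.config.id2label[logits.argmax(dim=-1).item()]

    # Hypothetical minimal pairs sharing one inference pattern
    # (monotonicity in the quantifier's predicate argument);
    # the gold label for every pair here is ENTAILMENT.
    pairs = [
        ("Some dogs ran quickly.", "Some dogs ran."),   # upward monotone
        ("Every dog ran quickly.", "Every dog ran."),   # same pattern, new quantifier
        ("No dog ran.", "No dog ran quickly."),         # downward monotone
    ]

    for premise, hypothesis in pairs:
        print(premise, "=>", hypothesis, ":", nli_label(premise, hypothesis))

Roughly, a model counts as showing inferential systematicity when success on such a pattern transfers to instances built from combinations (for example, quantifiers and modifiers) not seen during training, rather than when its average accuracy over a fixed test set is high; the sketch above illustrates only the task format, not that train/test protocol.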

© 2024 Japanese Cognitive Science Society (日本認知科学会)