Cognitive Studies: Bulletin of the Japanese Cognitive Science Society
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Feature: Cognitive science on language - What are the bases of language?
Do AIs obtain foundations of language? From the viewpoint of inferential systematicity
Hitomi Yanaka, Koji Mineshima

2024 Volume 31 Issue 1 Pages 27-45

Abstract

In recent years, artificial intelligence based on deep neural networks (DNNs) has made remarkable progress. In natural language processing (NLP) in particular, a variety of DNN-based language models have emerged, built on Transformer architectures and pre-trained on large-scale text data. These pre-trained large language models (LLMs) have demonstrated high accuracy across a range of NLP tasks, leading to claims that they surpass human capabilities in language understanding. However, owing to the black-box nature of LLMs, it remains unclear whether they realize human-like language understanding. The question of whether neural networks can acquire compositionality has long been debated between two approaches to modeling human cognition: connectionism and classical computationalism. In 1988, Fodor and Pylyshyn distinguished two types of systematicity related to compositionality: systematicity of thought and systematicity of inference. This study revisits Fodor and Pylyshyn's discussion of systematicity and presents a method for evaluating whether LLMs achieve compositional language understanding, focusing specifically on inferential systematicity as probed by natural language inference tasks. The findings indicate that there is still room for debate as to whether current LLMs achieve the systematicity that underlies human language understanding and reasoning. Consequently, the paper emphasizes the need for further complementary research bridging NLP and cognitive science to investigate this question in greater depth.
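To make the evaluation idea concrete, the sketch below shows how a natural language inference (NLI) model can be probed with premise-hypothesis pairs whose lexical items are recombined in patterns the model may not have seen during training (e.g., monotonicity-style inferences). This is a minimal, hypothetical illustration only: the model name (roberta-large-mnli), the probe sentences, and the helper function are assumptions for demonstration and are not the evaluation protocol or dataset used in the paper.

```python
# Minimal sketch (hypothetical): probing an off-the-shelf NLI model with
# premise-hypothesis pairs that recombine known words in novel patterns.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "roberta-large-mnli"  # assumption: any pre-trained NLI model would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def predict_label(premise: str, hypothesis: str) -> str:
    """Return the predicted NLI label (contradiction / neutral / entailment)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

# Illustrative monotonicity-style probe pairs (not from the paper's dataset):
probe = [
    ("Every dog that barked ran.", "Every small dog that barked ran.", "entailment"),
    ("Some dog that ran barked.", "Some animal that ran barked.", "entailment"),
]

for premise, hypothesis, gold in probe:
    pred = predict_label(premise, hypothesis)
    print(f"{pred:>13} (gold: {gold})  {premise} => {hypothesis}")
```

In such a setup, systematic (compositional) generalization would show up as consistent accuracy on recombined patterns, not just on pairs resembling the training distribution.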

© 2024 Japanese Cognitive Science Society