認知科学
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Research Paper
Reconstruction of Stimulus Sentences Using Deep Sentence-Generation Models (文章生成深層モデルによる刺激文の再構成)
四辻 嵩直, 赤間 啓之
Journal (free access)

2023, Volume 30, Issue 4, pp. 465-478

Abstract

The neural basis of the human language comprehension system has been explored with neuroimaging techniques such as functional magnetic resonance imaging (fMRI). Although brain regions and systems related to various aspects of linguistic information have been identified, a complete neurocomputational model of language comprehension remains out of reach. In machine learning, by contrast, the rapid development of deep natural language models has enabled sentence generation models to produce highly accurate sentences. This study aimed to build a method that uses such text generation models to reconstruct stimulus sentences directly from neural representations alone, as a way of evaluating a neurocomputational model of linguistic comprehension. A variational autoencoder combined with pre-trained deep neural network models showed the highest decoding accuracy, and with this model we succeeded in reconstructing stimulus sentences directly from neural representations alone. Although we achieved only topic-level sentence generation, we exploratorily analyzed the characteristics of neural representations in language comprehension, treating this model as a neurocomputational model.
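The abstract gives no implementation details, so the following is only a minimal, illustrative PyTorch sketch of the decoding pipeline it describes: fMRI voxel patterns are mapped into the latent space of a sentence variational autoencoder, whose decoder then generates a candidate stimulus sentence. The model sizes, the linear voxel-to-latent mapping, and all names (`SentenceVAE`, `brain_to_latent`, `reconstruct`) are assumptions for illustration, not the authors' implementation, which additionally combines the VAE with pre-trained deep neural network models.

```python
# Illustrative sketch (NOT the authors' code): decode a sentence directly
# from a neural representation via the latent space of a sentence VAE.
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    """Toy sentence VAE: GRU encoder/decoder over token ids."""
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):
        # Used during VAE training on stimulus sentences (not shown here).
        _, h = self.encoder(self.embed(tokens))
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, tokens):
        # Condition the decoder GRU's initial hidden state on the latent code.
        h0 = self.from_latent(z).unsqueeze(0)
        out, _ = self.decoder(self.embed(tokens), h0)
        return self.out(out)

# Assumed linear mapping from voxel space into the VAE latent space,
# fit on (fMRI pattern, sentence latent) training pairs.
n_voxels, latent_dim = 5000, 32
brain_to_latent = nn.Linear(n_voxels, latent_dim)

def reconstruct(vae, voxels, bos_id=1, eos_id=2, max_len=20):
    """Greedily decode a sentence directly from a neural representation."""
    with torch.no_grad():
        z = brain_to_latent(voxels)              # voxels -> latent code
        tokens = torch.tensor([[bos_id]])
        for _ in range(max_len):
            logits = vae.decode(z, tokens)       # (1, t, vocab)
            next_id = logits[0, -1].argmax().item()
            tokens = torch.cat([tokens, torch.tensor([[next_id]])], dim=1)
            if next_id == eos_id:
                break
    return tokens.squeeze(0).tolist()

vae = SentenceVAE()
fake_voxels = torch.randn(1, n_voxels)           # placeholder fMRI pattern
print(reconstruct(vae, fake_voxels))             # token ids of the decoded sentence
```

With untrained weights this prints arbitrary token ids; the point of the sketch is the data flow (voxels → latent code → decoded sentence), which is how a decoder can generate text "directly from neural representations alone" at roughly topic-level fidelity.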

© 2023 Japanese Cognitive Science Society