The Japanese Journal of Psychonomic Science (基礎心理学研究)
Online ISSN : 2188-7977
Print ISSN : 0287-7651
ISSN-L : 0287-7651
Current Issue
Book Reviews
Lecture Papers
FY2024 First Forum: "Experimental Psychology Research Using AI"
  • 浅野 倫子, 山田 祐樹
    2025, Vol. 43, No. 2, p. 122
    Published: 2025/03/31
    Released on J-STAGE: 2025/06/18
    Journal Free Access
  • 林 隆介
    2025, Vol. 43, No. 2, pp. 123-128
    Published: 2025/03/31
    Released on J-STAGE: 2025/06/18
    Journal Free Access

    When humanoid characters appear only partially human, they often evoke eeriness rather than a favorable impression, a phenomenon known as the 'Uncanny Valley' effect. In this study, we tested whether a state-of-the-art AI model that performs at a human level on image evaluation tasks also exhibits the Uncanny Valley effect. We fed images that gradually morphed between human face images and images of other objects into a CLIP model trained to associate images with text, and analyzed which words the model tended to associate with each image. The model associated the intermediate morphs, where cognitive cues conflict most strongly, with negative words related to eeriness, demonstrating an Uncanny Valley effect similar to that observed in humans. Using AI as a model of human perceptual and cognitive processing in experimental psychology research is a promising approach for examining general tendencies in human image evaluation and can help narrow the parameter space of subsequent human experiments.
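    The word-association step described in this abstract can be sketched as CLIP-style zero-shot scoring: embed the image and a list of candidate words, then rank the words by softmax over cosine similarity. The embeddings and word list below are toy stand-ins (one-hot vectors rather than real CLIP outputs), chosen only to illustrate the ranking logic; this is a minimal sketch, not the authors' actual pipeline.

    ```python
    import numpy as np

    def associate_words(image_emb, word_embs, words, temperature=0.01):
        """CLIP-style zero-shot association: rank candidate words by
        softmax over cosine similarity between image and text embeddings."""
        img = image_emb / np.linalg.norm(image_emb)
        txt = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
        logits = (txt @ img) / temperature
        probs = np.exp(logits - logits.max())  # stable softmax
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]
        return [(words[i], float(probs[i])) for i in order]

    # Toy stand-ins: one-hot "text embeddings" and an image embedding for a
    # mid-morph image that loads most heavily on the "creepy" direction.
    words = ["human", "creepy", "doll", "object"]
    word_embs = np.eye(4)
    image_emb = np.array([0.3, 0.8, 0.2, 0.1])

    ranking = associate_words(image_emb, word_embs, words)
    print(ranking[0][0])  # → creepy
    ```

    In the real study, `image_emb` and `word_embs` would come from a pretrained CLIP encoder, and the candidate vocabulary would cover a broad range of evaluative words rather than four hand-picked ones.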

  • 小松 英彦, 上田 恭平, 渡辺 英治
    2025, Vol. 43, No. 2, pp. 129-134
    Published: 2025/03/31
    Released on J-STAGE: 2025/06/18
    Journal Free Access

    Large Language Models (LLMs), which learn vast amounts of knowledge and generate text indistinguishable from human writing, may serve as a new platform for psychological experiments. We asked GPT-3 to name the basic colors it associates with letters and digits, and found that the frequencies of the basic color names it produced closely resembled those given by humans. To investigate how GPT-3 generates these color names, we then asked about color names directly, without presenting any characters. We found that GPT-3 has a distinctive pattern of association between characters and color names, and that this pattern resembles the human one. Furthermore, by combining these results with those of the reverse task, in which characters were requested given color names, we could express GPT-3's behavior to some extent as a combination of random variables over color names and characters. These results suggest that advanced LLMs may substitute for human participants in psychological experiments, and that analyzing LLM responses may provide new understanding of the mechanisms of human behavior on the same tasks.
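    The frequency comparison in this abstract can be sketched as follows: tally the color names given across repeated trials, normalize them into a frequency vector over a fixed basic-color vocabulary, and compare model and human vectors. The answer lists and vocabulary here are hypothetical illustrations, and cosine similarity stands in for whatever similarity measure the authors actually used.

    ```python
    from collections import Counter

    COLORS = ["red", "blue", "green", "yellow", "white", "black"]

    def answer_frequencies(answers, vocabulary=COLORS):
        """Turn a list of raw color-name answers into a frequency vector
        over a fixed basic-color vocabulary."""
        counts = Counter(answers)
        total = sum(counts[c] for c in vocabulary) or 1
        return [counts[c] / total for c in vocabulary]

    def cosine(p, q):
        """Cosine similarity between two frequency vectors."""
        dot = sum(a * b for a, b in zip(p, q))
        norm = (sum(a * a for a in p) * sum(b * b for b in q)) ** 0.5
        return dot / norm if norm else 0.0

    # Hypothetical repeated answers for one letter; real data would come
    # from repeated prompts to the model and from human participants.
    model_answers = ["red"] * 7 + ["blue"] * 2 + ["yellow"]
    human_answers = ["red"] * 6 + ["blue"] * 3 + ["white"]

    similarity = cosine(answer_frequencies(model_answers),
                        answer_frequencies(human_answers))
    ```

    Repeating this per character and averaging would give an overall model-human agreement score of the kind the abstract reports.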

  • 濱田 太陽
    2025, Vol. 43, No. 2, pp. 135-141
    Published: 2025/03/31
    Released on J-STAGE: 2025/06/18
    Journal Free Access

    This paper examines why neuroscientists in the 2020s are using large language models rather than brains as research subjects, from temporal, systemic, and career-oriented perspectives. First, we review the evolution of neuroscience into big science and the development of integrated data platforms since the 2010s. Next, as a new movement, we discuss the emergence of representational alignment and NeuroAI, which use pre-trained DNNs, including large language models, as research targets and platforms. We then introduce our research on quantifying psychological constructs using language models and discuss its significance as an analytical platform for psychological constructs. Through this, we explore the potential of pre-trained DNNs as a new research method in neuroscience and consider the career turning point faced by early-career (neuro)scientists.

Commentary
Reports
Introductions
Conference Presentation Abstracts