Cognitive Studies: Bulletin of the Japanese Cognitive Science Society
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Volume 29, Issue 1
Displaying 1-18 of 18 articles from this issue
Foreword
Feature: The horizon of deep learning from the perspective of cognitive science
  • Masahiro Takamura, Akira Midorikawa, Shin’ichi Asakawa
    Article type: other
    2022 Volume 29 Issue 1 Pages 3-4
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS
    Download PDF (366K)
  • Shun-ichi Amari
    Article type: Invited Paper
    2022 Volume 29 Issue 1 Pages 5-13
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    Deep learning makes it possible to recognize patterns, play games, process and translate sentences, and perform other tasks by learning from examples. It sometimes outperforms humans on specific problems. This naturally raises a fundamental question: how different are the ways of information processing in deep learning and in humans? To answer this question, we briefly recapitulate the history of AI and deep learning. We then show that deep learning generates very high-dimensional empirical formulae for interpolation and extrapolation. Humans do something similar, but after finding such empirical formulae, they search for the reasons why the formulae work well. Humans search for fundamental principles underlying phenomena in the environment, whereas deep learning does not. Humans cognize and understand the world they live in with consciousness. Furthermore, humans have a mind. Humans have obtained mind and consciousness through a long history of evolution, which deep learning has not. What is the role of mind and consciousness in cognition and understanding? The human brain, like the brains of other animals, has an excellent ability for prediction, which is fundamental for survival in a harsh environment. However, humans have also developed the ability of postdiction, which reviews an action plan based on a prediction before execution by integrating various pieces of evidence. This is an important function of consciousness, which deep learning does not have.

    Download PDF (413K)
  • Kunihiko Fukushima
    Article type: Invited Paper
    2022 Volume 29 Issue 1 Pages 14-23
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    Deep convolutional neural networks (deep CNNs) show great power for the robust recognition of visual patterns. The neocognitron, first proposed by Fukushima (1979), is recognized as the origin of deep CNNs. Its architecture was suggested by neurophysiological findings on the visual systems of mammals, and it acquires the ability to recognize visual patterns robustly through learning. Although the neocognitron has a long history, the network is still being improved: for example, the AiS (add-if-silent) learning rule for intermediate layers, the mWTA (margined WTA) learning rule for the deepest layer, pattern classification by IntVec (interpolating-vector), and a method for reducing the computational cost of IntVec without sacrificing the recognition rate. This paper discusses the recent neocognitron, focusing on its differences from the conventional deep CNN. Some other functions of the visual system, for example the recognition of partly occluded patterns, can also be realized by networks extended from the neocognitron.

    Download PDF (549K)
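    As a point of comparison for the conventional deep CNN discussed above (and not as an implementation of the neocognitron's AiS, mWTA, or IntVec procedures), one convolution-plus-pooling stage, the pairing of feature extraction and pooling whose lineage goes back to the neocognitron's feature-extracting and pooling cell layers, can be sketched as follows; all layer sizes below are hypothetical.

      import torch
      from torch import nn

      # One stage of a conventional CNN: feature extraction followed by pooling.
      # Layer sizes are arbitrary placeholders chosen for illustration only.
      stage = nn.Sequential(
          nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5, padding=2),
          nn.ReLU(),
          nn.MaxPool2d(kernel_size=2),
      )

      x = torch.randn(1, 1, 28, 28)   # a dummy single-channel 28x28 image
      print(stage(x).shape)           # torch.Size([1, 8, 14, 14])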
  • Takuya Matsuda
    Article type: Invited Paper
    2022 Volume 29 Issue 1 Pages 24-35
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    We discuss possible paths to creating a superintelligence, that is, an artificial general intelligence (AGI) of immense intellectual capacity. We argue that the creation of a superintelligence would have huge political, economic, and cultural impacts on the world. There could be two approaches to creating AGI: a biological approach and an engineering one. In the former case, it is necessary to find the master algorithm operating in the human neocortex; we argue that a possible model of the master algorithm is a dynamic Bayesian network. In the latter approach, we are witnessing great progress in natural language processing AI such as the Transformer, BERT, and GPT-3, which exhibit glimpses of real human intelligence. We argue that the key element here is the attention mechanism.

    Download PDF (634K)
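    The attention mechanism identified above as the key element of Transformer-style models can be illustrated, outside the paper itself, by the textbook scaled dot-product formulation; the NumPy sketch below uses hypothetical array sizes and is only a minimal rendering of that standard computation.

      import numpy as np

      def scaled_dot_product_attention(Q, K, V):
          """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
          d_k = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
          return weights @ V                                 # weighted sum of values

      # Hypothetical example: 3 query tokens, 5 key/value tokens, dimension 8.
      rng = np.random.default_rng(0)
      Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
      print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 8)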
  • Yutaka Matsuo
    Article type: Invited Paper
    2022 Volume 29 Issue 1 Pages 36-46
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    This paper proposes an integrated architecture for intelligence based on recent advances in deep learning. Two systems, called BeastOS and the Language App, represent the sensorimotor and symbolic processing systems, respectively. A world model is acquired through physical interaction with the environment, and disentangling factors in the world model makes counterfactual imagination possible. A query to the Language App can trigger the generation of data by the world model and the generation of an answer based on those data. Such integration of deep learning models with external modules has been shown to be possible in a number of existing studies. Furthermore, we argue that primitive functions such as knowledge processing, reasoning, long-term planning, and decision making can be obtained by learning on the corresponding datasets or tasks, called linguistic tasks. The main claim of this proposal is that symbolic processing is a set of functions acquired through deep learning with discrete inputs and outputs. The proposed model is novel in that it integrates a large body of prior discussion in AI and cognitive science with the latest findings in deep learning.

    Download PDF (754K)
  • Shogo Makioka
    Article type: Review Paper
    2022 Volume 29 Issue 1 Pages 47-62
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    Synesthesia is a phenomenon in which specific stimuli consistently and automatically induce additional conscious experiences. The stimuli that cause synesthesia are called inducers, and the sensations evoked by inducers are called concurrents. This paper is organized around the following two questions. (1) How are concurrents generated? (2) Why do concurrents that are inconsistent with external stimuli not disappear through learning? Question (1) has been explained by innate connections or learned associations between modalities. However, the mere assumption of intermodal connections cannot explain the mixture of regularity and irregularity observed in synesthesia. In this paper, we discuss the self-organizing model of spatial sequence synesthesia proposed by the author, psychological experiments on the commonalities between synesthetes and nonsynesthetes, and the possibility that these experiments indicate that self-organizing learning between modalities takes place in both synesthetes and nonsynesthetes. Many theories of perception assume that learning takes place in such a way as to minimize the error between the predictions made by the internal model and the sensory input. This is also true for deep learning networks. Such learning should work to eliminate concurrents that are inconsistent with external stimuli, but concurrents do not disappear in synesthetes. This leads to question (2), and we discuss this issue in light of Seth's (2014) discussion of hierarchical generative models, Gershman's (2019) discussion of generative adversarial networks, and Cleeremans et al.'s (2020) self-organizing metarepresentational account of consciousness.

    Download PDF (5813K)
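    The prediction-error learning invoked above (adjusting an internal model so as to minimize the error between its predictions and the sensory input) can be illustrated with a deliberately minimal one-layer sketch; the linear generative model, learning rate, and data below are hypothetical and are not taken from the paper or from the models it reviews.

      import numpy as np

      # Hypothetical linear generative model: prediction = W @ latent.
      rng = np.random.default_rng(1)
      W = rng.normal(size=(10, 3))            # fixed generative weights
      x = rng.normal(size=10)                 # the "sensory input"
      latent = np.zeros(3)                    # internal estimate to be refined

      for _ in range(200):
          error = x - W @ latent              # prediction error
          latent += 0.02 * W.T @ error        # gradient step that reduces squared error

      print(np.linalg.norm(x - W @ latent))   # residual error after settling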
  • Akitoshi Ogawa, Masaki Tanaka
    Article type: Review Paper
    2022 Volume 29 Issue 1 Pages 63-73
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    Functional neuroimaging (FNI) plays an essential role in cognitive science by investigating the information processing underlying cognitive mechanisms. Computational models that explain behavior and its underlying information processing have become indispensable for the functional mapping of cognition in the FNI field. However, it is challenging to use computational models consisting of simple equations and a small number of parameters to reveal the distributed representation of information processing in the brain. Machine learning has therefore been used to analyze activation patterns associated with information processing in the brain. Even before the deep learning revolution, machine learning was used to predict brain activity patterns from stimuli (i.e., encoding) and to discriminate or reconstruct stimuli and behavior from brain activity (i.e., decoding). The convolutional neural network (CNN), a deep neural network (DNN) that mimics the visual nervous system for object recognition, was a pioneering example of the potential of deep learning as a computational model of the brain: the activity of the middle layers of a CNN can reflect the distributed processes for object recognition in the ventral visual pathway. To use DNNs as computational models in FNI for a broader range of perceptual and cognitive processes, the activity of the middle layers of a DNN should correspond to the activation of brain regions. This article briefly reviews computational models in FNI and the use of deep learning in FNI machine learning, and discusses DNNs as computational models in FNI. We suggest that deep learning can serve as a computational model in FNI, connecting activation patterns in the brain with hierarchical and distributed cognitive processes.

    Download PDF (1311K)
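    As background for the encoding analyses mentioned above (predicting brain activity from stimulus or DNN-layer features), a generic voxel-wise linear encoding model can be sketched as follows; the feature matrix, voxel responses, and ridge penalty are hypothetical placeholders rather than the procedures of the studies under review.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import train_test_split

      # Hypothetical data: 200 stimuli, 64 DNN-layer features, 50 voxels.
      rng = np.random.default_rng(2)
      features = rng.normal(size=(200, 64))             # e.g., middle-layer activations
      true_w = rng.normal(size=(64, 50))
      voxels = features @ true_w + rng.normal(scale=5.0, size=(200, 50))

      X_tr, X_te, y_tr, y_te = train_test_split(features, voxels, random_state=0)
      model = Ridge(alpha=10.0).fit(X_tr, y_tr)         # one linear map per voxel
      pred = model.predict(X_te)

      # Encoding accuracy per voxel: correlation of predicted and measured responses.
      r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(voxels.shape[1])]
      print(round(float(np.mean(r)), 3))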
  • Yasushi Terao
    Article type: Research Material
    2022 Volume 29 Issue 1 Pages 74-84
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    This paper aimed to reconfirm the contribution of error data observed in spontaneous speech to language production research and to find suggestions for the development of current simulation models. First, I introduced the basic perspective of the analysis and quantitatively showed the overall trends of speech errors in Japanese. Then, I divided the interactive activation model, which has been very effective in describing speech errors, into two versions, a conceptual model and a computational model, according to when simulation became possible, and showed their behavior. The factors treated as “noise” in the computational version were examined by analyzing lexical substitution errors from contextual, semantic, and phonological perspectives. The results not only show the importance of finding a way to incorporate contextuality into computational models but also provide suggestions for developing “more human-oriented” computational models.

    Download PDF (818K)
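    For readers unfamiliar with the computational version of the interactive activation framework discussed above, the toy loop below shows, in a heavily simplified and hypothetical form, how noise injected into activations can occasionally let a competitor win and produce a substitution error; the units, parameters, and noise level are illustrative only and do not reproduce the model analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      words = ["cat", "cap", "dog"]
      bottom_up = np.array([1.0, 0.8, 0.2])   # hypothetical input support: "cat" is intended

      errors, trials = 0, 1000
      for _ in range(trials):
          act = np.zeros(3)
          for _ in range(20):                              # settle over a few cycles
              noise = rng.normal(scale=0.5, size=3)
              act = 0.8 * act + 0.2 * (bottom_up + noise)  # leaky accumulation with noise
          if words[int(np.argmax(act))] != "cat":          # a competitor won the selection
              errors += 1

      print(errors / trials)                               # error rate grows with the noise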
Research Papers
  • Amane Sagasaki, Tatsunori Ishii, Katsumi Watanabe
    Article type: Research Paper
    2022 Volume 29 Issue 1 Pages 85-99
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    State authenticity is the subjective sense of being one’s true self in a particular situation (Lenton, Bruder, et al., 2013). Previous studies have suggested that current mood influences state authenticity because judgments of state authenticity are made heuristically from available information (i.e., mood). However, the results have not been clear. We examined whether mood manipulation with short movies would affect state authenticity. The results partially supported the hypotheses: participants in a positive mood reported a higher degree of true self and less self-alienation than those in a neutral mood. However, the negative mood had no consistent effect on state authenticity. These results suggest that the effect of mood on state authenticity is limited, or more complex than previously thought, and point to the need for further investigation.

    Download PDF (1400K)
Review Papers
  • Miho Fuyama, Hayato Saigo
    Article type: Review Paper
    2022 Volume 29 Issue 1 Pages 100-119
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    When readers comprehend a text, they can entertain multiple interpretations simultaneously. Literary theories and art studies have indicated that multiple and indeterminate interpretations allow us to create novel understanding and provide us with aesthetic experiences. However, in cognitive science, such multiple and indeterminate interpretations have not been represented or modeled. This article proposes a way of modeling the states of multiple and indeterminate interpretations as a time series based on quantum probability theory and discusses the prospects of this approach, focusing on research into comprehension and aesthetics. First, we discuss the value of indeterminate and multiple interpretations for text comprehension and aesthetic experiences. Then, after reviewing previous comprehension studies, we propose a model of the interpretation state, including its indeterminacy and multiplicity, as a superposition state based on quantum probability theory and physics. Further, we outline the basics of quantum probability theory and recent studies on quantum cognition that underpin this proposal. Finally, we discuss the prospects of comprehension studies using our approach.

    Download PDF (2023K)
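    The superposition-state modeling referred to above can be written, in generic quantum-probability notation rather than in the paper's own formulation, with hypothetical basis interpretations I_A and I_B:

      \[
      |\psi\rangle = \alpha\,|I_A\rangle + \beta\,|I_B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
      \]
      \[
      P(I_A) = |\langle I_A \mid \psi\rangle|^2 = |\alpha|^2, \qquad
      P(\varphi) = |\langle \varphi \mid \psi\rangle|^2 = \tfrac{1}{2} + \operatorname{Re}\!\left(\alpha\overline{\beta}\right)
      \quad \text{for} \quad |\varphi\rangle = \tfrac{1}{\sqrt{2}}\left(|I_A\rangle + |I_B\rangle\right).
      \]

    The interference term in the second expression is what distinguishes a genuinely indeterminate superposition of interpretations from a classical probabilistic mixture, and this is the property that quantum-probability models of cognition exploit.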
Brief Articles
  • Ryunosuke Oka, Takashi Kusumi
    Article type: Brief Article
    2022 Volume 29 Issue 1 Pages 120-124
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    This study examined the correlation between interpretive diversity and the semantics of the topic in Japanese nominal metaphorical expressions, in particular the relationship of interpretive diversity to topic semantic neighborhood density (how close the topic word is to its nearest neighbor in semantic space) and to topic abstractness. In all, 43 metaphors (in both metaphor and simile form) and their interpretations were used as stimuli. The results showed topic semantic neighborhood density and abstractness to be negatively and positively correlated, respectively, with the interpretive diversity of the metaphor. In contrast, vehicle semantic neighborhood density was positively correlated with the interpretive diversity of the simile. These results suggest that the roles that topic meanings and vehicle meanings in semantic space play in interpretation differ between metaphors and similes.

    Download PDF (416K)
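    The semantic neighborhood density measure glossed above (how close a word lies to its nearest neighbor in semantic space) can be computed, under the common cosine-similarity convention and with purely hypothetical word vectors, as in the following sketch.

      import numpy as np

      def nearest_neighbor_similarity(target, others):
          """Cosine similarity between a target vector and its closest neighbor."""
          t = target / np.linalg.norm(target)
          o = others / np.linalg.norm(others, axis=1, keepdims=True)
          return float(np.max(o @ t))       # larger value = denser semantic neighborhood

      # Hypothetical 300-dimensional vectors for a topic word and a background vocabulary.
      rng = np.random.default_rng(4)
      topic_vec = rng.normal(size=300)
      vocab_vecs = rng.normal(size=(5000, 300))
      print(nearest_neighbor_similarity(topic_vec, vocab_vecs))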
  • Chika Shimizu, Masasi Hattori
    Article type: Brief Article
    2022 Volume 29 Issue 1 Pages 125-130
    Published: March 01, 2022
    Released on J-STAGE: March 15, 2022
    JOURNAL FREE ACCESS

    The Monty Hall problem is a well-known but difficult probabilistic inference task; however, its difficulty is not solely a matter of probabilistic calculation. The problem also involves social interaction between the participant and the character in the problem, Monty: a dialogue between the contestant (i.e., the participant) and the host can induce strategic thinking in the participant. In this study, we examined how trust in the host affects participants’ choices. Answers from 232 participants revealed that people tended to be more conservative (i.e., they did not choose to change doors) when they mistrusted the host, suggesting that this non-normative behavior may result from the participants’ defense against betrayal by the host. However, it also became clear that further examination is needed of how information about the credibility of the host who interacts with the participant affects the participant’s choice.

    Download PDF (403K)
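    For reference, the normative benchmark against which the "conservative" (stay) choices above are judged is that switching wins two thirds of the time when the host knowingly opens a goat door; the short simulation below only reproduces that standard result and is not part of the study.

      import random

      def monty_hall(switch, trials=100_000):
          wins = 0
          for _ in range(trials):
              car = random.randrange(3)                 # door hiding the car
              choice = random.randrange(3)              # contestant's first pick
              # The host opens a door that is neither the pick nor the car.
              opened = next(d for d in range(3) if d != choice and d != car)
              if switch:                                # move to the one remaining door
                  choice = next(d for d in range(3) if d != choice and d != opened)
              wins += (choice == car)
          return wins / trials

      print(monty_hall(switch=False))   # approximately 1/3
      print(monty_hall(switch=True))    # approximately 2/3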
Presentation Awards
Literature Guides