Cognitive Studies: Bulletin of the Japanese Cognitive Science Society
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Volume 24, Issue 1
Foreword
Message to Members
Feature: Development of Cognitive Science Driven by Recent Computational Models
  • Tadahiro Taniguchi, Hiroyuki Okada
    2017 Volume 24 Issue 1 Pages 5-7
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (996K)
  • Ichiro Kobayashi
    2017 Volume 24 Issue 1 Pages 8-15
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     This paper introduces an approach to 'meaning' from the viewpoint of Hallidayan
     linguistics, that is, systemic functional linguistics (SFL). It explains the basic ideas
     of SFL, compares the Hallidayan and Chomskyan approaches to meaning, and discusses
     the relation between SFL and Wittgenstein's philosophy.
    Download PDF (1548K)
  • Yuya Unno
    2017 Volume 24 Issue 1 Pages 16-22
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     Although recent artificial intelligence technologies have achieved prominent results,
     machines cannot yet behave like humans. I compare humans' language acquisition
     ability with natural language processing based on machine learning technologies, taking
     the learning of word embedding vectors as an example. I also introduce some notable
     lines of research that may fill the gap: one-shot learning, memory models, and language
     game tasks. Although this research has only just begun, it will be important for
     enabling artificial intelligence to acquire human intelligence.
    Download PDF (1878K)
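The word-embedding example mentioned in the abstract above can be illustrated with a minimal distributional sketch. The toy corpus, the one-word context window, and the cosine measure below are all invented for illustration; the paper discusses learned embedding vectors in general, not this particular construction.

```python
from collections import defaultdict
from math import sqrt

# Tiny invented corpus; real embeddings are trained on far larger text.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]

# Build sparse co-occurrence vectors within a +/-1 word window.
vectors = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                vectors[w][sent[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# "cat" and "dog" occur in similar contexts, so their vectors end up close.
print(round(cosine(vectors["cat"], vectors["dog"]), 2))  # 0.87
```

Words that share contexts ("cat"/"dog") come out more similar than words that do not ("cat"/"mat"), which is the distributional intuition behind embedding vectors.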
  • Tomoaki Nakamura, Takayuki Nagai
    2016 Volume 24 Issue 1 Pages 23-32
    Published: December 30, 2016
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     In this study, we define concepts as categories into which a robot classifies perceptual
     information obtained through interaction with others and the environment, and we define
     the inference of unobserved information through these concepts as understanding.
     Furthermore, a robot can infer unobserved perceptual information from words by
     connecting concepts and words; this inference is the understanding of word meanings.
     We propose probabilistic models that enable robots to learn concepts and language,
     and in this paper we present an overview of the proposed models.
    Download PDF (21815K)
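The notion of "understanding" defined in the abstract above, inferring unobserved perceptual information via concepts connected to words, can be sketched as a minimal probabilistic lookup. The concepts, modalities, and probabilities below are invented for illustration and are far simpler than the paper's actual probabilistic models.

```python
# Hypothetical learned concepts: each word is connected to a concept that
# stores a distribution over features for every perceptual modality.
concepts = {
    "ball": {"vision": {"round": 0.9, "angular": 0.1},
             "haptic": {"soft": 0.8, "hard": 0.2}},
    "cup":  {"vision": {"round": 0.4, "angular": 0.6},
             "haptic": {"soft": 0.1, "hard": 0.9}},
}

def infer_unobserved(word, modality):
    """'Understanding' a word: predict its most likely unobserved feature."""
    dist = concepts[word][modality]
    return max(dist, key=dist.get)

# Hearing "ball" without touching it, the robot still predicts it feels soft.
print(infer_unobserved("ball", "haptic"))  # soft
```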
  • Akihito Sudo, Naoya Fujiwara, Keita Tokuda, Hidehito Honda, Kazuhiro ...
    2017 Volume 24 Issue 1 Pages 33-51
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     Automatic creation of concepts is important in various situations. Previous research
     on conceptual blending and concept invention proposed cognitive models representing
     the process by which people combine concepts, and the relationships among them.
     However, those studies do not allow one to create new concepts automatically in the
     real world, where there are innumerable notions whose meanings are time-varying.
     Because the previous models cannot discover which notions should be combined to
     create successful concepts, a user must find an appropriate combination of notions;
     there are approximately 50 million combinations in the business domain alone. We
     therefore propose a novel model of the concept creation process that makes automatic
     creation of new and successful concepts possible even in such a real-world setting. We
     formalize the concept creation process as the discovery of new connections between
     existing concepts, which can be mathematically represented as chronological change
     in a semantic network. The input and output data of this process can be built from a
     large document set; hence, machine learning techniques can reveal a law underlying
     the concept creation process. After extracting such a law, the machine learning model
     can propose new concepts in accordance with it. In experiments, we evaluated the
     validity of this approach using real successful concepts and document sets, and created
     new concepts in the food category.
    Download PDF (1692K)
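The formalization in the abstract above, concept creation as the discovery of new connections in a semantic network, can be sketched as a simple link-prediction step. The toy network and the common-neighbors score are illustrative assumptions; the paper learns its law from large document sets rather than using a fixed heuristic like this one.

```python
from itertools import combinations

# Toy semantic network: nodes are notions, edges are existing associations.
network = {
    "coffee":    {"cafe", "caffeine", "breakfast"},
    "tea":       {"cafe", "caffeine", "breakfast"},
    "bread":     {"breakfast", "bakery"},
    "cafe":      {"coffee", "tea"},
    "caffeine":  {"coffee", "tea"},
    "breakfast": {"coffee", "tea", "bread"},
    "bakery":    {"bread"},
}

def score(a, b):
    """Common-neighbors score for a candidate new connection."""
    return len(network[a] & network[b])

# Rank currently unconnected pairs; the top pair is the proposed new concept.
candidates = [(a, b) for a, b in combinations(sorted(network), 2)
              if b not in network[a]]
best = max(candidates, key=lambda p: score(*p))
print(best, score(*best))  # ('coffee', 'tea') 3
```

A time-varying version of this idea would rescore the candidates as the network changes, which is the chronological aspect the model exploits.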
  • A Model for Overproduction of an English Past-Tense Morpheme and a Japanese Case Particle
    Yuji Kawai, Yuji Oshima, Minoru Asada
    2017 Volume 24 Issue 1 Pages 55-76
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     Young children produce multi-word sentences that include systematic errors, or
     overproduction. It has been reported that English-speaking children may add the
     morpheme "ed" to an irregular verb as its past tense, while Japanese-speaking children
     may place the case particle "NO" after an adjective. We hypothesize that an insufficient
     increase in grammatical categories causes such overproduction, which can be expected
     to disappear with a sufficient increase. We assume that the hidden states of a hidden
     Markov model (HMM) correspond to grammatical categories acquired from language
     input. Simulation results based on the HMM partially verified this hypothesis. In the
     English-trained model, overproduction appeared and then declined; however, it did
     not completely disappear, because the categories of regular and irregular verbs did
     not differentiate even when the model had many categories. In the Japanese-trained
     model, overproduction appeared and then disappeared through the differentiation of
     noun and adjective categories. The limitations of the proposed model and future
     issues are discussed.
    Download PDF (2325K)
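The role of HMM hidden states as grammatical categories, described in the abstract above, can be sketched with a tiny Viterbi decoding. The two categories, the vocabulary, and all probabilities below are invented; the paper's models are trained on actual English and Japanese input and use many more states.

```python
# Hypothetical two-category HMM: hidden states act as coarse grammatical
# categories, and emission probabilities link categories to words.
states = ("NOUN", "VERB")
start = {"NOUN": 0.6, "VERB": 0.4}
trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
         "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit = {"NOUN": {"dog": 0.5, "ball": 0.5},
        "VERB": {"throw": 0.5, "run": 0.5}}

def viterbi(words):
    """Most likely category sequence for a word sequence."""
    v = {s: start[s] * emit[s].get(words[0], 1e-6) for s in states}
    path = {s: [s] for s in states}
    for w in words[1:]:
        nv, npath = {}, {}
        for s in states:
            p, prev = max((v[r] * trans[r][s], r) for r in states)
            nv[s] = p * emit[s].get(w, 1e-6)
            npath[s] = path[prev] + [s]
        v, path = nv, npath
    return path[max(states, key=lambda s: v[s])]

print(viterbi(["dog", "throw", "ball"]))  # ['NOUN', 'VERB', 'NOUN']
```

In the paper's account, overproduction persists when two word classes (e.g., regular and irregular verbs) are absorbed into a single hidden state instead of differentiating into separate ones.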
  • Shohei Hidaka
    2017 Volume 24 Issue 1 Pages 67-78
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     One of the major paradigms in cognitive science is to model cognitive processes as
     information processing in a digital computer. Marr (1982) proposed capturing a
     cognitive process at three levels of information processing: hardware implementation,
     algorithm and representation, and computational theory. The "computational theory"
     level is considered the most important of the three, as it captures the goal of the
     information process and explains why the process is organized as it is. It is, however,
     often controversial what counts as a computational theory, and there are several
     variations in its interpretation. In this article, we review these views on computational
     theory and survey the potential problems of computational theory in the narrow
     sense that have been pointed out in the literature. In doing so, we discuss which
     aspects of the current paradigm should be extended toward a new alternative paradigm
     beyond the formulation of cognition as optimization.
    Download PDF (2184K)
  • Motoyuki Saito
    2017 Volume 24 Issue 1 Pages 79-95
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     Causal knowledge enables us to explain past events, to control the present environment,
     and to predict future outcomes. Over the last decade, causal Bayes nets have been
     recognized as a normative framework for causality and used as a psychological model
     to account for human causal learning and inference. This article provides an
     introduction to causal Bayes nets. According to causal Bayes nets, causal inference
     can be divided into three processes: (a) learning the structure of the causal network,
     (b) learning the strength of the causal relations, and (c) inferring the effect from the
     cause or the cause from the effect. For each process, I describe the predictions of
     causal Bayes nets, review experimental results, and suggest future directions. Although
     there are a few exceptions (e.g., Markov violation), most of the results are consistent
     with the predictions of causal Bayes nets. The current problems of the Bayesian
     approach and its future perspective are discussed.
    Download PDF (1067K)
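The three processes listed in the abstract above can be illustrated on the smallest possible causal Bayes net, a single cause-effect link; all numbers below are invented for illustration. With the structure (a) fixed and the strengths (b) given, process (c) covers both predictive and diagnostic inference:

```python
# Hypothetical one-link causal Bayes net: Cause -> Effect.
p_c = 0.3        # P(Cause): base rate of the cause
p_e_c = 0.9      # P(Effect | Cause): causal strength
p_e_nc = 0.2     # P(Effect | no Cause): background rate of the effect

# Predictive inference (cause -> effect): marginalize over the cause.
p_e = p_c * p_e_c + (1 - p_c) * p_e_nc

# Diagnostic inference (effect -> cause): Bayes' rule.
p_c_given_e = p_c * p_e_c / p_e

print(round(p_e, 3), round(p_c_given_e, 3))  # 0.41 0.659
```

Observing the effect raises the probability of the cause from 0.3 to about 0.66, which is the kind of prediction the reviewed experiments test against human judgments.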
  • Katsunari Shibata, Yuki Goto
    2017 Volume 24 Issue 1 Pages 96-117
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     We propound that, in order to avoid the "frame problem" and the "symbol grounding
     problem" and to open a way to analyze and realize human-like intelligence with higher
     functions, it is not enough merely to introduce deep learning; it is essential to move
     away from the deeply entrenched "division into functional modules" and to take the
     approach of "function emergence through end-to-end reinforcement learning." We
     summarize the functions that have been shown to emerge under this approach in past
     work, and clarify why the emergence of thinking, a typical higher function, is difficult.
      We claim that our hypothesis, that exploration grows into thinking through learning,
     is a key to breaking through this difficulty. To realize it, we introduce "reinforcement
     learning using a chaotic neural network," in which no external exploration noise needs
     to be added. We show that a robot with two wheels and a simple visual sensor can
     learn an obstacle avoidance task using this new reinforcement learning method.
    Download PDF (4519K)
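For contrast with the noise-free chaotic-network method described above, the conventional kind of reinforcement learner that does rely on external exploration noise can be sketched as epsilon-greedy tabular Q-learning. The corridor task and all hyperparameters are invented for illustration; the paper's robot task and learning rule are different.

```python
import random

random.seed(0)

# Toy corridor: states 0..4, actions move left/right, reward at the goal.
N, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, noise level

def greedy(s):
    """Action with the highest learned value in state s."""
    return max((-1, 1), key=lambda a: q[(s, a)])

for _ in range(500):
    s = 0
    while s != GOAL:
        # External exploration noise: random action with probability eps.
        a = random.choice((-1, 1)) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        target = r if s2 == GOAL else gamma * q[(s2, greedy(s2))]
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-goal state.
print([greedy(s) for s in range(GOAL)])  # [1, 1, 1, 1]
```

The `eps`-driven random actions are exactly the injected exploration noise that the chaotic-network approach replaces with the network's own internal dynamics.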
  • Toshimasa Yamazaki, Akiyoshi Akiyama, Soejima Eiko, Takahiko Yamam ...
    2017 Volume 24 Issue 1 Pages 118-128
    Published: March 01, 2017
    Released on J-STAGE: September 01, 2017
    JOURNAL FREE ACCESS
     The present study aimed to investigate the association between changes in the modular
     structure of brain functional connectivity networks (BFCNs) and individual variability
     in foreign language learning ability. Six healthy Japanese students (all male, age
     range: 16-21 years) completed pre- and post-training EEG sessions and English-word
     examinations, and BFCNs were constructed for the pre- and post-training data. Training
     sessions required participants to attempt to memorize 200 pairs of English words
     (the same words as in the examinations) and their Japanese meanings. The connectivity
     between any two electrodes (nodes) was calculated by determining the synchronization
     likelihood (SL) of the EEGs, and an edge connecting two nodes was drawn when
     statistically significant differences in SL values were observed between successful and
     unsuccessful trials. BFCNs for the pre-training data had higher edge density than those
     for the post-training data, though this difference was not statistically significant.
     Moreover, changes in the module composition of the BFCNs were associated with
     individual differences in scores between the two English-word examinations.
    Download PDF (6756K)
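The edge-density comparison in the abstract above uses a standard graph measure, which can be sketched directly; the electrode names and edge sets below are invented examples, not data from the study.

```python
def edge_density(n_nodes, edges):
    """Fraction of all possible undirected edges that are present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

# Hypothetical 4-electrode networks before and after training.
pre_edges = {("F3", "F4"), ("F3", "P3"), ("P3", "P4"), ("F4", "P4")}
post_edges = {("F3", "F4"), ("P3", "P4")}

print(edge_density(4, pre_edges), edge_density(4, post_edges))
```

Here the pre-training network keeps 4 of the 6 possible edges (density 2/3) and the post-training network keeps 2 (density 1/3), mirroring the direction of the (non-significant) difference the study reports.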
Presentation Awards
Book Reviews