認知科学 (Cognitive Studies)
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Volume 24, Issue 1
Foreword
Message to Members
Special Feature: Developments in Cognitive Science Opened Up by New Computational Theories
  • 谷口 忠大, 岡田 浩之
    2017, Volume 24, Issue 1, pp. 5-7
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
  • 小林 一郎
    2017, Volume 24, Issue 1, pp. 8-15
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     This paper introduces an approach to 'meaning' from the viewpoint of Hallidayan linguistics, that is, systemic functional linguistics (SFL). It explains the basic ideas of SFL, compares Hallidayan and Chomskyan linguistics in their approaches to meaning, and describes the relation between SFL and Wittgenstein's philosophy.
  • 海野 裕也
    2017, Volume 24, Issue 1, pp. 16-22
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     Although recent artificial intelligence technologies have achieved prominent results, machines cannot yet behave like humans. I compare humans' language acquisition with natural language processing by machine learning technologies, taking the learning of word embedding vectors as an example. I also introduce some notable lines of research that may fill the gap: one-shot learning, memory models, and language game tasks. Although this research has only just begun, it will be important for artificial intelligence to acquire human intelligence.
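The learning of word embedding vectors mentioned above can be illustrated with a minimal count-based sketch (co-occurrence counts factorized by SVD). The toy corpus, window size, and dimensionality are invented for illustration and are not the methods discussed in the paper.

```python
import numpy as np

# Toy corpus; vocabulary and co-occurrence window are illustrative choices.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Build a symmetric co-occurrence matrix with a window of +/-2 words.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                cooc[index[w], index[sent[j]]] += 1

# Low-rank factorization (truncated SVD) yields dense embedding vectors.
u, s, _ = np.linalg.svd(cooc)
embeddings = u[:, :2] * s[:2]  # 2-dimensional embeddings

def similarity(a, b):
    va, vb = embeddings[index[a]], embeddings[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Words appearing in similar contexts ("cat"/"dog") tend toward similar vectors.
print(similarity("cat", "dog"))
```

Real systems learn such vectors predictively (e.g., skip-gram) from far larger corpora; the count-based form above is just the simplest way to show the distributional idea.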
  • 中村 友昭, 長井 隆行
    2016, Volume 24, Issue 1, pp. 23-32
    Published: 2016/12/30
    Released: 2017/09/01
    Journal Free Access
     In this study, we define concepts as categories into which a robot classifies perceptual information obtained through interaction with others and the environment, and the inference of unobserved information through these concepts is defined as understanding. Furthermore, a robot can infer unobserved perceptual information from words by connecting concepts and words; this inference is the understanding of word meanings. We propose probabilistic models that enable robots to learn concepts and language. In this paper, we present an overview of the proposed models.
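The idea of concepts connecting words to perceptual modalities can be illustrated with a deliberately simplified sketch: hypothetical multimodal observations are grouped per word, and a word is "understood" by predicting its unobserved haptic modality. The paper's models instead learn such concepts with unsupervised probabilistic models; everything below is an invented stand-in.

```python
import numpy as np

# Hypothetical training data: each observation pairs perceptual features
# (a visual and a haptic measurement) with a word label.
observations = [
    ("apple",  np.array([0.9, 0.1]), np.array([0.8])),  # (word, visual, haptic)
    ("apple",  np.array([0.8, 0.2]), np.array([0.7])),
    ("sponge", np.array([0.2, 0.9]), np.array([0.1])),
    ("sponge", np.array([0.3, 0.8]), np.array([0.2])),
]

# "Concepts" here are simply groups of observations per word; each concept
# stores the mean of every perceptual modality.
concepts = {}
for word, visual, haptic in observations:
    concepts.setdefault(word, {"visual": [], "haptic": []})
    concepts[word]["visual"].append(visual)
    concepts[word]["haptic"].append(haptic)

models = {
    word: {mod: np.mean(vals, axis=0) for mod, vals in mods.items()}
    for word, mods in concepts.items()
}

def infer_haptic(word):
    """'Understanding' a word: predict the unobserved haptic modality."""
    return models[word]["haptic"]

print(infer_haptic("apple"))   # predicted hardness-like feature for "apple"
```

In the paper's setting the grouping itself is learned without word supervision; here the per-word grouping merely shows how a learned concept lets words stand in for unobserved percepts.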
  • 須藤 明人, 藤原 直哉, 徳田 慶太, 本田 秀仁, 植田 一博
    2017, Volume 24, Issue 1, pp. 33-51
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     Automatic creation of concepts is important in various situations. Previous research on conceptual blending and concept invention proposed cognitive models that represent the process by which people combine concepts, and the relationships among them. However, those models do not allow one to create new concepts automatically in the real world, where there are innumerable notions whose meanings vary over time. Because the previous models cannot discover which notions should be combined to create successful concepts, a user must find an appropriate combination of notions; there are approximately 50 million combinations in the business domain. We therefore propose a novel model of concept-creation processes that makes the automatic creation of new and successful concepts possible even in such a real-world setting. We formalize the concept-creation process as the discovery of new connections between existing concepts, which can be represented mathematically using chronological changes in semantic networks. The input and output data of this process can be built from a large document set, so machine learning techniques can reveal a law underlying the concept-creation process. After extracting such a law, the machine learning model can propose new concepts in accordance with it. In experiments, we evaluated the validity of this approach using real successful concepts and document sets, and created new concepts in the food category.
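The formalization of concept creation as discovering new connections in a semantic network can be sketched with a simple common-neighbor score. The network and notions below are invented examples; the paper's model instead learns a law of link formation from chronological document data.

```python
from itertools import combinations

# Hypothetical semantic network: nodes are notions, edges are observed
# links extracted from a document set at some time t.
edges_t = {
    ("coffee", "shop"), ("coffee", "milk"), ("shop", "book"),
    ("milk", "tea"), ("tea", "shop"),
}

def neighbors(graph, node):
    return {b for a, b in graph if a == node} | {a for a, b in graph if b == node}

nodes = {n for e in edges_t for n in e}

# Score every unconnected pair by its number of common neighbors --
# a simple proxy for "these notions are ready to be combined".
scores = {}
for a, b in combinations(sorted(nodes), 2):
    if (a, b) not in edges_t and (b, a) not in edges_t:
        scores[(a, b)] = len(neighbors(edges_t, a) & neighbors(edges_t, b))

best = max(scores, key=scores.get)
print(best, scores[best])  # the most promising new concept combination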
  • A Computational Model Common to the Overproduction of English Past-Tense Morphemes and Japanese Case Particles
    河合 祐司, 大嶋 悠司, 浅田 稔
    2017, Volume 24, Issue 1, pp. 55-76
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     Young children produce multi-word sentences that include systematic errors or overproduction. It has been reported that English-speaking children may add the morpheme "ed" to an irregular verb to form its past tense, while Japanese-speaking children may place the case particle "NO" after an adjective. We hypothesize that an insufficient increase in grammatical categories causes such overproduction, which can be expected to disappear once the categories increase sufficiently. We assume that the hidden states of a hidden Markov model (HMM) correspond to grammatical categories acquired from language input. Simulation results based on the HMM could partially verify this hypothesis. In the English-trained model, the overproduction appeared and then declined; however, it did not completely disappear, because the categories of regular and irregular verbs did not differentiate even when the model had many categories. In the Japanese-trained model, the overproduction appeared and then disappeared through the differentiation of noun and adjective categories. The limitations of the proposed model are pointed out and future issues are discussed.
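The role of HMM hidden states as grammatical categories can be illustrated with a toy forward-algorithm sketch. The states, vocabulary, and probabilities below are hand-set for illustration, with a single coarse VERB state standing in for undifferentiated verb categories; the paper's models are learned from corpora.

```python
import numpy as np

# A toy HMM: hidden states play the role of grammatical categories.
# With too few states, one "verb" state lumps regular and irregular verbs
# together, so the model assigns probability to overgeneralized forms.
states = ["NOUN", "VERB"]                   # deliberately too coarse
words = ["dog", "go", "goed", "went"]

# Hand-set (hypothetical) parameters: transition and emission probabilities.
trans = np.array([[0.2, 0.8],               # NOUN -> {NOUN, VERB}
                  [0.7, 0.3]])              # VERB -> {NOUN, VERB}
emit = np.array([[0.97, 0.01, 0.01, 0.01],  # NOUN emits mostly "dog"
                 [0.10, 0.40, 0.20, 0.30]]) # one VERB state covers all verb forms
start = np.array([0.9, 0.1])

def sequence_prob(sentence):
    """Forward algorithm: total probability of a word sequence."""
    idx = [words.index(w) for w in sentence]
    alpha = start * emit[:, idx[0]]
    for t in idx[1:]:
        alpha = (alpha @ trans) * emit[:, t]
    return float(alpha.sum())

# The coarse model rates the overgeneralized "dog goed" as fairly likely,
# mirroring the overproduction stage before categories differentiate.
print(sequence_prob(["dog", "goed"]), sequence_prob(["dog", "went"]))
```

Splitting the VERB state into separate regular- and irregular-verb states (and driving the emission of "goed" toward zero in the irregular one) would correspond to the differentiation the paper argues makes overproduction disappear.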
  • 日髙 昇平
    2017, Volume 24, Issue 1, pp. 67-78
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     One of the major paradigms in cognitive science is to model cognitive processes as information processing in a digital computer. Marr (1982) proposed capturing a cognitive process at three levels of information processing, known as the levels of hardware implementation, algorithm and representation, and computational theory. In particular, the "computational theory" level is considered the most important of the three, as it captures the goal of the information processing and explains why the process is organized as it is. It is, however, often controversial what should count as the computational theory, and there are several variations in its interpretation. In this article, we review these views on computational theory and survey the potential problems of computational theory in the narrow sense that have been pointed out in past literature. In doing so, we discuss the aspects of the current paradigm that should be extended toward a new alternative paradigm beyond the formulation of cognition as optimization.
  • 斎藤 元幸
    2017, Volume 24, Issue 1, pp. 79-95
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     Causal knowledge enables us to explain past events, control the present environment, and predict future outcomes. Over the last decade, causal Bayes nets have been recognized as a normative framework for causality and used as a psychological model to account for human causal learning and inference. This article provides an introduction to causal Bayes nets. Within this framework, causal inference can be divided into three processes: (a) learning the structure of the causal network, (b) learning the strength of the causal relations, and (c) inferring the effect from the cause or the cause from the effect. For each process, I describe the predictions of causal Bayes nets, review experimental results, and suggest future directions. Although there are a few exceptions (e.g., Markov violations), most of the results are consistent with the predictions of causal Bayes nets. The current problems of the Bayesian approach and its future perspectives are discussed.
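Process (c), inferring the effect from the cause and the cause from the effect, can be shown with a minimal two-node causal Bayes net; the parameter values below are illustrative only.

```python
# A minimal causal Bayes net with one cause C and one effect E (C -> E).
# Parameters are invented, not taken from any experiment in the article.
p_c = 0.3          # prior probability of the cause
p_e_given_c = 0.8  # causal strength: P(E=1 | C=1)
p_e_given_nc = 0.1 # background rate:  P(E=1 | C=0)

# Predictive inference (cause -> effect): marginalize over C.
p_e = p_c * p_e_given_c + (1 - p_c) * p_e_given_nc

# Diagnostic inference (effect -> cause), via Bayes' rule.
p_c_given_e = p_c * p_e_given_c / p_e

print(f"P(E) = {p_e:.3f},  P(C | E) = {p_c_given_e:.3f}")
```

Observing the effect raises belief in the cause well above its prior, which is the basic diagnostic pattern the experimental literature tests against human judgments.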
  • 柴田 克成, 後藤 祐樹
    2017, Volume 24, Issue 1, pp. 96-117
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     We propound that, in order to avoid the "frame problem" and the "symbol grounding problem" and to create a way to analyze and realize human-like intelligence with higher functions, it is not enough simply to introduce deep learning; it is essential to move away from the deeply entrenched "division into functional modules" and to take the approach of "function emergence through end-to-end reinforcement learning." We summarize the functions that have been shown to emerge under this approach in past work, and clarify why the emergence of thinking, a typical higher function, is difficult.
     We claim that our hypothesis, that exploration grows into thinking through learning, is a key to breaking through this difficulty. To realize this, we introduce "reinforcement learning using a chaotic neural network," in which adding external exploration noise is not necessary. We show that a robot with two wheels and a simple visual sensor can learn an obstacle-avoidance task using this new reinforcement learning method.
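The idea of exploration without externally added noise can be illustrated by replacing the random draw in a standard epsilon-greedy Q-learning loop with a deterministic logistic-map signal. This is only an analogy to the chaotic-neural-network method, applied to a toy 1-D task rather than the robot experiment.

```python
# Exploration driven by deterministic chaos instead of injected random noise:
# a logistic-map signal replaces the random draw in epsilon-greedy selection.
# Toy task: a 1-D track of 6 states; reaching the rightmost state is rewarded.

N = 6
q = [[0.0, 0.0] for _ in range(N)]   # Q-values for actions {left, right}
x = 0.3                               # logistic-map state (chaotic for r = 4)

def chaos():
    global x
    x = 4.0 * x * (1.0 - x)           # deterministic but effectively unpredictable
    return x

for episode in range(200):
    s = 0
    while s != N - 1:
        a = 1 if q[s][1] >= q[s][0] else 0     # greedy action
        if chaos() < 0.2:                      # chaotic signal acts as epsilon
            a = 1 - a
        s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == N - 1 else 0.0
        q[s][a] += 0.1 * (r + 0.9 * max(q[s2]) - q[s][a])
        s = s2

# After learning, the greedy policy typically prefers moving right, toward the reward.
print([1 if q[s][1] > q[s][0] else 0 for s in range(N - 1)])
```

In the paper's framing, the chaotic dynamics are internal to the network itself rather than an external signal; the sketch only shows that a deterministic source can play the exploratory role usually given to random noise.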
  • 山﨑 敏正, 秋山 暉佳, 副島 英子, 山本 宇彦
    2017, Volume 24, Issue 1, pp. 118-128
    Published: 2017/03/01
    Released: 2017/09/01
    Journal Free Access
     The present study aimed to investigate the association between changes in the modular structures of brain functional connectivity networks (BFCNs) and individual variability in foreign-language learning ability. Six healthy Japanese students (all male, age range: 16–21 years) completed pre- and post-training EEG sessions and examinations on English words. BFCNs were constructed for the pre- and post-training data. Training sessions required participants to attempt to memorize 200 pairs of English words (the same words as in the examinations) and their Japanese meanings. The connectivity between any two different electrodes (nodes) was calculated by determining the synchronization likelihood (SL) of the EEGs, and an edge connecting two nodes was drawn when a statistically significant difference in SL values was observed between successful and unsuccessful trials. BFCNs for the pre-training data had higher edge density than those for the post-training data, though this difference was not statistically significant. Moreover, changes in the module composition of the BFCNs were associated with individual differences in scores between the two English-word examinations.
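The edge-density measure used above can be sketched as follows. Pearson correlation on synthetic signals stands in for the synchronization likelihood, and a fixed threshold stands in for the study's statistical test between successful and unsuccessful trials; both substitutions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in connectivity values: 19 synthetic "electrode" signals, with
# Pearson correlation replacing the synchronization likelihood (SL).
signals = rng.standard_normal((19, 500))   # 19 channels, 500 samples
conn = np.corrcoef(signals)

# Draw an edge wherever connectivity exceeds a threshold (the study instead
# used a significance test between successful and unsuccessful trials).
threshold = 0.08
adj = (np.abs(conn) > threshold) & ~np.eye(19, dtype=bool)

# Edge density: realized edges as a fraction of all possible edges.
n = adj.shape[0]
density = adj.sum() / (n * (n - 1))
print(f"edge density = {density:.3f}")
```

Comparing this density between pre- and post-training networks, and tracking how nodes regroup into modules, corresponds to the analyses reported in the abstract.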
Conference Presentation Award
Book Reviews