The research area of human-robot interaction started around 2000, and we have studied it for more than 10 years. The conference is growing, but, on the other hand, we feel its limitations. One of the reasons is that the concept of interaction is too broad. We need to focus on the key issue. The author believes it is mutual learning between humans and robots. Mutual learning is the most important purpose in human society: we become members of society by learning something from others. This paper discusses human-robot learning in order to study human-robot interaction more deeply and to realize robots that work in our future society.
Dr. Ishiguro argued that the unsatisfactory development of robot technology through HRI, especially HRI taking a cognitive-scientific approach, is attributable to the breadth of the concept of “interaction” in HRI. He proposed that research on robot-human relations should aim to realize a “learn-from-one-another relationship,” launching HRL (Human-Robot Learning) in place of HRI. However, I am afraid that replacing the target is not sufficient for progress in research on robot-human relations from a cognitive-scientific approach. For further progress of HRI or this kind of research, it is necessary to develop and introduce new methods of research and verification. If field experiments are expected to be an important method for HRI or this kind of research, they will require theoretical refinement of observational methods for verification. I would like to propose that we set up a cycle in which the robot is defined and then made more sophisticated according to the progression of HRI. I hope HRI and HRL will provide new means for research on teaching, learning, and the relationship between teachers and students, leading to true innovation of the school system and educational administration.
Teaching has been recognized as a fundamental human ability to transmit skills, concepts, and cultural knowledge. Recently, intriguing evidence has been found in nonhuman animals, challenging the assumption that teaching requires complex cognitive abilities. This paper aims to clarify the possibilities of a pedagogical machine: an artifact that can help and enhance human learning. To this aim, we re-define a criterion used in animal research and extract the essential qualifications for a machine that can interact with humans. We also discuss future research toward a machine that can teach.
Cognitive research questions for clarifying the possibility of realizing a pedagogical machine were discussed in the paper by Hiraki. To define “teaching-learning behavior,” he added three conditions of “learning” to the three conditions of “teaching” by Caro & Hauser. Based on this definition, he discussed future research directions for the pedagogical machine. The paper is very stimulating and important in opening the discussion of the pedagogical machine. Three questions related to his paper are discussed in this comment:
1) whether the behavior of “teaching-learning” can be fully reduced to the interaction between two individuals,
2) the difficulty for a naive learner of expressing what has been learned along the learning process, and
3) whether a human being has the necessary and sufficient skills to act as a pedagogical agent.
The aim of this short article is to explore new trends in studying learning practices in real settings, in order ultimately to raise their quality. Having accumulated a huge amount of “learning” studies, we now have a rich database of “rules of thumb” for promoting education. In order to raise the adaptive generalizability of such rules, we explore the use of remotely operable robots, having them play the role of a learning partner acting as a good listener in the constructive interaction that occurs in collaborative classrooms. This allows us, for the first time in our studies, to quasi-control different groups of collaboration by introducing the “same” activities, playable only by such robots. Their role as listener also serves an important function: collecting more detailed and desired process data from learner-centered classrooms, which has been practically almost impossible to collect. To approach these aims, our joint project named “Human-Robot Symbiosis” has been exploring new challenges in robotics engineering, in the cognitive science of human-robot relation settings, and in understanding and implementing the basic mechanisms of collaborative learning. The article demonstrates three new types of study on the third topic. One uses a series of classroom-like Lego-block building classes run by a robot to promote spontaneous collaboration among children. Another confirms perspective-expansion effects that promote spontaneous dialogue for appreciating artworks, by having a robot participate in the dialogue and provide a “new” perspective supplied by highly experienced operators. The third identifies effective roles of learning partners in collaborative classes carefully designed to promote constructive interaction for the integration of knowledge. All three show the importance of giving learning agency to learners in different situations. The article concludes with directions for future study.
The cross-modality adjective metaphor (e.g., “red taste,” “silent color”) is a metaphor in which the vehicle (i.e., the adjective) and the topic (i.e., the noun; also called the tenor) express different perceptual qualities. Most existing studies examine how the acceptability of cross-modality adjective metaphors can be explained by the pairing of the vehicle’s and the topic’s perceptual qualities. Unlike these studies, this paper explores how people comprehend cross-modality adjective metaphors. We conducted a large-scale psychological experiment and collected 10,388 words associated with 62 cross-modality adjective metaphors. We regarded those words as features of the metaphors and classified them into the following four kinds: common (features listed for the metaphor, the vehicle, and the topic), vehicle-shared (features listed for both the metaphor and the vehicle, but not for the topic), topic-shared (features listed for both the metaphor and the topic, but not for the vehicle), and emergent (features listed for the metaphor, but not for either the vehicle or the topic). The results showed that there were significantly more emergent features than the other kinds of features in the comprehension of cross-modality adjective metaphors. We hypothesized that the emergent meanings of cross-modality adjective metaphors are based on scene association, and analyzed how many of the words associated with these metaphors could be classified as based on scene association. The results showed that there were significantly more words based on scene association than words not based on it. This suggests that the meanings of cross-modality adjective metaphors are basically grounded in scene association.
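The four-way feature classification described above amounts to simple set operations over the features listed for the metaphor, the vehicle, and the topic. A minimal sketch follows; the metaphor and its associated feature sets are invented for illustration and are not data from the study.

```python
# Sketch of the four-way feature classification (common, vehicle-shared,
# topic-shared, emergent). All feature sets below are hypothetical examples.

def classify_features(metaphor_feats, vehicle_feats, topic_feats):
    """Split the features listed for a metaphor into the four categories."""
    common = metaphor_feats & vehicle_feats & topic_feats
    vehicle_shared = (metaphor_feats & vehicle_feats) - topic_feats
    topic_shared = (metaphor_feats & topic_feats) - vehicle_feats
    emergent = metaphor_feats - vehicle_feats - topic_feats
    return {"common": common, "vehicle-shared": vehicle_shared,
            "topic-shared": topic_shared, "emergent": emergent}

# Hypothetical word associations for the metaphor "red taste"
metaphor = {"warm", "spicy", "strawberry", "passionate"}
vehicle = {"warm", "passionate", "fire"}   # associations of "red"
topic = {"spicy", "sweet", "warm"}         # associations of "taste"

result = classify_features(metaphor, vehicle, topic)
# "strawberry" is listed only for the metaphor, so it counts as emergent.
```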
In the Japanese traditional performing arts, “breathing” is considered one of the most fundamental techniques. Recent studies reveal that breathing is not synchronized with body action in masters or experts of Kyogen and Kabuki, two Japanese traditional performing arts. This result contrasts sharply with reports that, with growing proficiency, breathing becomes synchronized with body action in sports and Western dance. Bunraku, also one of the Japanese traditional performing arts, is a form of puppet theater in which three puppeteers cooperatively maneuver one puppet. Bunraku thus has characteristics different from Kyogen and Kabuki: the body (the puppet) that performs the actions is different from the bodies (the puppeteers) that control them. We can therefore expect to find, in Bunraku, a relation between body action and breathing different from that in Kyogen and Kabuki. In this paper, we clarified the relation between body action and breathing in Bunraku puppeteers and compared it with that found in Kyogen and Kabuki. Two Bunraku puppeteers who differed in experience (one puppeteer’s career spanned 31 years, the other’s 13 years) participated in our experiment. We asked them to execute three tasks: the first was to perform basic actions called Kata with a familiar puppet, the second was to perform the same basic actions with an unfamiliar puppet, and the third was to perform an actual Bunraku play both to shamisen music and to narration by a Tayu. In order to clarify whether a puppeteer’s breathing was synchronized with his body action, we investigated the correspondence between his breathing phases and the puppet’s motions in performance, as well as the periodicity and stability of his breathing, by computing the autocorrelation of the breathing curves and applying Fourier analysis to them.
As a result, breathing was found to be less synchronized with body action for the more experienced puppeteer (31-year career) than for the less experienced one (13-year career). In addition, when they executed the first and third tasks, the more experienced puppeteer showed more periodic and stable breathing patterns than the less experienced puppeteer did. These findings are consistent with the previous findings in Kyogen and Kabuki. On the other hand, no clear difference in breathing pattern between the two puppeteers was found in the second task, which is not necessarily consistent with the findings in Kyogen and Kabuki. Together with the previous findings, these results suggest that a common breathing technique may be used across the Japanese traditional performing arts of Kyogen, Bunraku, and Kabuki.
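The general analysis the abstract describes, assessing the periodicity and stability of a breathing curve via autocorrelation and Fourier analysis, can be sketched as follows. The signal here is synthetic (a noisy 0.25 Hz sinusoid standing in for a breathing curve), and the sampling rate and preprocessing are assumptions, not details from the study.

```python
import numpy as np

# Sketch: estimating the period of a breathing curve by autocorrelation
# and by the dominant Fourier component. The signal is synthetic; the
# study's actual recordings and preprocessing are not reproduced here.
np.random.seed(0)
fs = 20.0                      # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # 60 s of data
breath = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)

# Autocorrelation: a strongly periodic signal shows a clear peak at its period.
x = breath - breath.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]                                   # normalize so acf[0] == 1
lag_peak = np.argmax(acf[int(fs):]) + int(fs)   # skip the zero-lag region
period_acf = lag_peak / fs                      # seconds per breath cycle

# Fourier analysis: the dominant spectral peak gives the breathing frequency.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
dominant_freq = freqs[np.argmax(spectrum)]      # Hz
```

For a stable, periodic breathing pattern the two estimates agree (here a period of about 4 s, i.e., 0.25 Hz); an irregular pattern would show a flatter autocorrelation and a broader spectrum.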
The role of phonology in visual word recognition has been widely researched. Specifically, it is worth investigating whether the phonological processing of Japanese kanji is the same as that of alphabetic writing systems. The current study systematically examined articulatory suppression effects. Although articulatory suppression is a research tool often used to explore phonological processing in reading, it does not impair all types of phonological processing. Experiments 1A and 1B examined whether articulatory suppression disrupts rhyme judgments. Participants were shown pairs of two-kanji compound words and asked to judge whether they contained the same vowel. In both experiments, participants made more errors under the articulatory suppression condition. Experiments 2A and 2B examined whether articulatory suppression disrupts homophone judgments. The stimuli of Experiment 2A were the same as the experimental stimuli of Experiment 1A, and the results showed no articulatory suppression effect. In Experiment 2B, the non-homophone pairs were phonologically similar, and the results suggest that articulatory suppression had some interference effect on homophone judgment in this case. The articulatory suppression effect on the phonological processing of two-kanji words was thus similar to that found for alphabetic writing systems: articulatory suppression appears to impair the segmentation process, irrespective of task type.
The causal approach conflicts with the associative approach on the relation between observation and intervention in causal inference. Causal Bayes nets are unique in that they not only provide a common basis for observational and interventional knowledge but also predict the ability to derive interventional inferences from observational learning and observational inferences from interventional learning. Two experiments were therefore conducted to test their psychological validity. In Experiment 1, participants were informed about the causal structure of four variables, requested to learn the strength of the causal relations from passive observation, and asked to make probabilistic inferences about observation and intervention. The results replicated the previous finding that people can derive correct predictions about observation and intervention after observational trial-by-trial learning (Meder, Hagmayer, & Waldmann, 2008). In Experiment 2, in which participants learned the causal relations by active intervention, the results revealed inadequate sensitivity to the differences between observation and intervention in causal inference. Moreover, a comparison between observational learning (Exp. 1) and interventional learning (Exp. 2) suggested that observation leads to more accurate estimates than intervention. Most of these results are consistent with the predictions of causal Bayes nets theory. The differences between observation and intervention are discussed.
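The distinction the experiments probe, predicting from observation, P(Y | X = x), versus from intervention, P(Y | do(X = x)), can be illustrated on a minimal common-cause network C → X, C → Y. The structure and all probabilities below are invented for illustration and are unrelated to the four-variable networks used in the experiments.

```python
# Observation vs. intervention in a minimal causal Bayes net with a
# common cause: C -> X and C -> Y. All probabilities are illustrative.

P_C = 0.5                        # P(C = 1)
P_X_given_C = {1: 0.9, 0: 0.1}   # P(X = 1 | C)
P_Y_given_C = {1: 0.8, 0: 0.2}   # P(Y = 1 | C)

# Observational inference: seeing X = 1 is evidence about C, hence about Y.
# Apply Bayes' rule for P(C = 1 | X = 1), then marginalize over C.
p_x1 = P_X_given_C[1] * P_C + P_X_given_C[0] * (1 - P_C)
p_c1_given_x1 = P_X_given_C[1] * P_C / p_x1
p_y1_obs = (P_Y_given_C[1] * p_c1_given_x1
            + P_Y_given_C[0] * (1 - p_c1_given_x1))

# Interventional inference: do(X = 1) severs the C -> X arrow, so setting X
# carries no information about C, and Y keeps its prior distribution.
p_y1_do = P_Y_given_C[1] * P_C + P_Y_given_C[0] * (1 - P_C)

print(p_y1_obs)  # 0.74: observing X = 1 raises belief in Y = 1
print(p_y1_do)   # 0.5: intervening on X leaves Y at its base rate
```

Deriving both quantities from one learned network is exactly the capacity the experiments test: a purely associative learner would treat the two questions identically.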