This study investigates: (1) whether people prefer robots over humans as communication partners for different purposes and in different situations in daily life, in conjunction with the robots' appearance; and (2) the influence of social interaction avoidance on people's preference for robots over humans, again in conjunction with the robots' appearance. Results showed that a certain number of people preferred robots as communication partners for many purposes and in many situations. In addition, a robot whose appearance resembled a human was generally preferred. On the other hand, people with high social anxiety preferred robots with a mechanical appearance as communication partners for many purposes and in many situations. These findings suggest that a robot's appearance may be a substantial factor when introducing robots into daily life.
Identifying antecedents in anaphoric relationships is considered a necessary elemental technique for achieving high accuracy in natural language processing tasks such as robot dialogue and question answering. We developed AnasysD, a more accurate anaphora resolution system, by analyzing demonstrative-pronoun anaphora on the basis of word-sense similarity computed with the semantic analysis system SAGE. To quantify antecedent likelihood, we defined 12 kinds of features, including the co-occurrence similarity between the antecedent clause and the clause containing the anaphor, as well as another likelihood over two-dimensional features such as the upper-level concept classification of the anaphor and the deep case of the antecedent. We used the NAIST text corpus to learn the probability distribution of the correct-antecedent rate with the naive Bayes method, and selected the clause with the highest likelihood as the antecedent. Evaluation by 5-fold cross-validation achieved an accuracy of 63.42%.
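The antecedent-selection step described above can be sketched as follows. This is a minimal illustration of scoring candidate clauses with a naive Bayes model and picking the highest-likelihood one; the feature names, probability values, and example clauses are all hypothetical, not the actual 12 features learned from the NAIST text corpus.

```python
import math

# Hypothetical conditional probabilities P(feature | antecedent) and
# P(feature | non-antecedent), as would be estimated from an annotated corpus.
P_FEAT_GIVEN_ANT = {("deep_case", "agent"): 0.6, ("deep_case", "object"): 0.4,
                    ("cooccur_sim", "high"): 0.7, ("cooccur_sim", "low"): 0.3}
P_FEAT_GIVEN_NOT = {("deep_case", "agent"): 0.3, ("deep_case", "object"): 0.7,
                    ("cooccur_sim", "high"): 0.2, ("cooccur_sim", "low"): 0.8}
P_ANT = 0.2  # prior probability that a candidate clause is the antecedent

def nb_score(features):
    """Naive Bayes log-odds that a candidate with these features is the antecedent."""
    score = math.log(P_ANT) - math.log(1 - P_ANT)
    for f in features:
        score += math.log(P_FEAT_GIVEN_ANT[f]) - math.log(P_FEAT_GIVEN_NOT[f])
    return score

def pick_antecedent(candidates):
    """Return the candidate clause with the highest naive Bayes likelihood."""
    return max(candidates, key=lambda c: nb_score(c["features"]))

candidates = [
    {"clause": "Taro bought a book",
     "features": [("deep_case", "agent"), ("cooccur_sim", "high")]},
    {"clause": "it was raining",
     "features": [("deep_case", "object"), ("cooccur_sim", "low")]},
]
best = pick_antecedent(candidates)  # the clause with the highest likelihood
```

In the full system the conditional probabilities would be learned from corpus counts rather than fixed by hand.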
Recently, educational-support robots, which support learning, have been researched and developed. Most previous studies focused on the learning effect of such robots for learners studying content that is new to them. Thus, the learning effect of the robots for learners who review content they have already learned remained unknown. This paper investigates the learning effect of robots in collaborative learning with learners who review learning content. In this study, review was defined as the learner studying exactly the same content he/she had learned once before. Previous educational research has reported that learners who review content can memorize it effectively, and that the learning effect of collaborative learning with robots is higher than that of existing learning methods. We therefore expect collaborative learning with robots to be effective for learners who review content. Moreover, because such learners have already studied the content once, we expect robots that provide hints to improve the learning effect more than robots that teach the content directly. This paper therefore investigates the learning effect of hint-providing robots in collaborative learning with reviewing learners. Specifically, the experiment quantitatively evaluates the learners' memory of the content after a certain period of time has passed since the collaborative learning session.
To take users' emotions and feelings into account in human-computer interaction systems, it is important to understand each user's personal preferences. However, it is difficult to grasp personal preference information completely, because preferences differ among people and change easily with newly obtained knowledge and experiences. Although there are some databases of preferences and evaluations, most of them deal with "general" preference information. Nevertheless, people can estimate whether a partner likes an object from his/her utterances. For example, when a person happily says "X won the championship," we can estimate that he/she likes X. On the other hand, when a person gloomily says "X won the championship," we can estimate that he/she does not like X. In this paper, we propose a method to estimate the like-dislike polarity toward an object in an utterance by using such heuristics, on the basis of the case-frame structure of the utterance and the speaker's emotion. The heuristics are expressed by the Emotion Generating Calculations (EGC) method. The proposed method is applied to utterances in which the speaker expresses pleasure or displeasure. In the experiment, the proposed method calculated the like-dislike polarity of a word in an utterance using the speaker's emotion as estimated by a participant, and the calculated polarity was compared with the polarity manually estimated by the same participant. The precision and recall of the emotion estimation process were 0.76 and 0.88, respectively.
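The core heuristic from the championship example above can be sketched in a few lines. This is an illustrative simplification, not the actual EGC implementation: an event that is favorable to an object X combined with an expression of pleasure suggests the speaker likes X, while the same favorable event combined with displeasure suggests dislike. The small predicate lexicon is an assumption for the example.

```python
# Assumed toy lexicons of event predicates; a real system would derive event
# favorability from case-frame analysis rather than a fixed word list.
FAVORABLE_PREDICATES = {"win", "succeed"}
UNFAVORABLE_PREDICATES = {"lose", "fail"}

def like_dislike(predicate, speaker_emotion):
    """Return 'like', 'dislike', or None for the event's subject X.

    speaker_emotion is 'pleasure' or 'displeasure'; the polarity is the
    product of the event's valence for X and the speaker's emotional valence.
    """
    if predicate in FAVORABLE_PREDICATES:
        event_valence = +1
    elif predicate in UNFAVORABLE_PREDICATES:
        event_valence = -1
    else:
        return None  # the heuristic does not apply to this predicate
    emotion_valence = +1 if speaker_emotion == "pleasure" else -1
    return "like" if event_valence * emotion_valence > 0 else "dislike"

# "X won the championship," said happily   -> the speaker likes X
print(like_dislike("win", "pleasure"))     # like
# "X won the championship," said gloomily  -> the speaker dislikes X
print(like_dislike("win", "displeasure"))  # dislike
```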
Learning an action from others requires inferring their underlying intentions. Psychological studies have reported behavioral evidence that young children infer others' underlying intentions by observing their actions. The objective of the present study is to propose a mechanistic account of how intention inference is possible through observation of others' actions. For this purpose, we performed a series of simulations in which two agents control pendulums for different tasks and goals, and analyzed which types of features are informative for inferring their latent intentions. Our analysis showed that a type of fractal dimension of the pendulum movements is sufficiently informative to classify the types of agents. Given its invariant nature, our results suggest that fine-grained movement patterns such as the fractal dimension reflect the structure of the underlying intentions.
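To make the movement feature concrete, the following is a minimal sketch of one common fractal-dimension estimator for a 1-D time series, Higuchi's method; the abstract does not specify which estimator was used, so this is an assumed stand-in. A smooth trajectory yields a dimension near 1, while an irregular one approaches 2.

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D series by Higuchi's method."""
    n = len(x)
    log_inv_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            n_i = (n - m - 1) // k  # number of steps of size k from offset m
            if n_i == 0:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_i + 1))
            # normalized curve length for this offset and scale
            lengths.append(dist * (n - 1) / (n_i * k) / k)
        log_inv_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k) is the dimension
    mean_x = sum(log_inv_k) / len(log_inv_k)
    mean_y = sum(log_l) / len(log_l)
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(log_inv_k, log_l))
    den = sum((a - mean_x) ** 2 for a in log_inv_k)
    return num / den

fd_line = higuchi_fd([float(i) for i in range(200)])   # smooth: close to 1
rng = random.Random(0)
fd_noise = higuchi_fd([rng.random() for _ in range(200)])  # irregular: near 2
```

In the study's setting, such an estimate computed over pendulum trajectories would serve as the classification feature distinguishing agent types.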
Recently, educational-support robots, which support learning, have attracted much attention. In this study, we focus on educational-support robots (teacher-type robots) that teach learners as a teacher would. In conventional research on teacher-type robots, the main role of the robot is only to teach how to solve questions and to explain the learning content. With such robots, the learner may not actually learn the content, because he/she comes to depend on the robot's support. In this research, we therefore draw on cognitive apprenticeship theory to prevent this problem. In cognitive apprenticeship, the support is changed according to the learner's current learning situation. Previous studies have reported that pedagogy based on cognitive apprenticeship theory can improve learners' learning skills. We therefore expect learners to improve their learning skills when the robot teaches how to solve questions on the basis of cognitive apprenticeship theory. In this paper, we investigate the effects of educational-support robots based on cognitive apprenticeship theory in collaborative learning with junior high school students.
In this study, we analyzed humans' movement trajectories in an encounter scene on the basis of a proposed model. The model agent possesses preferences for a relationship with a target, and associates the intensity of aggressive and passive involvement with the relative distance and relative angle of the partner. In the experimental scenario, participants approached a target (a mannequin) to ask for directions. As a result, the participants followed curved trajectories deviating from the target's frontal direction rather than straight trajectories. The participants also adjusted the timing at which they started speaking according to their assumption about the target's internal state, which was set by experimental suggestion. The curved approach trajectories observed in humans can be generated approximately by the proposed model. We discuss the reasons for this approaching behavior on the basis of the model's internal state. The behavior generation of the proposed model and the estimation of its internal state are thus useful for designing human-robot interaction in encounter scenes.
It is well known that prior knowledge and beliefs substantially influence our attitudes toward another person in interpersonal communication. In human-robot interaction, by contrast, we often cannot rely on confirmed prior knowledge and beliefs, because robots do not have commonly shared identities. Hence, it is still unclear how prior knowledge and beliefs affect our attitudes toward robots. In this study, we investigated how prior knowledge and beliefs influence our attitude toward an android of Soseki Natsume, one of the most famous novelists in Japan. As a result, we found that prior knowledge about Soseki was weakly related to the subjects' sense of the android's reality. Furthermore, we found that awe of artificial intelligence was strongly related to that sense of reality.
As one method of keeping communication between people continuous and smooth, it is effective for a conversation support system to suggest topics under appropriate conditions. Such a system constantly observes the users' state during conversation and decides whether or not to suggest a topic. In this study, we conducted a conversation experiment aimed at gaining new insight into topic suggestion toward the development of a conversation support system. Moreover, to objectively measure each user's state of active conversation and lapses in conversation, we analyzed the LF/HF ratio, which reflects the balance between sympathetic and parasympathetic nervous activity, calculated from heart rate variability during conversation.
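The LF/HF computation above follows a standard heart-rate-variability pipeline; the following is a minimal sketch under assumed processing choices (resampling the RR-interval tachogram at 4 Hz and summing DFT power in the conventional LF 0.04-0.15 Hz and HF 0.15-0.40 Hz bands), not necessarily the authors' exact method.

```python
import math

FS = 4.0  # assumed resampling frequency for the tachogram, in Hz

def resample_rr(rr, fs=FS):
    """Linearly interpolate RR intervals (seconds) onto an even time grid."""
    t = [0.0]
    for interval in rr:
        t.append(t[-1] + interval)
    times = t[1:]          # time of each beat
    grid_values = []
    u = times[0]
    while u <= times[-1]:
        j = 0
        while times[j + 1] < u:
            j += 1
        w = (u - times[j]) / (times[j + 1] - times[j])
        grid_values.append(rr[j] * (1 - w) + rr[j + 1] * w)
        u += 1.0 / fs
    return grid_values

def band_power(x, fs, lo, hi):
    """Sum of DFT power over frequency bins in [lo, hi)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]  # remove DC before the transform
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            total += (re * re + im * im) / n
    return total

def lf_hf_ratio(rr):
    """LF/HF ratio of an RR-interval series, LF 0.04-0.15 Hz, HF 0.15-0.40 Hz."""
    x = resample_rr(rr)
    return band_power(x, FS, 0.04, 0.15) / band_power(x, FS, 0.15, 0.40)

# Synthetic tachograms: ~0.8 s beats modulated at 0.1 Hz (LF) or 0.3 Hz (HF)
ratio_slow = lf_hf_ratio(
    [0.8 + 0.05 * math.sin(2 * math.pi * 0.1 * 0.8 * i) for i in range(300)])
ratio_fast = lf_hf_ratio(
    [0.8 + 0.05 * math.sin(2 * math.pi * 0.3 * 0.8 * i) for i in range(300)])
```

A higher ratio is conventionally read as relative sympathetic dominance; in practice a library PSD estimator (e.g. Welch's method) would replace the plain DFT.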