Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Volume 31, Issue 3
Regular
Original Paper
  • Felix Jimenez, Masayoshi Kanoh, Tomohiro Yoshikawa, Takeshi Furuhashi
    Article type: Original Paper (Concept Paper)
    2016 Volume 31 Issue 3 Pages A-F93_1-10
    Published: May 01, 2016
    Released on J-STAGE: May 13, 2016
    JOURNAL FREE ACCESS
    This paper reports on the feasibility of collaborative learning between humans and an educational-support robot. We designed a robot that prompts constructive interaction, which has been regarded as a foundation of collaborative learning and which occurs when two students work on the same question. The robot is therefore designed to alternately perform a speaker role and a listener role so as to elicit constructive interaction with a human partner. In the speaker role, the robot explains its solving method to the partner and solves a question, and its accuracy rate improves as learning progresses. In the listener role, the robot does not solve the question but pays attention to the partner who is solving it. The robot learned together with a college student while solving questions issued by the learning system. The college students used the learning system with the robot for one month and were videotaped during that time to observe how they learned. The results suggest that a robot that alternately solves questions with a human and improves its accuracy rate as learning progresses prompts learners to learn through constructive interaction with it in collaborative learning; that is, learners alternately solve questions with the robot and listen to its explanations. However, learners' interest in the robot decreased once its accuracy rate reached 100% and it kept performing the same actions. In addition, an on-screen agent designed to perform the same actions as the robot did not prompt some learners to learn through constructive interaction: they were put off when the agent could not solve questions correctly and ignored what it said. The same situation occurred for some learners who learned with the robot.
    Download PDF (950K)
  • Sho Yamauchi, Keiji Suzuki
    2016 Volume 31 Issue 3 Pages B-F63_1-9
    Published: May 01, 2016
    Released on J-STAGE: May 25, 2016
    JOURNAL FREE ACCESS
    How a robot can be controlled in a dynamic environment is an important issue. A robot is surrounded by many energy flows, which are often oscillatory and exhibit regularity. Despite having no neural circuits, simple living organisms such as amoebas show homeostatic behavior against such energy flows, because they are constructed as aggregations of oscillators and can synchronize with oscillating objects and certain regular motions. In this study, we focus on amoeba-like dynamics as a possible mechanism for controlling the robot's body motion so as to stabilize the relationship between the robot and its environment through synchronization. We propose an oscillator-aggregation-based robot control mechanism that synchronizes with oscillating objects in the external environment by coupling van der Pol oscillators with a stochastic perturbation (a schematic sketch of such coupled oscillators appears after the article list). In a real-robot experiment, the robot maintained its internal activity fitted to the movement of external objects as a result of synchronization, which was evaluated with multivariate multiscale entropy analysis as an index of the degree of synchronization in complex cases. The synchronization and protective behavior observed in this experiment suggest a possible adaptability or autonomy of the robot, synchronizing its dynamical behavior with the external environment as an emergence of homeostasis.
    Download PDF (1955K)
  • Takaaki Sugiyama, Kotaro Funakoshi, Mikio Nakano, Kazunori Komatani
    2016 Volume 31 Issue 3 Pages C-FB2_1-9
    Published: May 01, 2016
    Released on J-STAGE: May 25, 2016
    JOURNAL FREE ACCESS
    When a robot interacts with users in public spaces, it receives various sounds, such as surrounding noise and users' voices; furthermore, it needs to interact with multiple people at the same time. If the robot incorrectly determines whether it should respond to these sounds, it will erroneously respond to surrounding noise or ignore user utterances directed to it. In this paper, we present a machine-learning-based method for estimating response obligation, i.e., whether the robot should respond to an input sound. We address a problem setting that is closer to interaction in public spaces than those assumed in previous studies: whereas previous studies assume only utterances directed to one of the interlocutors as input sounds, we deal not only with such utterances but also with noises and monologues. To handle these various sounds, our method uses the results of input sound classification together with user behaviors both during the input sound interval and after it. In particular, user behaviors after the interval are introduced as a key factor for improving the estimation accuracy of response obligation, exploiting tendencies such as a user standing still after talking to the robot. We demonstrate that the new features significantly improve the estimation performance. We also investigate performance with various combinations of features and show that the results of input sound classification and the user behaviors after the interval are helpful for the estimation (a schematic sketch of such a feature-based classifier appears after the article list).
    Download PDF (2330K)
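The sketch below illustrates the kind of mechanism described in the second abstract: two diffusively coupled van der Pol oscillators perturbed by noise and driven by an external oscillating signal standing in for a sensed object. It is a minimal sketch only; the coupling topology, parameter values, and the way sensor readings enter the dynamics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coupled_vdp(T=20.0, dt=1e-3, mu=1.0, k=0.5, sigma=0.05,
                drive_amp=0.8, drive_freq=1.2, seed=0):
    """Euler-Maruyama integration of two coupled van der Pol oscillators.

        dx_i = y_i dt
        dy_i = [mu (1 - x_i^2) y_i - x_i + k (x_j - x_i) + F(t)] dt + sigma dW

    F(t) stands in for an oscillating external object (e.g. a sensed motion).
    All parameter values are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x = np.zeros((n, 2))              # "positions" of the two oscillators
    y = np.zeros((n, 2))              # "velocities"
    x[0] = [0.1, -0.2]                # small asymmetric initial condition
    for i in range(n - 1):
        drive = drive_amp * np.sin(2 * np.pi * drive_freq * t[i])
        coupling = k * (x[i, ::-1] - x[i])                   # diffusive coupling
        dy = (mu * (1 - x[i] ** 2) * y[i] - x[i] + coupling + drive) * dt
        dy += sigma * np.sqrt(dt) * rng.standard_normal(2)   # stochastic perturbation
        y[i + 1] = y[i] + dy
        x[i + 1] = x[i] + y[i] * dt
    return t, x

if __name__ == "__main__":
    t, x = coupled_vdp()
    tail = x[-len(x) // 4:]           # discard the transient
    print("correlation of the two oscillators:",
          np.corrcoef(tail[:, 0], tail[:, 1])[0, 1])
```

After the transient, both oscillators phase-lock to each other and to the drive; the paper itself evaluates this kind of synchronization with multivariate multiscale entropy rather than a simple correlation.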
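Similarly, the response-obligation estimation in the third abstract can be pictured as a binary classifier over features derived from input sound classification and from user behavior during and after the sound interval. The feature names, the synthetic data, and the choice of logistic regression below are assumptions made for illustration; the paper's actual features and learner may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical features per input sound interval (names are assumptions):
#   sound_class -- 0: noise, 1: monologue, 2: utterance to the robot, 3: utterance to another person
#   facing_in   -- fraction of the interval during which the user faces the robot
#   facing_out  -- fraction of the time just after the interval during which the user faces the robot
#   still_out   -- 1 if the user stands and keeps still after the interval
rng = np.random.default_rng(0)
n = 400
sound_class = rng.integers(0, 4, n)
facing_in = rng.random(n)
facing_out = rng.random(n)
still_out = rng.integers(0, 2, n)

# Synthetic labels: the robot should mostly respond to robot-directed utterances
# followed by continued attention (this generative rule is invented for the demo).
logit = -2.0 + 3.0 * (sound_class == 2) + 1.5 * facing_out + 1.0 * still_out
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# One-hot encode the sound-class result and stack it with the behavior features.
X = np.column_stack([np.eye(4)[sound_class], facing_in, facing_out, still_out])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Dropping the post-interval features (facing_out, still_out) and retraining gives a rough analogue of the feature-combination comparison the paper reports.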