Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Volume 31, Issue 4
Displaying 1-4 of 4 articles from this issue
Original Paper
  • Takashi Yamaguchi, Koji Inoue, Yoshino Koichiro, Katsuya Takanashi, Ni ...
    2016 Volume 31 Issue 4 Pages C-G31_1-10
    Published: July 01, 2016
    Released on J-STAGE: August 03, 2016
    JOURNAL FREE ACCESS
    There is growing interest in conversational agents and robots that conduct attentive listening. However, current systems generate only the same or a limited range of backchannel forms every time, giving a monotonous impression. This study investigates the generation of a variety of backchannel forms appropriate to the dialogue context, using a corpus of counseling dialogue. First, we annotate all acceptable backchannel form categories, taking into account the permissible variation in backchannels. Second, we analyze how the morphological form of a backchannel relates to linguistic features of the preceding utterance, such as the utterance boundary type and the linguistic complexity. Based on this analysis, we train a machine learning model to predict the backchannel form from the linguistic and prosodic features of the preceding context. This model outperformed a baseline that always outputs the same backchannel form and another baseline that generates backchannels randomly. Finally, subjective evaluations by human listeners show that the proposed method generates backchannels more naturally and conveys a feeling of understanding and empathy.
    Download PDF (520K)
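    The prediction task in this abstract can be framed as multi-class classification. The sketch below is purely illustrative, not the authors' implementation: the feature set (boundary type, complexity, pitch slope), the form categories, and the synthetic data are all hypothetical stand-ins for the annotated corpus, and a simple nearest-centroid classifier stands in for the paper's machine learning model. It is compared against the "always the same form" baseline mentioned in the abstract.

    ```python
    # Hedged sketch: predict a backchannel form category from features of the
    # preceding utterance, vs. a fixed-form baseline. Everything here is a
    # hypothetical stand-in for the paper's corpus, features, and model.
    import math
    import random

    FORMS = ["un", "hai", "hee", "naruhodo"]  # hypothetical form categories

    def synth_example(rng):
        """One synthetic (features, label) pair standing in for annotated data."""
        boundary = rng.randint(0, 1)          # utterance boundary type
        complexity = rng.random()             # linguistic complexity
        f0_slope = rng.uniform(-1.0, 1.0)     # prosodic feature (pitch slope)
        if boundary and complexity > 0.5:
            label = "naruhodo"
        elif boundary:
            label = "hai"
        elif f0_slope > 0:
            label = "hee"
        else:
            label = "un"
        return (boundary, complexity, f0_slope), label

    def fit_centroids(data):
        """Nearest-centroid classifier: mean feature vector per form."""
        sums, counts = {}, {}
        for x, y in data:
            s = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[y] = counts.get(y, 0) + 1
        return {y: [v / counts[y] for v in s] for y, s in sums.items()}

    def predict(centroids, x):
        return min(centroids, key=lambda y: math.dist(x, centroids[y]))

    rng = random.Random(0)
    data = [synth_example(rng) for _ in range(400)]
    train, test = data[:300], data[300:]
    centroids = fit_centroids(train)

    model_acc = sum(predict(centroids, x) == y for x, y in test) / len(test)
    baseline_acc = sum(y == "un" for _, y in test) / len(test)  # fixed form
    print(f"model={model_acc:.2f} baseline={baseline_acc:.2f}")
    ```

    Even on this toy data, a context-aware predictor beats the fixed-form baseline, which mirrors the abstract's comparison at a much smaller scale.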
  • Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda
    2016 Volume 31 Issue 4 Pages D-FB1_1-13
    Published: July 01, 2016
    Released on J-STAGE: August 03, 2016
    JOURNAL FREE ACCESS
    In this paper, we propose constrained independent topic analysis for text mining. Independent topic analysis is a method for extracting mutually independent topics from text data using independent component analysis. It yields the most independent topics available, but these topics may differ from the ones the user wants. For example, suppose the analysis yields three topics, A, B, and C. If the user considers the contents of topics A and B to be close, the user may want to merge them into a single topic D. Conversely, if the user wants to analyze topic A in more detail, the user may want to separate it into topics E and F. A method that can incorporate such user requests is therefore required. To this end, we define Merge Link constraints and Separate Link constraints: a Merge Link constraint merges two topics into a single topic, and a Separate Link constraint separates one topic into two. We propose a method for obtaining highly independent topics that satisfy these constraints. We conducted evaluation experiments on the proposed method, and the results demonstrate the effectiveness of our approach.
    Download PDF (2759K)
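    The two constraint types described in this abstract can be illustrated on topics represented as word-weight vectors. The toy sketch below is not the paper's ICA-based algorithm: the vocabulary, weights, and split mask are invented, and the operations simply show what a Merge Link (two topics into one) and a Separate Link (one topic into two) mean at the level of topic vectors.

    ```python
    # Illustrative only: Merge Link and Separate Link constraints applied to
    # topics as normalized word-weight vectors. The paper's actual method
    # enforces such constraints within independent component analysis.
    import math

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n else v

    def merge_link(topic_a, topic_b):
        """Merge Link: combine topics A and B into a single topic D."""
        return normalize([a + b for a, b in zip(topic_a, topic_b)])

    def separate_link(topic, split_mask):
        """Separate Link: split one topic into topics E and F.
        split_mask (hypothetical here; in practice derived from the user's
        request) marks which vocabulary entries go to the first new topic."""
        e = normalize([w if m else 0.0 for w, m in zip(topic, split_mask)])
        f = normalize([0.0 if m else w for w, m in zip(topic, split_mask)])
        return e, f

    # Toy vocabulary of four terms; weights are made up.
    topic_a = normalize([0.9, 0.1, 0.0, 0.0])
    topic_b = normalize([0.1, 0.9, 0.0, 0.0])
    topic_d = merge_link(topic_a, topic_b)
    topic_e, topic_f = separate_link(topic_d, [True, False, True, False])
    ```

    In this toy setting the separated topics E and F have disjoint supports, so they are orthogonal, a crude analogue of the independence the real method maximizes.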
  • Riichiro Mizoguchi, Tsukasa Hirashima, Tomoya Horiguchi
    Article type: Original Paper (Concept Paper)
    2016 Volume 31 Issue 4 Pages A-F44_1-13
    Published: July 01, 2016
    Released on J-STAGE: August 12, 2016
    JOURNAL FREE ACCESS

    Current methods for teaching about "force and motion" depend on the use of equations and do not place emphasis on adequately supporting an understanding based on causality. One possible reason for this is the lack of a causality-compliant theory that gives a consistent treatment of the problem of action and reaction, the problem of apparent forces like centrifugal force, and so on. By adopting a naive view of causality (causality-based understanding) that agrees with human experience, we constructed a causal theory of force and motion. This theory can serve as the foundation for an educational approach in helping junior high and high school students understand and explain various phenomena related to forces and motion. Using this approach, it will be possible to design and develop educational support methods and systems expected to reduce student misunderstandings. It will also afford the creation of a general-purpose motion simulator with the ability to provide automated causal explanations of physical phenomena.

    Download PDF (1060K)
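    The abstract envisions a motion simulator that produces automated causal explanations of physical phenomena. The toy sketch below is not the authors' theory; it only illustrates the general idea with a single invented update step that emits a causal chain (force causes acceleration, acceleration changes velocity, velocity changes position) alongside the numerical result.

    ```python
    # Toy illustration: one Euler step of motion that also emits a
    # causality-style explanation. Numbers and wording are invented; the
    # paper's causal theory is far richer than this chain.
    def step_with_explanation(mass, force, v, x, dt):
        a = force / mass                 # Newton's second law: F = m a
        v_new = v + a * dt               # acceleration changes velocity
        x_new = x + v_new * dt           # velocity changes position
        explanation = (
            f"A force of {force} N acts on the {mass} kg body, causing an "
            f"acceleration of {a} m/s^2; over {dt} s this brings the velocity "
            f"to {v_new} m/s, which moves the body to x = {x_new} m."
        )
        return v_new, x_new, explanation

    v, x, expl = step_with_explanation(mass=2.0, force=4.0, v=0.0, x=0.0, dt=1.0)
    print(expl)
    ```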
  • Takuya Hiraoka, Kallirroi Georgila, Elnaz Nouri, David Traum, Satoshi ...
    2016 Volume 31 Issue 4 Pages B-FC1_1-14
    Published: July 01, 2016
    Released on J-STAGE: September 09, 2016
    JOURNAL FREE ACCESS

    Trading dialogs are a kind of negotiation in which an exchange of ownership of items is discussed, and these kinds of dialogs are pervasive in many situations. Recently, there has been an increasing amount of research on applying reinforcement learning (RL) to negotiation dialog domains. However, in previous research, the focus was on negotiation dialog between two participants only, ignoring cases where negotiation takes place between more than two interlocutors. In this paper, as a first study on multi-party negotiation, we apply RL to a multi-party trading scenario where the dialog system (learner) trades with one, two, or three other agents. We experiment with different RL algorithms and reward functions. We use Q-learning with linear function approximation, least-squares policy iteration, and neural fitted Q iteration. In addition, to make the learning process more efficient, we introduce an incremental reward function. The negotiation strategy of the learner is learned through simulated dialog with trader simulators. In our experiments, we evaluate how the performance of the learner varies depending on the RL algorithm used and the number of traders. Furthermore, we compare the learned dialog policies with two strong hand-crafted baseline dialog policies. Our results show that (1) even in simple multi-party trading dialog tasks, learning an effective negotiation policy is not a straightforward task and requires a lot of experimentation; and (2) the use of neural fitted Q iteration combined with an incremental reward function produces negotiation policies as effective or even better than the policies of the two strong hand-crafted baselines.

    Download PDF (1310K)
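    Two ingredients named in this abstract, Q-learning with linear function approximation and an incremental reward function, can be sketched on a much smaller trading problem. The toy below is not the paper's experimental setup: the state features, offer distribution, episode length, and hyperparameters are all invented. A simulated trader repeatedly offers item swaps of varying value, the learner accepts or rejects, and the reward is given incrementally as the change in holding value at each step rather than only at the end of the dialog.

    ```python
    # Hedged sketch: Q-learning with linear function approximation and an
    # incremental reward on a toy one-learner trading loop. All values and
    # parameters are invented stand-ins for the paper's multi-party setup.
    import random

    rng = random.Random(0)
    ACTIONS = [0, 1]                    # 0 = reject offer, 1 = accept offer
    ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1  # step size, discount, exploration

    def features(offer_gain):
        return [1.0, offer_gain]        # bias + value gained by the swap

    w = {a: [0.0, 0.0] for a in ACTIONS}  # one weight vector per action

    def q(a, phi):
        return sum(wi * xi for wi, xi in zip(w[a], phi))

    for episode in range(2000):
        gains = [rng.uniform(-1.0, 1.0) for _ in range(5)]  # 5 offers/dialog
        for step in range(5):
            phi = features(gains[step])
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q(b, phi))
            reward = gains[step] if a == 1 else 0.0   # incremental reward
            if step < 4:
                next_phi = features(gains[step + 1])
                target = reward + GAMMA * max(q(b, next_phi) for b in ACTIONS)
            else:
                target = reward                        # end of dialog
            td_error = target - q(a, phi)
            w[a] = [wi + ALPHA * td_error * xi for wi, xi in zip(w[a], phi)]

    # After training, the learner should accept clearly good offers
    # and reject clearly bad ones.
    accepts_good = q(1, features(0.8)) > q(0, features(0.8))
    rejects_bad = q(1, features(-0.8)) < q(0, features(-0.8))
    ```

    Dense per-step rewards like this give the learner a training signal at every exchange, which is the intuition behind the incremental reward function the abstract credits with making learning more efficient.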