人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence)
Online ISSN: 1346-8030
Print ISSN: 1346-0714
ISSN-L: 1346-0714
Volume 33, Issue 1
Showing 1-8 of the 8 articles in the selected issue
Original Paper
  • 田村 浩一郎, 上野山 勝也, 飯塚 修平, 松尾 豊
    2018, Volume 33, Issue 1, p. A-H51_1-11
    Published: 2018/01/01
    Released: 2018/01/05
    Journal Free Access

    In order to encourage the flow of individual assets into the Japanese market through long-term investment, it is important to evaluate the stock values of companies, because stock prices are determined not only by internal values, which are independent of other companies, but also by market fundamentals. However, few studies in the machine learning community address this problem, although there are many studies on predicting stock price trends. These studies use a single-factor approach (textual or numerical) and focus on internal values only. We propose a model that combines the two major financial approaches to evaluating stock values: technical analysis and fundamental analysis. The technical analysis is conducted using Long Short-Term Memory with technical indexes as input data, while the fundamental analysis is conducted transversely and relatively by creating a program that retrieves financial statement data for all listed companies in Japan and stores them in a database. In the experiments, compared to technical analysis alone, the proposed model's classification accuracy was 11.92% higher and the relative error of its regression was 3.77% smaller on average. Compared to single-factor approaches, its classification accuracy was 6.16% higher and the relative error of its regression was 3.22% smaller on average. The proposed model has the potential to be combined with other prediction methods, such as textual approaches or traditional financial approaches, which would improve its accuracy and increase its practicality.
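
The abstract above combines sequential technical indicators (encoded by an LSTM) with cross-sectional fundamental features derived from financial statements. The following is a minimal, hypothetical sketch of how such a hybrid model could be wired up in PyTorch; the layer sizes, feature counts, and the two-headed classification/regression output are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: an LSTM encodes a window of technical indexes, and its
# last hidden state is concatenated with fundamental features computed from
# financial statements before shared classification/regression heads.
import torch
import torch.nn as nn

class HybridStockModel(nn.Module):
    def __init__(self, n_technical: int, n_fundamental: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_technical, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_fundamental, hidden),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, 2)   # e.g. up / down
        self.regressor = nn.Linear(hidden, 1)    # e.g. relative stock value

    def forward(self, technical_seq, fundamental_vec):
        # technical_seq: (batch, time, n_technical) window of technical indexes
        # fundamental_vec: (batch, n_fundamental) statement-derived features
        _, (h_n, _) = self.lstm(technical_seq)
        joint = self.head(torch.cat([h_n[-1], fundamental_vec], dim=-1))
        return self.classifier(joint), self.regressor(joint)

# Example with random tensors standing in for real market data.
model = HybridStockModel(n_technical=8, n_fundamental=12)
logits, value = model(torch.randn(4, 30, 8), torch.randn(4, 12))
print(logits.shape, value.shape)  # torch.Size([4, 2]) torch.Size([4, 1])
```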

Special Issue Papers: "Intelligent Dialogue Systems"
  • 石川 葉子, 水上 雅博, 吉野 幸一郎, Sakti Sakriani, 鈴木 優, 中村 哲
    2018, Volume 33, Issue 1, p. DSH-B_1-9
    Published: 2018/01/01
    Released: 2018/01/31
    Journal Free Access

    Using emotional expressions in a conversation is an efficient way to convey one's thoughts. The emotional expressions of a persuader have a strong impact on the recipient's attitude in a negotiation. Studies of persuasive dialog systems, which try to lead users toward the system's specific goals, show that incorporating users' emotional factors can enhance the system's ability to persuade users. However, in a human-human negotiation, the persuader can achieve better outcomes not only by considering the emotion of the other person but also by expressing his or her own emotions. In this paper, we propose an example-based persuasive dialog system with the capability to express emotion. The proposed dialog system is trained on a newly collected corpus with statistical learning, in which emotional states and the user's acceptance of the persuasion are annotated. Experimental results obtained through crowdsourcing suggest that the system using emotional expressions has the potential to effectively persuade users who prefer emotional expressions to be used.

  • 高津 弘明, 福岡 維新, 藤江 真也, 林 良彦, 小林 哲則
    2018, Volume 33, Issue 1, p. DSH-C_1-24
    Published: 2018/01/01
    Released: 2018/03/30
    Journal Free Access

    This paper describes a spoken dialogue system for accommodating a user's information behaviors with various levels of information need. Given a set of same-topic news articles, our system compiles an utterance plan that consists of a primary plan for delivering the main news content and associated subsidiary plans for supplementing it. A primary plan is generated by applying text summarization and style conversion techniques, while the subsidiary plans are compiled by considering potential user/system interactions. To make this mechanism work, we first classified a user's possible passive/active behaviors and then designed the corresponding system actions. We empirically confirmed that our system was able to deliver news content smoothly while dynamically adapting to changes in the user's intention level. The smoothness of a conversation can be attributed to the pre-compiled utterance plan.
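
As a reading aid, here is a small, illustrative data structure for the pre-compiled utterance plan described above: a primary plan carries the summarized, style-converted news content, and subsidiary plans are attached to anticipated user behaviors. The class names, behavior labels, and the `react` method are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a pre-compiled utterance plan: a primary plan plus
# subsidiary plans keyed by anticipated user behaviors, so the system can
# branch at run time without re-planning.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SubsidiaryPlan:
    trigger: str            # anticipated user behavior, e.g. "ask_when"
    utterances: List[str]   # supplementary content to deliver

@dataclass
class UtterancePlan:
    primary: List[str]      # summarized, style-converted main content
    subsidiary: Dict[str, SubsidiaryPlan] = field(default_factory=dict)

    def react(self, user_behavior: str) -> List[str]:
        """Return supplementary utterances if a subsidiary plan matches,
        otherwise continue with the primary plan."""
        plan = self.subsidiary.get(user_behavior)
        return plan.utterances if plan else self.primary

plan = UtterancePlan(
    primary=["A new AI conference was announced today.", "It will be held in Tokyo."],
    subsidiary={"ask_when": SubsidiaryPlan("ask_when", ["It takes place next June."])},
)
print(plan.react("ask_when"))  # -> ["It takes place next June."]
```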

  • Nurul Lubis, Sakriani Sakti, Koichiro Yoshino, Satoshi Nakamura
    2018, Volume 33, Issue 1, p. DSH-D_1-10
    Published: 2018/01/01
    Released: 2018/01/31
    Journal Free Access

    To fully mimic the naturalness of human interaction in Human-Computer Interaction (HCI), emotion is an essential aspect that should not be overlooked. Emotion allows for rich and meaningful human interaction. In communicating, not only do we express our emotional state, but we are also affected by our conversational counterpart. However, existing works have largely focused only on occurrences of emotion, through recognition and simulation. The relationship between an utterance of a speaker and the emotional response it triggers has not yet been closely examined. Observing and incorporating the underlying process that causes changes of emotion can provide useful information for dialogue systems in making more emotionally intelligent decisions, such as taking proper action with regard to the user's emotion and being aware of the emotional implications of their responses. To bridge this gap, in this paper we tackle three main tasks: 1) recognition of emotional states, 2) analysis of social-affective events in spontaneous conversational data, to capture the relationship between actions taken in discourse and the emotional responses that follow, and 3) prediction of emotional triggers and responses in a conversational context. The proposed study differs from existing works in that it focuses on the change of emotion (emotional response) and its cause (emotional trigger) on top of the occurrence of emotion itself. The analysis and experimental results are reported in detail in this paper, showing promising initial results for future work and development.

  • 大野 航平, 武田 龍, ニコルズ エリック, 中野 幹生, 駒谷 和範
    2018, Volume 33, Issue 1, p. DSH-E_1-10
    Published: 2018/01/01
    Released: 2018/03/01
    Journal Free Access

    This paper proposes a lexical acquisition framework for a closed-domain chatbot. It learns the ontological categories of unknown terms in dialogues through implicit confirmation instead of explicit questions that disrupt the flow of conversation. Our system generates an implicit confirmation request containing a category prediction for an unknown term, which may be incorrect. It then acquires the category only if its prediction was correct, by checking various cues that appear during the confirmation process. We divide this process into two steps. First, we propose a two-tiered method for predicting unknown term categories that attempts to predict the most specific category and backs off to a more general category when it is insufficiently confident in its prediction. Direct evaluation showed that this two-tiered method makes correct category predictions 54.4% more often than a method that predicts only the most specific category. Next, we propose a method for identifying whether the categories included in confirmation requests are correct, using both the user response following the confirmation request and its context. We introduce features derived from an analysis of the confirmation process and construct a classifier from chat data collected with crowdsourcing. We show that the classifier can identify correct categories with a precision of 0.708.
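
The two-tiered prediction step described above can be illustrated with a short, self-contained sketch: predict the most specific category first and back off to a more general parent category when confidence is low. The category hierarchy, the threshold, and the scorer interface here are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of two-tiered category prediction with backoff.
from typing import Callable, Dict, Tuple

# Hypothetical mapping from specific categories to their general parents.
PARENT: Dict[str, str] = {
    "ramen_shop": "restaurant",
    "sushi_bar": "restaurant",
    "art_museum": "facility",
}

def predict_category(
    term: str,
    specific_scorer: Callable[[str], Tuple[str, float]],
    threshold: float = 0.6,
) -> str:
    """Return the specific category if confident, otherwise back off."""
    category, confidence = specific_scorer(term)
    if confidence >= threshold:
        return category
    return PARENT.get(category, category)  # back off to the general category

# Toy scorer standing in for a trained classifier.
def toy_scorer(term: str) -> Tuple[str, float]:
    return ("ramen_shop", 0.42) if term == "Ichiran" else ("art_museum", 0.91)

print(predict_category("Ichiran", toy_scorer))  # -> "restaurant" (backed off)
print(predict_category("Louvre", toy_scorer))   # -> "art_museum"
```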

  • 井上 昂治, Lala Divesh, 吉井 和佳, 高梨 克也, 河原 達也
    2018, Volume 33, Issue 1, p. DSH-F_1-12
    Published: 2018/01/01
    Released: 2018/01/05
    Journal Free Access

    This article addresses the estimation of engagement level based on listener behaviors such as backchannels, laughing, head nodding, and eye gaze. Engagement is defined as the degree to which a user is interested in and willing to continue the current interaction. When the engagement level is evaluated by multiple annotators, the criteria for annotating it depend on each annotator. We assume that each annotator has his or her own character, which affects how the engagement level is perceived. We propose a latent character model that estimates the engagement level together with the character of each annotator as a latent variable. The experimental results show that the latent character model can predict the engagement label of each annotator with higher accuracy than other models that do not take the character into account.

  • 窪田 智徳, 小川 浩平, 石黒 浩
    2018, Volume 33, Issue 1, p. DSH-G_1-13
    Published: 2018/01/01
    Released: 2018/07/05
    Journal Free Access

    In this study, we developed and evaluated a dialogue system that enables an android robot to chat with users on Niconico Live, a live streaming service provided by Dwango Co., Ltd. On Niconico Live, broadcasters can talk to users who write comments displayed on the video stream. By using Niconico Live chat, we therefore eliminated the speech recognition errors that can occur in spoken conversation. In addition, because many comments are shown simultaneously on the video stream, the dialogue system can keep the conversation consistent by selecting a comment to which it can correctly respond. The dialogue system was designed as a retrieval-based one that finds an appropriate response to the user's utterance in a dialogue corpus. As a first step, we therefore collected a dialogue corpus containing 4,460 pairs of comments and robot responses by teleoperating the android robot while it talked with users. We then completed the dialogue system on Niconico Live by integrating the dialogue corpus into it. To evaluate the performance of the dialogue system, we recorded conversations between the android and users while running the system, showed the recordings to evaluators, and asked them how natural and consistent they found the conversations. The results of the experiment indicate that Niconico Live users perceived the responses of the dialogue system as natural and found chatting with the android entertaining. Through this study, we demonstrated the applicability of the dialogue system on Niconico Live. However, it is difficult to discuss its effectiveness when applying it to other situations or other communication media, such as a humanoid robot or a virtual agent. Therefore, as future work, conducting a comparative experiment might lead to a better understanding of the effectiveness of the dialogue system for androids.
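
The retrieval-based selection described above, answering only the comment the system can match confidently against its corpus, can be sketched roughly as follows. The TF-IDF similarity, the toy corpus, and the selection rule are illustrative assumptions; the actual system uses a corpus of 4,460 comment-response pairs collected by teleoperation.

```python
# Rough sketch: score every incoming viewer comment against a corpus of
# (comment, response) pairs and answer the comment with the best match,
# so the system only responds where it is likely to be consistent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus_pairs = [
    ("what is your name", "I'm an android. Nice to meet you."),
    ("do you like music", "I enjoy listening to pop songs."),
    ("where are you from", "I was built in a robotics lab."),
]
comments = ["do u like music?", "asdf qwerty", "what's your name?"]

vectorizer = TfidfVectorizer()
corpus_vecs = vectorizer.fit_transform([c for c, _ in corpus_pairs])
comment_vecs = vectorizer.transform(comments)

scores = cosine_similarity(comment_vecs, corpus_vecs)
best_comment = scores.max(axis=1).argmax()   # the most answerable comment
best_pair = scores[best_comment].argmax()    # its best-matching corpus entry
print(comments[best_comment], "->", corpus_pairs[best_pair][1])
```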

  • 福田 拓也, 若林 啓
    2018, Volume 33, Issue 1, p. DSH-H_1-10
    Published: 2018/01/01
    Released: 2018/01/05
    Journal Free Access

    The backchannel plays an important role in smooth communication. For dialogue systems, appropriate backchanneling is a significant factor in making conversation more natural. However, many existing dialogue systems have poor backchannel patterns and can only produce simple responses. In this paper, we propose a method for extracting various backchannels that are suitable for the user utterance without restricting the diversity of backchannels. We conduct an experiment that compares the proposed method with two existing methods: a classification-based method and a simple extraction-based method with a message-length limit. The generated responses are evaluated by human workers. The results show that the proposed method generates backchannels that are highly diverse and more appropriate as responses to the user utterance.
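
For context, here is a rough, standard-library-only sketch of a generic extraction-based backchannel selector, closer in spirit to the simple baselines compared above than to the proposed method, whose details are not given in the abstract: score corpus utterances against the user utterance, keep short backchannel-like responses, and sample among the top candidates so the output stays varied. The corpus, the similarity measure, and the length cut-off are illustrative assumptions.

```python
# Toy extraction-based backchannel selection from a small dialogue corpus.
import random
from difflib import SequenceMatcher

corpus = [
    ("I finally passed the exam", "That's great!"),
    ("My train was delayed again this morning", "Oh no, that's rough."),
    ("I started learning the piano", "Wow, nice!"),
    ("I went hiking last weekend", "Sounds fun!"),
]

def backchannel(user_utterance: str, top_k: int = 2, max_len: int = 25) -> str:
    scored = [
        (SequenceMatcher(None, user_utterance.lower(), utt.lower()).ratio(), resp)
        for utt, resp in corpus
        if len(resp) <= max_len          # keep only short, backchannel-like responses
    ]
    scored.sort(reverse=True)
    candidates = [resp for _, resp in scored[:top_k]]
    return random.choice(candidates)     # sample among top candidates for diversity

print(backchannel("I passed my driving exam today"))
```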
