In order to encourage the flow of individual assets into the Japanese market through long-term investment, it is important to evaluate the stock values of companies, because stock prices are determined not only by internal values, which are independent of other companies, but also by market fundamentals. However, few studies have been conducted in this area in the machine learning community, whereas there are many studies on predicting stock price trends. These studies use a single-factor approach (textual or numerical) and focus on internal values only. We propose a model that combines the two major financial approaches to evaluating stock values: technical analysis and fundamental analysis. The technical analysis is conducted using a Long Short-Term Memory (LSTM) network with technical indexes as input data. The fundamental analysis, on the other hand, is conducted transversely and relatively by a program that retrieves the financial statements of all listed companies in Japan and stores them in a database. In our experiments, compared to technical analysis alone, the proposed model's classification accuracy was 11.92% higher and its regression relative error was 3.77% smaller on average. In addition, compared to single-factor approaches, its classification accuracy was 6.16% higher and its regression relative error was 3.22% smaller on average. The proposed model has the potential to be combined with other prediction methods, such as textual approaches or even traditional financial approaches, which would further improve its accuracy and practicality.
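The abstract does not specify which technical indexes feed the LSTM; as a minimal sketch, two common indicators (simple moving average and RSI) that could serve as such input features might be computed as follows. The function names and the choice of indicators are my own assumptions, not the authors' method:

```python
def sma(prices, window):
    """Simple moving average: mean price over each sliding window."""
    return [sum(prices[i:i + window]) / window
            for i in range(len(prices) - window + 1)]

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes.

    RSI = 100 - 100 / (1 + avg_gain / avg_loss); ranges from 0 to 100.
    """
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(d, 0) for d in deltas[-period:]]
    losses = [max(-d, 0) for d in deltas[-period:]]
    avg_gain = sum(gains) / period
    avg_loss = sum(losses) / period
    if avg_loss == 0:
        return 100.0  # prices only rose in the window
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```

Sequences of such indicator values, computed per trading day, are the kind of time-series input an LSTM classifier or regressor would consume.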
Using emotional expressions in a conversation is an efficient way to convey one's thoughts. In a negotiation, the persuader's emotional expressions have a strong impact on the recipient's attitude. Studies of persuasive dialog systems, which try to lead users toward the system's specific goals, show that incorporating users' emotional factors can improve the system's ability to persuade. However, in human-human negotiation, the persuader can achieve better outcomes not only by considering the other person's emotion but also by expressing his or her own emotions. In this paper, we propose an example-based persuasive dialog system capable of expressing emotion. The proposed dialog system is trained on a newly collected corpus with statistical learning, in which emotional states and users' acceptance of the persuasion are annotated. Experimental results obtained through crowdsourcing suggest that the system using emotional expressions can effectively persuade users who prefer to be addressed with emotional expressions.
This paper describes a spoken dialogue system that accommodates users' information behaviors at various levels of information need. Given a set of same-topic news articles, our system compiles an utterance plan that consists of a primary plan for delivering the main news content and associated subsidiary plans for supplementing it. The primary plan is generated by applying text summarization and style conversion techniques. The subsidiary plans are compiled by considering potential user/system interactions. To make this mechanism work, we first classified users' possible passive/active behaviors and then designed the corresponding system actions. We empirically confirmed that our system was able to deliver the news content smoothly while dynamically adapting to changes in users' intention levels. The smoothness of a conversation can be attributed to the pre-compiled utterance plan.
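The primary/subsidiary plan structure can be pictured as a small data structure that maps anticipated user behaviors to supplementary utterances, falling back to the primary plan otherwise. This is an illustrative sketch under my own assumptions about the representation, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class UtterancePlan:
    # Summarized, style-converted sentences delivering the main news content.
    primary: list
    # Maps an anticipated user behavior (e.g. a follow-up question)
    # to the subsidiary utterances that supplement the main content.
    subsidiary: dict = field(default_factory=dict)

    def respond(self, user_behavior):
        """Pick the subsidiary plan matching the observed behavior,
        or continue with the primary plan if none applies."""
        return self.subsidiary.get(user_behavior, self.primary)
```

Because both plan types are compiled before the conversation starts, the system only performs a lookup at run time, which is consistent with the smoothness the abstract attributes to pre-compilation.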
To fully mimic the naturalness of human interaction in Human-Computer Interaction (HCI), emotion is an essential aspect that should not be overlooked. Emotion allows for rich and meaningful human interaction. In communicating, not only do we express our emotional state, but we are also affected by our conversational counterpart. However, existing work has largely focused on occurrences of emotion through recognition and simulation; the relationship between a speaker's utterance and the emotional response it triggers has not yet been closely examined. Observing and incorporating the underlying process that causes changes of emotion can provide useful information for dialogue systems in making more emotionally intelligent decisions, such as taking proper action with regard to the user's emotion and being aware of the emotional implications of their responses. To bridge this gap, in this paper we tackle three main tasks: 1) recognition of emotional states, 2) analysis of social-affective events in spontaneous conversational data, to capture the relationship between actions taken in discourse and the emotional responses that follow, and 3) prediction of emotional triggers and responses in a conversational context. The proposed study differs from existing work in that it focuses on the change of emotion (emotional response) and its cause (emotional trigger) on top of the occurrence of emotion itself. The analysis and experimental results are reported in detail, showing promising initial results for future work and development.
This paper proposes a lexical acquisition framework for a closed-domain chatbot. It learns the ontological categories of unknown terms in dialogues through implicit confirmation, instead of explicit questions that disrupt the flow of conversation. Our system generates an implicit confirmation request containing a category prediction for the unknown term, which may be incorrect. It then acquires the category only if the prediction was correct, by checking various cues that appear during the confirmation process. We divide this process into two steps. First, we propose a two-tiered method for predicting unknown term categories: it attempts to predict the most specific category and backs off to a more general category when it is insufficiently confident in its prediction. Direct evaluation showed that this two-tiered method makes correct category predictions 54.4% more often than a method that predicts only the most specific category. Next, we propose a method for identifying whether the categories included in confirmation requests are correct, using both the user response following the confirmation request and its context. We introduce features derived from an analysis of the confirmation process and construct a classifier from chat data collected through crowdsourcing. We show that the classifier can identify correct categories with a precision of 0.708.
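The two-tiered back-off can be sketched as follows, assuming two classifiers whose outputs are score dictionaries over specific and general categories respectively; the function name, the dictionary representation, and the threshold value are illustrative assumptions, not details from the paper:

```python
def predict_category(specific_scores, general_scores, threshold=0.6):
    """Two-tiered prediction: return the most specific category if the
    classifier is confident enough, otherwise back off to the best
    general category.

    specific_scores / general_scores: dicts mapping category -> confidence.
    """
    label, confidence = max(specific_scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    # Insufficient confidence at the specific tier: back off.
    general_label, _ = max(general_scores.items(), key=lambda kv: kv[1])
    return general_label
```

The design rationale is that a wrong specific guess in a confirmation request ("A ramen shop, right?") is costlier than a vaguer but correct one ("A restaurant, right?"), so low-confidence predictions retreat to the safer tier.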
This article addresses the estimation of engagement level based on listener behaviors such as backchannels, laughter, head nodding, and eye gaze. Engagement is defined as the degree to which a user is interested in and willing to continue the current interaction. When the engagement level is evaluated by multiple annotators, the criteria for annotation may differ from annotator to annotator. We assume that each annotator has his or her own character, which affects how he or she perceives the engagement level. We propose a latent character model that estimates the engagement level together with the character of each annotator as a latent variable. The experimental results show that the latent character model predicts the engagement label of each annotator with higher accuracy than models that do not take the character into account.
In this study, we developed and evaluated a dialogue system that enables an android robot to chat with users on Niconico Live, a live streaming service provided by Dwango Co., Ltd. On Niconico Live, broadcasters can talk to users who write comments displayed on the video stream. By using Niconico Live chat, we therefore eliminated the speech recognition errors that can occur in spoken conversation. In addition, because many comments are shown on the video stream simultaneously, the dialogue system can keep the conversation consistent by selecting a comment to which it can respond correctly. The dialogue system was designed as a retrieval-based system that finds an appropriate response to the user's utterance in a dialogue corpus. As a first step, we collected a dialogue corpus containing 4,460 pairs of comments and robot responses by teleoperating the android robot while it talked with users. We then completed the dialogue system on Niconico Live by integrating the dialogue corpus into it. To evaluate the system's performance, we recorded conversations between the android and users while running the system, showed the recordings to evaluators, and asked them to rate the naturalness and consistency of the conversation. The results indicate that Niconico Live users perceived the responses of the dialogue system as natural and found the chat with the android entertaining. Through this study, we demonstrated the applicability of the dialogue system on Niconico Live. However, it is difficult to discuss its effectiveness in other situations or with other communication media, such as a humanoid robot or a virtual agent. As future work, a comparative experiment might therefore lead to a better understanding of the effectiveness of the dialogue system for androids.
The backchannel plays an important role in smooth communication. For a dialogue system, appropriate backchanneling is a significant factor in making conversation more natural. However, many existing dialogue systems have poor backchannel patterns and can only produce simple responses. In this paper, we propose a method for extracting various backchannels that are suitable for the user's utterance, with no restriction on the diversity of backchannels. We conduct an experiment that compares the proposed method with two existing methods: a classification-based method and a simple extraction-based method with a message-length limit. The generated responses were evaluated by human workers. The results show that the proposed method generates backchannels that are highly diverse and more appropriate as responses to the user's utterance.
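The abstract leaves the extraction mechanism unspecified; one minimal way to picture an extraction-based approach that trades off suitability and diversity is to score candidate backchannels against the user utterance and then sample among the top scorers. Every detail here (the overlap score, the top-k sampling, the function name) is a hypothetical sketch, not the authors' method:

```python
import random

def extract_backchannel(user_utterance, candidates, k=3, seed=0):
    """Rank candidate backchannels by word overlap with the user
    utterance, then sample among the top k so that repeated calls
    do not always return the single best candidate (diversity)."""
    words = set(user_utterance.lower().split())
    ranked = sorted(candidates,
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    return rng.choice(ranked[:k])
```

With k=1 this degenerates into a plain best-match extractor, which makes the role of the sampling step in preserving diversity easy to see.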