While vector-based representations of word meanings (word vectors) have been widely used in a variety of natural language processing applications, they are not designed to capture the similarity between words in different languages. This prevents the use of word vectors in multilingual applications such as cross-lingual information retrieval and machine translation. To solve this problem, we propose a method that learns a cross-lingual projection of word representations from one language into another. Our method utilizes translatable context pairs obtained from a bilingual dictionary and surface similarity as bonus terms of the objective function. In our experiments, we evaluated the effectiveness of the proposed method on four languages: Japanese, Chinese, English, and Spanish. The experiments show that our method outperforms existing methods without any additional supervision.
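As a point of reference, the core of such a cross-lingual projection can be sketched as an ordinary least-squares linear map learned from seed translation pairs. This is a minimal illustration only: it omits the translatable-context pairs and surface-similarity bonus terms described above, and all names and data are hypothetical.

```python
import numpy as np

def learn_projection(src_vecs, tgt_vecs):
    # Least-squares linear map W minimizing ||src @ W - tgt||_F over seed pairs.
    W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
    return W

def nearest_target(vec, W, tgt_vocab, tgt_matrix):
    # Project a source vector and return the most cosine-similar target word.
    proj = vec @ W
    sims = (tgt_matrix @ proj) / (
        np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(proj) + 1e-12)
    return tgt_vocab[int(np.argmax(sims))]

# Toy data: the "target" space is an exact rotation of the "source" space.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 4))                      # 20 seed word vectors, dim 4
rotation = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # ground-truth linear map
tgt = src @ rotation
W = learn_projection(src, tgt)
vocab = [f"w{i}" for i in range(20)]
assert nearest_target(src[3], W, vocab, tgt) == "w3"  # maps onto its translation
```

On exactly linear toy data the map is recovered perfectly; with real word vectors the projection is only approximate, which is what the bonus terms in the objective are meant to improve.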
The quantum-inspired evolutionary algorithm (QEA) is an evolutionary algorithm that incorporates principles of quantum computation. In QEA, each gene is represented by a quantum bit (qubit), and the quantum superposition state is imitated. QEA can effectively shift from a global search to a local search. Han et al. showed that QEA has superior search performance to a genetic algorithm (GA) on the 0-1 knapsack problem (0-1KP). Nakayama et al. proposed a simpler algorithm referred to as QEA based on pair swap (QEAPS), which requires fewer parameters to be adjusted than QEA, and showed that QEAPS can find solutions of similar or even better quality than QEA on 0-1KP. However, in both QEA and QEAPS each gene is represented by a qubit, and both algorithms can only use a binary value as the observation result of a qubit. Therefore, Iimura et al. proposed a novel integer-type gene-coding method that obtains an integer value as an observation result by assigning multiple qubits to a gene locus. Moreover, they implemented this gene-coding method in both QEA and QEAPS and showed that it can find solutions of similar or even better quality in a shorter time than the conventional binary-type gene-coding method on the integer knapsack problem (IKP). However, the integer-type gene-coding method cannot deal with permutations in a simple way. To extend gene coding based on the qubit representation, Moriyama et al. proposed two interpretation methods based on the integer-type gene-coding method that can deal with permutations, and clarified that both proposed interpretation methods can search for the optimal solution even with qubit-based gene coding. This paper proposes a new gene-coding method that can deal with permutations.
The proposed method promises an improvement effect on solutions in permutation space, like the k-Opt method, and can effectively search for the optimal solution of the traveling salesman problem (TSP). Experimental results on TSP show that the discovery rate of the optimal solution with the proposed method is higher than with the conventional method in many cases for both QEA and QEAPS.
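For illustration, the qubit observation underlying QEA-style gene coding can be sketched as follows. This is a minimal sketch under the common convention that a qubit with angle θ yields bit 1 with probability sin²θ, and that the integer-type coding reads several observed bits at one locus as an integer; it is not the permutation coding proposed in the paper.

```python
import math
import random

def observe_qubit(theta, rng):
    # Observing a qubit collapses it to 1 with probability sin^2(theta).
    return 1 if rng.random() < math.sin(theta) ** 2 else 0

def observe_integer_locus(thetas, rng):
    # Integer-type gene coding: observe k qubits at one locus and read
    # the resulting bit string as an integer in [0, 2^k - 1].
    bits = [observe_qubit(t, rng) for t in thetas]
    return sum(b << i for i, b in enumerate(reversed(bits)))

rng = random.Random(42)
# Deterministic extremes: theta = pi/2 gives P(1) = 1, theta = 0 gives P(1) = 0.
assert observe_qubit(math.pi / 2, rng) == 1
assert observe_qubit(0.0, rng) == 0
# Three deterministic qubits observed as bits 1,0,1 read as the integer 5.
assert observe_integer_locus([math.pi / 2, 0.0, math.pi / 2], rng) == 5
```

In an actual QEA the angles are updated each generation (e.g., by rotation gates toward the best solution found), which is what drives the shift from global to local search.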
Topic models are generative models of documents that automatically cluster frequently co-occurring words (topics) from corpora. Topics can be used as stable features that represent the substance of documents, so topic models have been extensively studied as a technology for extracting latent information behind large data. Unfortunately, the typical time complexity of topic model computation is the product of the data size and the number of topics, so the traditional Markov chain Monte Carlo (MCMC) method cannot estimate many topics on large corpora within a realistic time. The data size is a common concern in Bayesian learning, and there are general approaches to address it, such as variational Bayes and stochastic gradient MCMC. On the other hand, the number of topics is a problem specific to topic models, and most solutions have been proposed for the traditional Gibbs sampler. However, it is natural to solve these problems at once, because as the data size grows, so does the number of topics in corpora. Accordingly, we propose new methods that cope with both data and topic scalability by applying fast computing techniques of the Gibbs sampler to stochastic gradient MCMC. Our experiments demonstrate that the proposed method outperforms the state of the art of traditional MCMC in a mini-batch setting, showing a better mixing rate and faster updates.
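The traditional collapsed Gibbs sampler that such methods accelerate can be sketched as follows. This is a minimal textbook implementation of LDA's Gibbs sampler, not the stochastic-gradient variant described above, and the toy corpus is hypothetical; the inner loop over every token and every topic is exactly the data-size × topic-count cost the abstract discusses.

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, iters=50, seed=0):
    # Collapsed Gibbs sampling for LDA: each token's topic is resampled from
    # p(z=k) proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta).
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), n_topics))    # doc-topic counts
    n_kw = np.zeros((n_topics, vocab_size))   # topic-word counts
    n_k = np.zeros(n_topics)                  # topic totals
    z = []
    for d, doc in enumerate(docs):            # random initialization
        zs = rng.integers(n_topics, size=len(doc))
        z.append(zs)
        for w, k in zip(doc, zs):
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                   # remove the token's current topic
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                   # assign the sampled topic
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_kw

# Toy corpus with two disjoint themes: words 0-2 and words 3-5 co-occur.
docs = [[0, 1, 2, 0, 1, 2]] * 5 + [[3, 4, 5, 3, 4, 5]] * 5
n_kw = lda_gibbs(docs, n_topics=2, vocab_size=6)
assert n_kw.sum() == 60   # topic-word counts conserve the 60 tokens
```

On this corpus the two topics tend to separate the two word groups; the proposed methods aim to reach such states with mini-batches and sparse updates instead of full sweeps.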
We consider the task of simultaneous anomaly detection in a system and its elements by comparing a pair of multivariate data sets. This task corresponds to simultaneously conducting anomaly detection and localization. To solve this task, we estimate scores that represent the anomalousness of the whole system and of its elements. This scoring is difficult for the following reasons. First, it is not trivial to estimate the scores while taking into account changes in the relationships between elements, which strongly correlate with each other. Second, the scores of the system and its elements must be estimated within a single framework; otherwise, the relation between the scores is unclear and localization becomes difficult. We propose a solution: a single framework that simultaneously estimates the anomalousness of a system and its elements. The key ideas of the method are twofold. First, we introduce doubly kernelized scores. We construct a score using the difference between kernel matrices defined between elements, and then represent this difference using a kernel defined between matrices. Second, we construct matrix kernels, which are defined between matrices of different dimensions. This method has the following properties: (1) it can be applied to any data sets where a kernel can be defined between the elements, (2) it can estimate scores for element groups of any size, and (3) it can be applied to a pair of data sets whose numbers of elements differ. In particular, the second and third properties are realized by introducing matrix kernels. We demonstrate the effectiveness of the proposed method through experimental results on three data sets.
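As a loose illustration of scoring from kernel-matrix differences (a simplified stand-in, not the doubly kernelized scores or matrix kernels described above), one can compare kernel matrices computed over the elements of two data sets and score each element by the corresponding row of the difference. The data and parameters below are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Kernel matrix between elements; each row of X is one element's observations.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def anomaly_scores(X_ref, X_test, gamma=1.0):
    # System score: Frobenius norm of the kernel-matrix difference.
    # Element scores: row norms of that difference (localization).
    D = rbf_kernel(X_ref, gamma) - rbf_kernel(X_test, gamma)
    return np.linalg.norm(D), np.linalg.norm(D, axis=1)

rng = np.random.default_rng(1)
X_ref = rng.normal(size=(5, 50))   # 5 elements, 50 observations each
X_test = X_ref.copy()
X_test[2] += 3.0                   # perturb element 2 only
system, elems = anomaly_scores(X_ref, X_test)
assert elems.argmax() == 2         # the perturbed element scores highest
assert system > 0
```

Because both scores are derived from the same difference matrix, the element scores decompose the system score, which is the kind of coupling the single-framework requirement asks for.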
This paper focuses on developing a model for estimating the communication skills of each participant in a group from multimodal (verbal and nonverbal) features. For this purpose, we use a multimodal group meeting corpus including audio signal data and head motion sensor data of participants observed in 30 group meeting sessions. The corpus also includes the communication skills of each participant, as assessed by 21 external observers with experience in human resource management. We extracted various kinds of features, such as spoken utterances, acoustic features, speaking turns, and the amount of head motion, to estimate communication skills. First, we created a regression model that infers the level of communication skill from these features using support vector regression, to evaluate the estimation accuracy. Second, we created a binary (high or low) classification model using a support vector machine. Experimental results show that the multimodal model achieved an R2 of 0.62 as the regression accuracy for overall skill, and a classification accuracy of 0.93. This paper reports the features effective in predicting the level of communication skill and shows that these features are also useful in characterizing the difference between participants with high-level communication skills and those without.
Large-field sensing, whose representative applications include agriculture and disaster reduction, has been widely studied using wireless sensor networks (WSNs). However, outdoor field sensing involves problems related to communication quality and distance. In particular, the communication reliability of portable wireless sensor networks is greatly influenced by surrounding environmental factors that affect the movement of the communication nodes. Unlike fixed wireless sensor networks, the radio wave conditions are dynamic, which yields many differences, such as the effects of plants. We address both performance and communication quality by proposing a rapidly re-routing mesh network capable of high-speed reconnection at the time of a failure. In this paper, we built a simple ad hoc water-gauge system that assumes node movement, implemented on a wireless sensor network. Furthermore, we evaluated the problems by deploying sensor nodes outdoors based on our proposed system.
In recent years, the digitization of medical and health data, including clinical data, health diagnostic data, and medication logs, has progressed rapidly. One potential application of electronic medical and health information is a system that makes a medical diagnosis from the contents recorded in electronic medical data and the relevant patient information. The task of understanding a patient's condition and making a precise diagnosis is hard to automate and requires a high degree of expertise. Toward the final goal of constructing a medical diagnostic support system, as a pilot study we attempt to build a question-answering program that automatically answers the medical licensing examination. The national medical licensing examination is a multiple-choice test containing a wide variety of problems. One type of problem asks for the appropriate disease name among multiple choices, given patient information and test results as the problem statement. We aimed to develop a program that answers this type of question. Through the development of such a question-answering program, we revealed fundamental issues and essential difficulties in the information processing of medical data, and constructed a foundation for disease diagnosis support based on patient information. In this paper, we developed a question-answering program and applied it to problems from the 107th and 108th national medical licensing examinations. We carefully examined and analyzed the results, including the problems answered correctly and those answered incorrectly, and proposed improvements toward a more accurate program.
A long-standing dream in research on artificial intelligence (AI) is to build a strong AI, which understands and processes its input, unlike a weak AI, which just processes it as programmed. Toward the realization of this dream, we need a mathematical formulation of what understanding is. In the present study, starting off by revisiting Shannon's mathematical theory of communication, I argue that it is a model of information transmission but not of information understanding, because of the common codebook shared by the sender and receiver. I outline the steps to build a model of information understanding by seeking possibilities of decoding without the shared codebook. Given the model of information understanding, I discuss its relationship to other known problems in AI research, such as the symbol grounding problem and the frame problem.
Piecewise sparse linear regression models using factorized asymptotic Bayesian inference (a.k.a. FAB/HME) have recently been employed in practical applications in many industries as a core algorithm of the Heterogeneous Mixture Learning technology. Such applications include sales forecasting in retail stores, energy demand prediction of buildings for smart cities, parts demand prediction to optimize inventory, and so on. This paper extends FAB/HME to classification and makes the following two essential improvements. First, we derive a refined version of the factorized information criterion that offers a better approximation of the Bayesian marginal log-likelihood. Second, we introduce an analytic quadratic lower-bounding technique in the EM-like iterative optimization process of FAB/HME, which drastically reduces the computational cost. Experimental results show the advantages of our piecewise sparse linear classification over state-of-the-art piecewise linear models.
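The abstract does not specify which quadratic lower bound is used; for reference, one widely used analytic quadratic lower bound for logistic models is the Jaakkola–Jordan bound on the log-sigmoid, sketched here under the assumption $\sigma(x) = 1/(1+e^{-x})$:

```latex
\log \sigma(x) \;\ge\; \log \sigma(\xi) \;+\; \frac{x-\xi}{2} \;-\; \lambda(\xi)\,\bigl(x^2 - \xi^2\bigr),
\qquad
\lambda(\xi) \;=\; \frac{1}{4\xi}\,\tanh\!\Bigl(\frac{\xi}{2}\Bigr),
```

with equality at $x = \pm\xi$. Because the bound is quadratic in $x$, maximizing it yields closed-form parameter updates inside each EM-like iteration, which is the general mechanism by which such bounds reduce computational cost.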
The purpose of this study is to answer the question of whether or not an artificial agent can build an in-group relation with a human. To answer this, we created a three-way discussion setting between a participant and two artificial agents: one agent took the same opinion as the participant, while the other took the opposite. This resulted in a feeling of group identity between the participant and the first agent, i.e., the one on the same side. This feeling of group identity was shown to be detectable using error-related negativity (ERN), a component of an event-related potential that accompanies errors in speeded performance. We found that the amplitude of the ERN was larger for the same-side agent's failures than for the other side's. We also found that the ERN difference correlated with empathic ability as measured by a questionnaire. From these results, we can say that a person can form an in-group relation with an artificial agent, and that this relation can be detected using the ERN.
This paper describes a technique to recognize the "supportiveness" of a given text for an argument topic object and a value. Given an argument topic object (o), a value (v), and a text fragment (t), supportiveness refers to whether or not t supports the hypothesis "o promotes/suppresses v". For example, with "o: casino" and "v: employment", the text "The casinos in Mississippi have created 35,000 jobs." should support the hypothesis "o promotes v". Combined with text search, this technique enables the automatic collection of texts representing reasons and counterexamples for hypotheses that humans build up (e.g., "casino promotes employment"). Because the difference from relation extraction lies in the polarity of relations, the proposed method utilizes polarity multiplications based on local syntax structures, extending the polarity-reversing hypothesis in sentiment analysis. We propose feature combinations consisting of "primary features" and "secondary features" for supportiveness recognition. Primary features represent local syntax structures around a given target or a given value. Secondary features represent global syntax structures generated by combining the primary features. The proposed method computes a weighted sum of the secondary features to recognize promoting/suppressing supportiveness. Experiments showed that our method outperforms a bag-of-words baseline and a conventional relation extraction method.
This research aims to develop human-agent interactions that users can genuinely enjoy. We expect that, to enhance positive impressions, human-agent interactions should reflect the users' own preferences, and we have developed an agent that satisfies this goal. The agent dynamically learns from the reward gained through user-agent interaction and thereby improves its interaction with the user; in turn, the better interaction enhances the user's impression of the agent. To test the effectiveness of the proposed technique in diverse interactions and in environments that enable emergent behavior, we selected a ball game environment and proposed a learning model based on the ball game activity. Using a simulator, we constructed human agents that interacted with our agents in the ball game environment. To confirm that the proposed system could favorably impress users and create a variety of enjoyable interactions, we also conducted a sensitivity evaluation with real participants, who were favorably impressed by the agent. The reward grant frequencies assigned by the different participants largely influenced both the interaction and the participants' sensitivity evaluations.
Three-party multi-issue closed negotiation is an important class of real-life negotiation. Negotiation problems usually have constraints such as unknown opponent utilities in real time and time discounting. Recently, attention in this field has shifted from bilateral to multilateral approaches; in three-party negotiations, agents must consider two opponents simultaneously. We propose a negotiation strategy inspired by the analytic hierarchy process (AHP), extended to three-party negotiations by combining the opponents' estimated utilities. The estimation of an opponent's utility is determined by counting the opponent's bids and the importance allocation. In addition, we propose an opponents' bid evaluation method, a concession function, and an acceptance strategy inspired by AHP. Experimental results demonstrate that our proposed method achieves higher social welfare than state-of-the-art negotiation strategies, namely the top five agents of ANAC2015.
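For reference, the standard AHP priority computation that such a strategy builds on can be sketched as follows. This shows the common geometric-mean approximation of the principal eigenvector; the comparison matrix is a hypothetical example, not the paper's bid-evaluation model.

```python
import numpy as np

def ahp_weights(pairwise):
    # Priority weights from an AHP pairwise-comparison matrix, via the
    # geometric-mean (row) approximation of the principal eigenvector.
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical judgments: issue A is 3x as important as B and 5x as
# important as C (a reciprocal, perfectly consistent matrix).
P = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   5/3.0],
              [1/5.0, 3/5.0, 1.0]])
w = ahp_weights(P)
assert w.argmax() == 0                 # A gets the largest priority
assert abs(w.sum() - 1.0) < 1e-9       # weights are normalized
```

In a negotiation strategy, weights like these could rank issues or combine the estimated utilities of the two opponents into one evaluation of a bid.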
We propose a fair and accurate peer assessment method for group work using a multi-agent trust network. Although group work is an effective educational method, accurately assessing individual students is not easy. Mutual evaluation is often used to assess group work because students can observe the contributions of other students. However, mutual evaluation presents potential problems such as irresponsible evaluations and collusion. Our proposed method identifies and excludes such cheating and unfair ratings on the basis of trust networks, which are often used to evaluate sellers in e-marketplaces from customers' ratings. We assume a semester-long group-work course in which students mutually evaluate other group members a few (three to five) times, since too many evaluation rounds burden students. We introduce an iterative method that alternately generates trust networks and calculates cluster-trust values, which represent the similarity of evaluations in a cluster network. Using a multi-agent simulation, we experimentally show that our method can find irresponsible students and collusive groups and considerably improve the accuracy of final marks with only a few rounds of mutual evaluation. Thus, our method can provide useful information for assessment to instructors and reduce free riders' incentives for cheating.
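A minimal sketch of the alternating idea, consensus marks from trust-weighted ratings and rater trust from agreement with the consensus, is shown below. It is a simplified stand-in, not the paper's trust-network and cluster-trust formulation, and the scores are hypothetical.

```python
import numpy as np

def trusted_marks(ratings, iters=20):
    # ratings[i, j]: score rater i gives ratee j (NaN where not rated).
    # Alternate: (a) marks = trust-weighted mean rating per ratee,
    #            (b) trust = inverse of a rater's squared deviation from marks.
    n = ratings.shape[0]
    trust = np.ones(n)
    mask = ~np.isnan(ratings)
    R = np.where(mask, ratings, 0.0)
    for _ in range(iters):
        w = trust[:, None] * mask
        marks = (w * R).sum(0) / np.maximum(w.sum(0), 1e-12)
        err = np.where(mask, (R - marks) ** 2, 0.0).sum(1) / np.maximum(mask.sum(1), 1)
        trust = 1.0 / (1.0 + err)
    return marks, trust

# Four honest raters agree on the true scores; rater 4 gives everyone a
# flat 5 (an irresponsible evaluation). No self-evaluation (NaN diagonal).
true = np.array([8.0, 6.0, 9.0, 7.0, 4.0])
ratings = np.tile(true, (5, 1))
ratings[4] = 5.0
np.fill_diagonal(ratings, np.nan)
marks, trust = trusted_marks(ratings)
assert trust[4] < trust[:4].min()   # the flat rater earns the least trust
```

The iteration down-weights the deviant rater, so the final marks move toward the honest consensus; the paper's cluster-trust values additionally target collusive groups, which this sketch does not model.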
The effect of option markets on their underlying markets has been studied intensively since the first option market was launched. Despite considerable efforts, including the development of theoretical and empirical approaches, we do not yet have conclusive evidence on this effect. We investigate the effect of option markets, especially that of dynamic hedging, on their underlying markets by using an artificial market. We propose a two-market model in which an option market and its underlying market interact. We confirmed that trading behaviors on the expiration date do not affect the underlying market, but that dynamic hedging and arbitrage trading change the volatility of the underlying asset price under certain conditions.
In this paper, we address the development of a module for a framework that manages the execution of multi-agent simulations over innumerable combinations of input parameters, in order to find effective input parameters among the exhaustive parameter combinations. Toward this end, we focus on design of experiments and propose a method that automatically generates orthogonal parameter sets by using differences in the output results. Concretely, statistical analysis identifies parameters showing significant differences between input values, and combinations of parameters around these significant parameters are generated. To investigate the effectiveness of the proposed method, we compare, through an evacuation simulation, the results of all combinations of input parameters with those of the reduced combinations produced by a module implementing the proposed method.
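For illustration, the design-of-experiments idea of screening parameters with an orthogonal array and main effects can be sketched as follows. The L4(2³) array and the toy response model are standard textbook material and hypothetical stand-ins, not the paper's module or evacuation simulation.

```python
import statistics

# L4(2^3) orthogonal array: 4 runs balance every pair of levels across
# three two-level factors, instead of all 2^3 = 8 exhaustive runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def main_effects(runs, outputs):
    # Per-factor main effect: mean output at level 1 minus mean at level 0.
    effects = []
    for f in range(len(runs[0])):
        hi = [y for r, y in zip(runs, outputs) if r[f] == 1]
        lo = [y for r, y in zip(runs, outputs) if r[f] == 0]
        effects.append(statistics.mean(hi) - statistics.mean(lo))
    return effects

def sim(r):
    # Hypothetical simulation output dominated by factor 0.
    return 10 * r[0] + 1 * r[1] + 0 * r[2]

effects = main_effects(L4, [sim(r) for r in L4])
# Factor 0 shows the largest (significant) effect, so further parameter
# combinations would be generated around it.
assert max(range(3), key=lambda f: abs(effects[f])) == 0
```

The screening step stands in for the abstract's statistical analysis: only factors with significant effects need denser follow-up parameter combinations, which is what reduces the exhaustive search.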
This paper proposes a behavioral strategy with which agents select rational or reciprocal behavior depending on their past cooperative activities. The rational behavioral strategy lets agents select actions that try to maximize direct and immediate rewards, while agents with the reciprocal behavioral strategy try to work with cooperative partners for steady task execution. Although rational action is effective for team formation for group work in an unbusy environment, it may cause conflicts in busy, large-scale multi-agent systems due to task concentration on a few highly capable agents, resulting in degradation of overall performance. This also destabilizes the cooperative relationships between agents, affecting the learning mechanism that identifies which tasks and/or agents will provide more rewards. Our proposed method enables agents to change their behavioral strategy on the basis of the past members of successful group work. We experimentally show that it eventually stabilizes the cooperative relationships between agents and improves overall performance in busy environments. We also show that a certain ratio of rational to reciprocal agents results in good performance.
Since systemic risk received attention in the recent financial crisis, a wide variety of studies on how financial regulations should be reformed to control such risk have progressed. Among these, mathematical and computational studies have analyzed how lending and borrowing banks go bankrupt in a chain via the interbank network. This study deals with a chain of bankruptcies of financial institutions endogenously caused by the deterioration of their financial situations due to changes in the prices of risky assets, focusing on the macro-level collapse caused by shocks stemming from the general market risk factors addressed in theoretical studies after the 2008 crisis. For this purpose, the authors develop an agent-based simulation platform and examine how current systemic risk management regulations affect such bankruptcies. The main findings are as follows: first, the pertinent management regulations depend on the market environment; second, some combinations of management regulations may increase the possibility of bankruptcies because of greater sensitivity to market changes.