Dynamical systems, which are described using differential equations, present numerous benefits for time-series information processing: they can accommodate continuous change and dynamic features. However, they are ill-suited to processing complex spatiotemporal patterns such as the temporal order of motions, and are therefore often combined with symbol-processing or discrete-event systems to produce hybrid systems. Here we propose a method for processing sequences of elementary motions based only on distributed representations and a neurodynamical system. To assess the method's potential, we constructed a human motion estimation system using a trajectory attractor model: a recurrent neural network with continuous-time dynamics. The system can handle novel hand and arm motions analogically, based on similarity between code patterns. It can also process complex sequences of motions robustly because the network state is attracted to a long trajectory attractor formed in a series of subspaces corresponding to elementary motions, along which the network makes stable state transitions. Experimental results obtained from surface myoelectric signals show that the system estimated 15 complex hand and arm motions with an average accuracy of about 86%, demonstrating the system's great potential.
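To make the "recurrent neural network with continuous-time dynamics" concrete, here is a minimal Euler-integration sketch of a generic continuous-time RNN update. It illustrates the class of dynamics involved, not the paper's trajectory attractor model itself; the time constant, weights, and step size are illustrative assumptions.

```python
import math

def ctrnn_step(x, W, u, tau=1.0, dt=0.01):
    """One Euler step of generic continuous-time RNN dynamics:
    tau * dx/dt = -x + W @ tanh(x) + u.
    x: state vector, W: recurrent weight matrix, u: external input.
    (A common textbook form; the paper's model is a specific instance
    of such continuous-time dynamics.)"""
    n = len(x)
    y = [math.tanh(v) for v in x]
    return [x[i] + (dt / tau) * (-x[i]
                                 + sum(W[i][j] * y[j] for j in range(n))
                                 + u[i])
            for i in range(n)]
```

With zero weights and zero input, the state simply decays toward the origin, which is the baseline behavior that the learned weights reshape into attractors.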
Machine learning on RDF data has become important in the Semantic Web field. However, RDF graph structures are redundantly represented by noisy and incomplete data on the Web. To apply SVMs to such RDF data, we propose a kernel function that computes the similarity between resources on RDF graphs. The kernel is defined over features on RDF paths, selected by information gain ratio filtering to eliminate the redundancy of RDF graphs. Kernel functions are a very flexible framework and can be applied not only to SVMs but also to principal component analysis, canonical correlation analysis, clustering, and so on. However, computing the proposed kernel incurs high time and memory costs because of the exponential growth of the number of features in RDF graphs. We therefore propose an efficient algorithm that calculates the kernel over the redundant features of RDF graphs. Our experiments show the performance of the proposed kernel with SVMs on classification tasks for RDF resources and its advantages over existing kernels.
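A minimal sketch of the general idea of a path-feature kernel on a labeled graph, assuming a toy adjacency-list representation and a precomputed set of selected features (standing in for the information-gain-ratio filter). This is an illustration of the feature-counting scheme, not the paper's actual algorithm or its efficiency optimizations.

```python
from collections import Counter

def path_features(graph, resource, depth):
    """Count predicate-label paths of bounded depth starting from a resource.
    graph: dict mapping a node to a list of (predicate, object) pairs."""
    features = Counter()
    stack = [(resource, ())]
    while stack:
        node, path = stack.pop()
        if len(path) == depth:
            continue
        for pred, obj in graph.get(node, []):
            new_path = path + (pred,)
            features[new_path] += 1
            stack.append((obj, new_path))
    return features

def path_kernel(graph, r1, r2, depth=2, selected=None):
    """Inner product of path-feature counts; `selected`, if given,
    restricts the sum to features kept by some filtering step."""
    f1 = path_features(graph, r1, depth)
    f2 = path_features(graph, r2, depth)
    keys = set(f1) & set(f2)
    if selected is not None:
        keys &= selected
    return sum(f1[k] * f2[k] for k in keys)
```

Because the number of paths grows exponentially with depth, a practical implementation must avoid materializing all features, which is the cost problem the abstract's efficient algorithm addresses.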
This paper reports progress from 2014 to 2015 on the development of solvers for Japanese comprehension questions in university entrance exams. The target questions are the multiple-choice questions in the essay section (Question No. 1) of the Japanese Language (Kokugo) part of the National Center Test. In 2014, we introduced a new scoring function using clause boundaries, which are detected automatically by our newly developed tool. The score of a choice is calculated as the average clause similarity between the choice and a selected part of the text body. In 2015, we developed a machine-learning-based method that uses seventeen features to determine the answer, including surface-similarity-based features, clause-similarity-based features, and choice-discriminative features. In addition to the first formal run of the Torobo Project in 2013, we participated in the two formal runs in 2014 and 2015; to date, we have been the only participant to submit results in Contemporary Japanese Language. After the 2015 formal run, we conducted an experiment using 276 questions to compare all developed solvers with various parameters. The best performance was obtained by a 2015 solver, which produced 117 (42%) correct answers. For the subset of 56 previous official questions of the National Center Test, a 2014 solver was the best, producing 32 (57%) correct answers. However, there is no statistically significant difference between the best 2015 solver and our first solver developed in 2013.
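The clause-similarity scoring scheme can be sketched as follows: each choice is scored by averaging, over its clauses, the best similarity to any clause of the selected body passage, and the highest-scoring choice wins. The Jaccard word-overlap similarity below is a stand-in for illustration; the paper's actual similarity measure and feature set are not shown here.

```python
def jaccard(a, b):
    """Toy word-overlap similarity between two clause strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def clause_score(choice_clauses, body_clauses, sim):
    """Average over choice clauses of the best similarity
    to any clause in the selected part of the text body."""
    if not choice_clauses:
        return 0.0
    return sum(max(sim(c, b) for b in body_clauses)
               for c in choice_clauses) / len(choice_clauses)

def answer(choices, body_clauses, sim):
    """Return the index of the choice with the highest score."""
    scores = [clause_score(c, body_clauses, sim) for c in choices]
    return scores.index(max(scores))
```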
In this paper, we propose an agent-based urban model in which the relationship between a central urban area and a suburban area is expressed simply. The model implements the allocation and bustle of public facilities where people stop off in daily life. We clarify that residents' choices of transportation and residence act to change the urban structure and environment. We also discuss how a compact urban structure and a reduction in carbon dioxide emissions can be achieved through urban development policies and improvements in the attractiveness of facilities to pedestrians and cyclists. In addition, we conduct an experiment in which cars are excluded from the city center. The experimental results confirm that this automobile control measure would be effective in decreasing automobile use while yielding a compact urban structure.
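The core agent decision can be pictured as a simple utility comparison: walking or cycling becomes more attractive as facilities improve and less attractive with distance, while a car-exclusion policy removes the driving option in the center. The function below is a toy sketch under these assumptions; the weights and functional form are illustrative, not the paper's calibrated model.

```python
def choose_mode(distance, car_allowed, attractiveness):
    """Toy utility-based mode choice for one agent.
    Walking/cycling utility rises with facility attractiveness and
    falls with distance; the car option vanishes where cars are
    excluded. All constants are illustrative assumptions."""
    walk_utility = attractiveness - 0.5 * distance
    car_utility = 1.0 if car_allowed else float("-inf")
    return "walk" if walk_utility >= car_utility else "car"
```

Aggregated over many agents, such choices are what drive the model's feedback between transportation use and urban structure.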
We propose a method for extracting semantic structure from procedural texts to enable more intelligent search and analysis. Procedural texts describe a sequence of procedures for creating an object or bringing an object into a certain state, and they have many potential applications in artificial intelligence. Procedural texts are relatively clear, with little modality or dependence on viewpoint, so their procedures can be described using flow graphs. We adopt recipe texts as examples of procedural texts and directed acyclic graphs (DAGs) to represent their semantic structure. The nodes of a flow graph are important terms in a recipe text, and the edges are relationships between those terms, reflecting language phenomena such as dependency, predicate-argument structure, and coreference. Because trees cannot represent the procedures of recipes sufficiently, DAGs are adopted as the representation of recipes. We first apply word segmentation and automatic term recognition, and then convert the entire text into a flow graph. For word segmentation and automatic term recognition, we adopt existing methods; we then propose a method for estimating the flow graph from the term recognition results. Our method is based on the maximum spanning tree algorithm, which is popular in dependency parsing, and simultaneously deals with the language phenomena listed above. We experimentally evaluate our method on a flow graph corpus created from various recipe texts on the Internet.
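The maximum-spanning-tree idea from dependency parsing can be sketched as follows: given an edge-scoring function over term pairs, attach each non-root term to its highest-scoring head. This greedy step is only the first stage of the Chu-Liu/Edmonds algorithm (a full implementation also contracts cycles), and the node names and scores here are hypothetical; the paper's method additionally handles DAG structure and the listed language phenomena.

```python
def greedy_arborescence(nodes, root, score):
    """Attach each non-root node to its highest-scoring head.
    score(head, dep) -> float. This is the greedy first step of
    Chu-Liu/Edmonds; cycle contraction is omitted for brevity."""
    heads = {}
    for dep in nodes:
        if dep == root:
            continue
        heads[dep] = max((h for h in nodes if h != dep),
                         key=lambda h: score(h, dep))
    return heads
```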