Expressing the confidence level of a system's suggestions through speech sounds gives users an important cue for judging how likely the suggestions are to be correct. We hypothesized that expressing confidence levels with human-like expressions would give users a poorer impression of the system than artificial subtle expressions (ASEs) would when the quality of the presented information does not match the expressed confidence level. We confirmed this hypothesis through a psychological experiment.
This paper presents a natural language processing tool, called CONV, that translates Japanese sentences into well-formed formulas in an extended predicate logic, focusing on the knowledge representation scheme and the method of translating into that scheme. The tool was developed for application to intelligent systems such as semantic information retrievers and dialogue systems. In this tool, both simple and complex sentences are represented by a single atomic formula that uses ordinary words as its predicate symbol and terms. Subordinate clauses of a complex sentence are embedded in the predicate or terms of the atomic formula for the main clause, using the same form as the main clause. Sentences are parsed with existing natural language processing tools and electronic dictionaries, and are then translated into logic formulas by newly developed rules.
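As a rough illustration of this representation scheme (the data shapes and names below are our own, not CONV's actual API), a subordinate clause can be held as a formula embedded in a term of the main clause's formula:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Formula:
    """A single atomic formula: an ordinary word as the predicate symbol,
    with words or embedded formulas as terms."""
    predicate: str
    terms: Tuple[object, ...]  # each term is a word (str) or an embedded Formula

    def __str__(self) -> str:
        return f"{self.predicate}({', '.join(str(t) for t in self.terms)})"

# The subordinate clause "the dog runs" is itself a Formula, embedded as a
# term of the main clause's formula in exactly the same form as a main clause.
subordinate = Formula("run", ("dog",))
main = Formula("believe", ("child", subordinate))
print(main)  # believe(child, run(dog))
```

The key property sketched here is uniformity: a subordinate clause needs no special node type, since it is represented in the same atomic-formula form as the main clause.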
This paper describes an interactive learning-aid system for the analytical comprehension of music that highlights an orchestral score in colors, and it classifies and evaluates the learning process on the system. An orchestral work integrates many instrumental parts, and musicians must be proficient in reading the score analytically in order to understand its multifaceted structure. However, many people find it difficult to comprehend this musical structure: some intermediate performers can read and perform their own part but cannot understand the role of each part in the assembled whole. To address this problem, our previous paper proposed an interactive support system called ScoreIlluminator that enables musicians (and non-musicians) to easily represent how they perceive an orchestral work, e.g., distinguishing the melody parts from the others and recognizing the similarity across instrumental parts. ScoreIlluminator clusters the parts of an orchestral score according to their roles in the whole, and it displays the clusters on the score by assigning a color to each cluster. Users can manipulate the clustering parameters through the system's user interface. The system employs two major design concepts: ``colored notation'' and ``directability''. ``Colored notation'' visualizes the roles of and relations between parts, which are estimated by the system. The estimation is based on a similarity metric over four musical features: rhythmic activity, sonic richness, melodic activity, and consonance activity. Using these metrics, the clustering phase is conducted with an unsupervised learning algorithm (the k-means algorithm). The system provides ``directability'' through an interactive interface in which subjects can freely manipulate parameter settings and see the change in score highlighting in real time.
In this process, users learn the roles of parts and the relationships between parts, and they explore multifaceted interpretations of the music. To verify the effectiveness of the system, we conducted a user-experience experiment with four intermediate musicians. The musicians showed various kinds of progress in interpreting the score. Drawing on episodes from the experiment, we discuss how the system fostered the subjects' analytic skills in orchestral-score reading and music listening.
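The clustering step described above can be sketched with plain k-means over per-part feature vectors; the feature values and part names here are illustrative assumptions, and the paper's actual feature extraction and parameter settings are not reproduced:

```python
import random

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means over per-part feature vectors (e.g. rhythmic activity,
    sonic richness, melodic activity, consonance activity)."""
    rng = random.Random(seed)
    names = list(features)
    centroids = [features[n] for n in rng.sample(names, k)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    assign = {}
    for _ in range(iters):
        # assignment step: each part joins the nearest centroid's cluster
        assign = {n: min(range(k), key=lambda c: dist2(features[n], centroids[c]))
                  for n in names}
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [features[n] for n in names if assign[n] == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return assign  # cluster id per part; each cluster maps to one highlight color

parts = {  # illustrative feature values, not taken from the paper
    "violin1": (0.9, 0.3, 0.9, 0.5),
    "violin2": (0.8, 0.3, 0.8, 0.5),
    "cello":   (0.2, 0.8, 0.2, 0.9),
    "bass":    (0.1, 0.9, 0.1, 0.9),
}
labels = kmeans(parts, k=2)
```

With these toy features, the two violins land in one cluster and the two low strings in the other, so each group would receive a distinct score color.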
Content providers want to make recommendations across multiple interrelated domains such as music and movies. However, existing collaborative filtering (CF) methods fail to accurately identify items that may interest a user but lie in domains the user has never accessed. This is mainly because of the paucity of user transactions across multiple item domains. Our method is based on the observation that users who share similar items or who share social connections can provide recommendation chains (sequences of transitively associated edges) to items in other domains. It first builds domain-specific user graphs (DSUGs) whose nodes (users) are linked by weighted edges that reflect user similarity. It then connects the DSUGs via users who rated items in several domains, or who share social connections, to create a cross-domain user graph (CDUG). It performs Random Walk with Restarts on the CDUG to extract user nodes that are related to the starting user node even though they are not present in the starting user's DSUG. It then adds items possessed by those users to the starting user's recommendations. Furthermore, to extract many more user nodes, we employ a taxonomy-based similarity measure under which users are similar if they share the same items and/or the same classes; this yields many suitable routes from the starting user node to other user nodes in the CDUG. An evaluation using rating datasets in two interrelated domains, together with users' social connection histories extracted from a blog portal, indicates that our method identifies potentially interesting items in other domains with higher accuracy than existing CF methods.
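The random-walk step can be sketched as follows; the graph, edge weights, and node names are illustrative assumptions, not the paper's data or implementation:

```python
def rwr(adj, start, restart=0.15, iters=100):
    """Random Walk with Restarts on a weighted user graph.
    `adj` maps node -> {neighbor: edge weight}; returns steady-state
    relevance scores of every node with respect to `start`."""
    nodes = list(adj)
    p = {n: (1.0 if n == start else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for n, mass in p.items():
            total = sum(adj[n].values())
            for m, w in adj[n].items():
                # walk to a neighbor with probability proportional to edge weight
                nxt[m] += (1 - restart) * mass * w / total
        nxt[start] += restart  # restart: jump back to the starting user
        p = nxt
    return p

# Toy CDUG: u1 and u2 share music-domain taste; u2 bridges to u3,
# a user active only in the movie domain.
cdug = {
    "u1": {"u2": 1.0},
    "u2": {"u1": 1.0, "u3": 0.5},
    "u3": {"u2": 0.5},
}
scores = rwr(cdug, start="u1")
# u3 receives a nonzero score even though u1 never rated movies, so
# u3's movie items become candidate cross-domain recommendations for u1.
```

The restart probability controls how far relevance spreads along recommendation chains before being pulled back to the starting user.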
In trajectory data mining, which discovers frequent movement patterns from the trajectories of moving objects, both mining complex patterns and processing massive trajectory data are challenging problems. In this paper, we propose a new approach to trajectory data mining that addresses both. To make trajectories easier to process, traditional approaches quantize them with a grid of constant resolution. However, the optimal resolution often varies across areas, which makes it difficult to mine complex patterns. Furthermore, the amount of computational resources required increases as the resolution becomes higher, which makes processing a massive dataset difficult. To solve these problems, we propose a parallelized approach based on quadtree search with hierarchical grids. We employ a hierarchical grid structure with multiple resolutions to quantize trajectories. The approach initially searches for frequent patterns at a coarse grid level and drills down into finer grid levels to find more fine-grained patterns when needed. In this approach, we extract frequent movements as patterns in terms of the time duration of movements within a margin of error. Since the optimal time error varies across grid resolutions, we propose a method for estimating optimal time errors. We also present a parallelization method based on MapReduce. When drilling down into a pattern, we mine child patterns in each region the parent pattern passes through and integrate the child patterns along their parent pattern. In our evaluation, experiments on real-world data show the effectiveness of our approach in mining complex patterns with limited computational resources.
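A simplified sketch of the coarse-to-fine drill-down idea follows; it covers spatial cell frequency only, and the paper's time-duration patterns, time-error estimation, and MapReduce parallelization are omitted:

```python
from collections import Counter

def cell(point, level):
    """Grid cell of a point in the unit square; level l gives a 2**l x 2**l grid."""
    n = 2 ** level
    x, y = point
    return (min(int(x * n), n - 1), min(int(y * n), n - 1))

def frequent_cells(points, min_support, max_level):
    """Quadtree-style search: only cells frequent at one level are split into
    their four children at the next level; infrequent regions are pruned."""
    frequent = []
    candidates = {(cx, cy) for cx in range(2) for cy in range(2)}  # coarse 2x2 grid
    for level in range(1, max_level + 1):
        counts = Counter(c for c in (cell(p, level) for p in points)
                         if c in candidates)
        hot = [c for c, n in counts.items() if n >= min_support]
        frequent.extend((level, c) for c in hot)
        # drill down: expand each frequent cell into its four quadtree children
        candidates = {(2 * cx + dx, 2 * cy + dy)
                      for cx, cy in hot for dx in (0, 1) for dy in (0, 1)}
    return frequent

# Toy data: three points cluster near the origin, one isolated point elsewhere.
trajectory_points = [(0.05, 0.05), (0.10, 0.10), (0.15, 0.12), (0.70, 0.80)]
patterns = frequent_cells(trajectory_points, min_support=3, max_level=2)
```

On this toy data the dense region near the origin survives refinement to the finer level, while the isolated point's region is pruned at the coarse level and never examined further, which is the source of the resource savings.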