Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Volume 26, Issue 2
Displaying 1-14 of 14 articles from this issue
Special: Selected Short Papers from JSAI2010
Short Paper
  • Nguyen Tuan Duc, Danushka Bollegala, Mitsuru Ishizuka
    2011 Volume 26 Issue 2 Pages 307-312
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    Latent relational search is a new search paradigm based on the proportional analogy between two entity pairs. A latent relational search engine is expected to return the entity "Paris" as the answer to the question mark (?) in the query {(Japan, Tokyo), (France, ?)}, because the relation between Japan and Tokyo is highly similar to that between France and Paris. We propose a method for extracting entity pairs from a text corpus to build an index for a high-speed latent relational search engine. By representing the relation between the two entities in an entity pair using lexical patterns, the proposed latent relational search engine can precisely measure the relational similarity between two entity pairs and can therefore accurately rank the result list. We evaluate the system using a Web corpus and compare its performance with an existing relational search engine. The results show that the proposed method achieves high precision and MRR while requiring little query processing time. In particular, the proposed method achieves an MRR of 0.963 and retrieves the correct answer as the top-ranked result for 95% of the queries.
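
    The ranking step can be sketched in a few lines. This is not the authors' implementation; it assumes a hypothetical index mapping each entity pair to a bag of lexical patterns, and scores candidate fillers by cosine similarity between pattern vectors, so that "Paris" outranks other fillers.

      import math
      from collections import Counter

      def cosine(p: Counter, q: Counter) -> float:
          """Cosine similarity between two bag-of-patterns vectors."""
          dot = sum(p[w] * q[w] for w in set(p) & set(q))
          norm = (math.sqrt(sum(v * v for v in p.values()))
                  * math.sqrt(sum(v * v for v in q.values())))
          return dot / norm if norm else 0.0

      # Hypothetical index: lexical patterns (with counts) observed between pairs.
      index = {
          ("Japan", "Tokyo"):  Counter({"X 's capital Y": 5, "Y , capital of X": 3}),
          ("France", "Paris"): Counter({"X 's capital Y": 4, "Y , capital of X": 2}),
          ("France", "Lyon"):  Counter({"Y , a city in X": 6}),
      }

      def answer(query_pair, source_entity):
          """Rank candidate fillers for the query {query_pair, (source_entity, ?)}."""
          scores = [(cosine(index[query_pair], patterns), pair[1])
                    for pair, patterns in index.items()
                    if pair[0] == source_entity]
          return sorted(scores, reverse=True)

      print(answer(("Japan", "Tokyo"), "France"))  # "Paris" ranks first
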
    Download PDF (606K)
  • Kiyoshi Izumi, Takashi Goto, Tohgoroh Matsui
    2011 Volume 26 Issue 2 Pages 313-317
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    In this study, we propose a new text-mining method for long-term market analysis. Using our method, we perform out-of-sample tests using monthly price data of three financial markets: the Japanese government bond market, the Japanese stock market, and the yen-dollar exchange market. First, we extract feature vectors from the monthly reports of the Bank of Japan. Then, the trend of each market is estimated by regression analysis using the feature vectors. In a comparison with support vector regression, the proposed method forecast both the level and the direction of long-term market trends with higher accuracy. Moreover, our method achieved high average annualized returns in an implementation test.
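
    A minimal sketch of the regression step, with toy numbers standing in for the paper's report corpus and market data: monthly reports become term-count feature vectors, and ordinary least squares maps them to a market trend whose sign gives the direction forecast.

      import numpy as np

      # Hypothetical toy data: rows are monthly reports encoded as term counts;
      # y is the observed market trend (e.g., monthly change of a price index).
      vocab = ["recovery", "weak", "inflation", "easing"]
      X = np.array([[3, 0, 1, 0],
                    [0, 4, 0, 2],
                    [2, 1, 2, 0],
                    [0, 3, 0, 3]], dtype=float)
      y = np.array([0.8, -0.6, 0.5, -0.9])

      # Fit ordinary least squares with an intercept column.
      A = np.hstack([X, np.ones((len(X), 1))])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      # Out-of-sample style forecast for a new report.
      x_new = np.array([1, 0, 2, 0, 1.0])  # term counts + intercept
      level = x_new @ coef
      print(f"forecast level: {level:+.3f}, direction: {'up' if level > 0 else 'down'}")
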
    Download PDF (619K)
  • Katsumasa Yoshikawa, Tsutomu Hirao, Sebastian Riedel, Masayuki Asahara ...
    2011 Volume 26 Issue 2 Pages 318-323
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    This paper presents a new approach that exploits coreference information to extract event-argument (E-A) relations from biomedical documents. This approach has two advantages: (1) it can extract a large number of valuable E-A relations for document understanding, based on the concept of salience in discourse; (2) it enables us to identify cross-sentence E-A relations using transitivity involving coreference relations. We propose two coreference-based models: a pipeline based on a Support Vector Machine (SVM) classifier, and a joint Markov Logic Network (MLN). We show the effectiveness of these models on the GENIA Event Corpus.
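
    The cross-sentence idea in (2) can be sketched directly. The mentions and chains below are hypothetical, and the sketch stands in for the paper's SVM and MLN models: it only shows how transitivity over coreference links propagates an E-A relation beyond sentence boundaries.

      # Hypothetical intra-sentence E-A relations: (event, argument mention).
      intra_ea = [("regulate(s1)", "it(s1)")]

      # Hypothetical coreference chains: sets of mentions of the same entity.
      coref_chains = [{"it(s1)", "IL-2 gene(s0)"}]

      def expand_cross_sentence(intra_ea, coref_chains):
          """Propagate each E-A relation to all coreferent mentions."""
          expanded = set(intra_ea)
          for event, arg in intra_ea:
              for chain in coref_chains:
                  if arg in chain:
                      expanded |= {(event, m) for m in chain}
          return expanded

      print(expand_cross_sentence(intra_ea, coref_chains))
      # now also contains ("regulate(s1)", "IL-2 gene(s0)"), a cross-sentence E-A
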
    Download PDF (410K)
  • Takashi Shirai, Junji Yano, Shigeki Nishimura, Kouji Kagawa, Tetsuo Mo ...
    2011 Volume 26 Issue 2 Pages 324-329
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    Traffic jams are one of the critical issues of urban life, causing many social problems such as time loss, economic loss, and environmental pollution. There are two typical approaches to relieving traffic jams: improving car navigation systems and controlling traffic lights. We focus on the control of traffic lights. Existing traffic light control systems are basically centralized, and therefore lack robustness and scalability: if the central computer breaks down, all traffic lights are affected. In this paper, we propose a new traffic light control system based on a multi-agent model. The offset value, one of the main traffic light parameters, is controlled using only local information, and a green wave is formed through the coordination of the intersection agents.
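
    A minimal sketch of decentralized offset coordination, assuming a simple neighbor-following update rule (the paper's actual rule may differ): each intersection agent adjusts its own offset using only its upstream neighbor's offset and the travel time between them, and aligned offsets emerge as a green wave.

      CYCLE = 120.0  # common signal cycle length in seconds (assumed)

      def update_offset(own, upstream, travel_time, rate=0.3):
          """Nudge this agent's offset toward upstream offset + travel time."""
          target = (upstream + travel_time) % CYCLE
          error = (target - own + CYCLE / 2) % CYCLE - CYCLE / 2  # shortest direction
          return (own + rate * error) % CYCLE

      # Four intersections on an arterial road, 30 s travel time between them.
      offsets = [0.0, 70.0, 10.0, 100.0]
      for _ in range(50):  # repeated rounds of purely local coordination
          for i in range(1, len(offsets)):
              offsets[i] = update_offset(offsets[i], offsets[i - 1], 30.0)

      print([round(o, 1) for o in offsets])  # approaches [0, 30, 60, 90]: a green wave
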
    Download PDF (632K)
  • Tohgoroh Matsui
    2011 Volume 26 Issue 2 Pages 330-334
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    This paper describes a reinforcement learning framework based on compound returns, called compound reinforcement learning. Compound reinforcement learning maximizes the compound return in returns-based MDPs. We also describe the compound Q-learning algorithm. We present experimental results using an illustrative example, the two-armed bandit.
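
    The objective can be made concrete with a small sketch. Since the compound return over T steps is the product of (1 + r_t), maximizing its growth rate is equivalent to maximizing the sum of log(1 + r_t). The sketch below is not necessarily the paper's exact update rule: it runs ordinary Q-learning with log-return rewards on a two-armed bandit, where the low-variance arm wins even though both arms have the same mean return.

      import math, random

      random.seed(0)
      ALPHA, EPS, STEPS = 0.1, 0.1, 20000

      def pull(arm):
          """Two hypothetical arms: same mean return, different variance."""
          return random.gauss(0.02, 0.05) if arm == 0 else random.gauss(0.02, 0.30)

      Q = [0.0, 0.0]
      for _ in range(STEPS):
          arm = random.randrange(2) if random.random() < EPS else Q.index(max(Q))
          r = max(pull(arm), -0.99)        # clip to keep 1 + r positive
          g = math.log(1.0 + r)            # log-return = growth-rate reward
          Q[arm] += ALPHA * (g - Q[arm])   # bandit case: no next-state bootstrap

      print([round(q, 4) for q in Q])  # the low-variance arm has the higher estimate
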
    Download PDF (337K)
  • Noriyuki Morichika, Masahiro Hamasaki, Akihiro Kameda, Ikki Ohmukai, H ...
    2011 Volume 26 Issue 2 Pages 335-340
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    In this paper, we describe our approach to information extraction from documents, which combines supervised machine learning with a collective intelligence approach. The combination is intended to compensate for the weaknesses of each method, since each has its own merits and demerits. It provides users with various ways to input data to improve information extraction: users can add not only supervised data but also rules that extract values for a set of attributes. Offering various input channels allows many users to contribute large amounts of data for quality improvement, while machine learning reduces the noise in the user-supplied data. We implemented the approach in an event-information extraction system, and the experimental results show its effectiveness in both correctness and convenience.
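
    A minimal sketch of the combination, with hypothetical rules and a scoring stub in place of the paper's system: user-contributed extraction rules propose candidate attribute values, and a learned confidence score filters out noisy ones.

      import re

      # Hypothetical user-contributed extraction rules for event information.
      user_rules = {
          "date":  re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
          "venue": re.compile(r"at ([A-Z][\w ]+ Hall)"),
      }

      def classifier_confidence(attribute, value, text):
          """Stub standing in for a trained model scoring (attribute, value)."""
          return 0.9 if value in text else 0.1

      def extract(text, threshold=0.5):
          results = {}
          for attr, rule in user_rules.items():      # collective-intelligence rules
              m = rule.search(text)
              if m and classifier_confidence(attr, m.group(1), text) >= threshold:
                  results[attr] = m.group(1)         # keep only confident values
          return results

      print(extract("The concert is on 2011-02-16 at Symphony Hall in Osaka."))
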
    Download PDF (608K)
  • Toshiki KAWABATA, Fumiyoshi KOBAYASHI, Kazunori UEDA
    2011 Volume 26 Issue 2 Pages 341-346
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    Model checking is an exhaustive-search method of verification. Automata-based LTL model checking reduces verification to an accepting-cycle search problem. Model checking is prone to state-space explosion, so parallel processing is a promising approach. However, the optimal sequential algorithm is based on post-order depth-first search and is difficult to parallelize. Alternative parallel algorithms have been proposed, and OWCTY_reversed is one of them. OWCTY_reversed is known to be a stable and fast algorithm for models that accept some words, but it does not exploit the characteristics of the automata used in LTL model checking. We propose a new algorithm named SCC-OWCTY that exploits the SCCs (strongly connected components) of property automata. The algorithm removes states that are judged not to form accepting cycles faster than OWCTY_reversed does. We experimentally compared the two algorithms using DiVinE, and confirmed improvements in both performance and scalability.
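
    The elimination idea can be sketched on an explicit graph. This simplified loop is OWCTY-style, not the authors' SCC-OWCTY (which additionally exploits the SCCs of the property automaton): it alternately keeps states reachable from accepting states and strips states with no predecessor; a nonempty fixpoint witnesses an accepting cycle.

      def has_accepting_cycle(states, edges, accepting):
          """True iff the graph has a cycle through an accepting state."""
          s = set(states)
          while True:
              # Reachability phase: keep states reachable from accepting states.
              stack = [a for a in accepting if a in s]
              reach = set()
              while stack:
                  u = stack.pop()
                  if u not in reach:
                      reach.add(u)
                      stack += [v for (x, v) in edges if x == u and v in s]
              # Elimination phase: iteratively drop states with no predecessor.
              while True:
                  keep = {v for (u, v) in edges if u in reach and v in reach}
                  if keep == reach:
                      break
                  reach = keep
              if reach == s:       # fixpoint: nonempty iff an accepting cycle
                  return bool(s)
              s = reach

      # Toy product graphs: the first has the accepting cycle 1 -> 2 -> 1.
      print(has_accepting_cycle({0, 1, 2}, {(0, 1), (1, 2), (2, 1)}, {1}))  # True
      print(has_accepting_cycle({0, 1, 2}, {(0, 1), (1, 2)}, {1}))          # False
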
    Download PDF (335K)
  • Toward Early Detection of Cognitive Impairment in Elderly Using Speech Prosody
    Shohei Kato, Yuta Suzuki, Akiko Kobayashi, Toshiaki Kojima, Hidenori I ...
    2011 Volume 26 Issue 2 Pages 347-352
    Published: 2011
    Released on J-STAGE: January 06, 2011
    JOURNAL FREE ACCESS
    This paper presents a new trial approach to early detection of cognitive impairment in the elderly using speech sound analysis and multivariate statistical techniques. We focus on prosodic features of speech sound. 115 Japanese subjects (32 males and 83 females between the ages of 38 and 99) participated in this study. We collected speech sound from a few dialogue segments of the HDS-R (Revised Hasegawa's Dementia Scale) examination. The segments correspond to the answers to questions on time orientation and backward counting of numbers. First, 130 prosodic features were extracted from each speech sample. These prosodic features consist of spectral and pitch features (53), formant features (56), intensity features (19), and speech rate and response time (2). Second, these features were refined by principal component analysis and/or feature selection. Finally, we calculated a speech prosody-based cognitive impairment rating (SPCIR) by multiple linear regression analysis. The results indicate a moderately significant correlation between the HDS-R score and a combination of several selected prosodic features. An adjusted coefficient of determination of R^2 = 0.50 suggests that prosody-based speech sound analysis has the potential to screen elderly people for cognitive impairment.
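
    A minimal sketch of the statistical pipeline on synthetic data (the features and scores below are random stand-ins, not the study's): PCA compresses the 130 prosodic features, multiple linear regression maps the components to a cognitive score, and the adjusted R^2 penalizes model size, matching the kind of statistic the paper reports.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, k = 115, 130, 5               # subjects, prosodic features, components
      X = rng.normal(size=(n, p))         # stand-in for extracted prosodic features
      y = X[:, :3] @ np.array([1.5, -1.0, 0.5]) + rng.normal(scale=2.0, size=n)

      # PCA via SVD on centered features; keep the top k principal components.
      Xc = X - X.mean(axis=0)
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
      Z = Xc @ Vt[:k].T

      # Multiple linear regression of the score on the components.
      A = np.hstack([Z, np.ones((n, 1))])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      resid = y - A @ coef
      r2 = 1 - resid.var() / y.var()
      adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # penalize the model size
      print(f"R^2 = {r2:.2f}, adjusted R^2 = {adj_r2:.2f}")
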
    Download PDF (368K)
Regular
Original Paper
  • Kotaro Funakoshi, Kazuki Kobayashi, Mikio Nakano, Takanori Komatsu, Se ...
    2011 Volume 26 Issue 2 Pages 353-365
    Published: 2011
    Released on J-STAGE: January 14, 2011
    JOURNAL FREE ACCESS
    We argue that task-oriented spoken dialogue systems and communication robots do not need to respond quickly verbally as long as they respond quickly non-verbally, showing their internal states by means of an artificial subtle expression. This paper describes an experiment whose results support this point. In the experiment, 48 participants engaged in reservation tasks with a spoken dialogue system coupled with an interface robot using a blinking-light expression. The blinking-light expression is designed as an artificial subtle expression that intuitively notifies a user of the robot's internal states (such as processing), in order to reduce speech collisions, which arise from turn-taking failures due to end-of-turn misdetection. Speech collisions harm smooth speech communication and degrade system usability. Two experimental factors were set up: the blinking-light factor (with or without a blinking light) and the reply-speed factor (moderate or slow reply speed), resulting in four experimental conditions. The results suggest that both the slow reply speed and the blinking-light expression can reduce speech collisions and improve the user's impression of the system. Meanwhile, contrary to expectation, no degradation of evaluation due to the slow reply speed was found.
    Download PDF (522K)
  • Shohei Tanaka, Naoaki Okazaki, Mitsuru Ishizuka
    2011 Volume 26 Issue 2 Pages 366-375
    Published: 2011
    Released on J-STAGE: January 25, 2011
    JOURNAL FREE ACCESS
    This paper presents a novel method for acquiring a set of query patterns that can retrieve documents containing important information about an entity. Given an existing Wikipedia category that should contain the entity, we first extract all entities that are the subjects of the articles in the category. From these articles, we extract triplets of the form (subject entity, query pattern, concept) that are expected to appear in the search results of the query patterns. We then select a small set of query patterns such that, when search queries are formulated with these patterns, the overall precision and coverage of the information returned from the Web are optimized. We model this optimization problem as a Weighted Maximum Satisfiability (Weighted Max-SAT) problem. Experimental results demonstrate that the proposed method outperformed methods based on statistical measures such as frequency and pointwise mutual information (PMI), which are widely used in relation extraction.
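
    To make the optimization concrete, here is a greedy weighted-coverage sketch standing in for the paper's Weighted Max-SAT formulation; the triplets and the per-pattern cost are hypothetical. Each pattern is rewarded for the (entity, concept) facts it newly covers and penalized for being selected, mirroring the precision/coverage trade-off.

      triplets = [            # (subject entity, query pattern, concept)
          ("Haydn", "X was born in Y", "Rohrau"),
          ("Mozart", "X was born in Y", "Salzburg"),
          ("Mozart", "X composed Y", "Requiem"),
          ("Beethoven", "X composed Y", "Fidelio"),
          ("Beethoven", "Y , composed by X", "Fidelio"),
      ]
      pattern_cost = 0.5      # penalty per selected pattern (assumed weight)

      covers = {}
      for subj, pat, con in triplets:
          covers.setdefault(pat, set()).add((subj, con))

      selected, covered = [], set()
      while True:
          best, best_gain = None, 0.0
          for pat, facts in covers.items():
              gain = len(facts - covered) - pattern_cost
              if pat not in selected and gain > best_gain:
                  best, best_gain = pat, gain
          if best is None:      # no pattern adds more value than it costs
              break
          selected.append(best)
          covered |= covers[best]

      print(selected)  # a few patterns that together cover most facts
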
    Download PDF (292K)
  • Rintaro Miyazaki, Hironobu Tsukahara, Jun Nishimura, Naoto Maeda, Tats ...
    2011 Volume 26 Issue 2 Pages 376-386
    Published: 2011
    Released on J-STAGE: February 08, 2011
    JOURNAL FREE ACCESS
    In order to achieve faceted search in net auction systems, several researchers have dealt with the automated extraction of attributes and their values from the descriptions of exhibits. In this paper, we propose a two-stage method to improve the performance of the extraction. The proposed method is based on the following two assumptions: 1) identifying whether or not each sentence includes the target information is easier than extracting the target information from raw plain text; 2) extracting the target information from the sentences selected in the first stage is easier than extracting it from the entire raw plain text. In the first stage, the method selects the sentences in a description that are judged to contain attributes and/or values. In this stage, each sentence is represented as a bag-of-words feature vector and is labeled as selected or not by an SVM classifier. In the second stage, the extraction of attributes and values is performed on the cleaned text, which no longer contains the parts of the description irrelevant to the exhibit, such as descriptions of postage or of other exhibits. For this stage, we adopt a sequential labeling method similar to named entity recognizers. The experimental results show that the proposed method improves both the precision and the recall of attribute-value extraction compared with using only the second-stage extraction. This fact supports our assumptions.
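
    A minimal sketch of the first stage, with hypothetical training sentences (stage two, the sequence labeler, is omitted): each sentence becomes a bag-of-words vector and an SVM keeps or discards it.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.svm import LinearSVC

      train_sentences = [
          "Color: navy blue, size M",            # mentions attributes/values
          "Brand new, made of genuine leather",  # mentions attributes/values
          "Shipping is 500 yen nationwide",      # irrelevant: postage
          "Please also see my other listings",   # irrelevant: other exhibits
      ]
      labels = [1, 1, 0, 0]  # 1 = pass to stage two, 0 = discard

      vec = CountVectorizer()
      clf = LinearSVC().fit(vec.fit_transform(train_sentences), labels)

      description = ["Material: 100% cotton", "Shipping is 600 yen"]
      flags = clf.predict(vec.transform(description))
      cleaned = [s for s, f in zip(description, flags) if f == 1]
      print(cleaned)  # sentences passed on to the stage-two sequence labeler
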
    Download PDF (664K)
  • Mamoru Ohta, Kouji Kozaki, Riichiro Mizoguchi
    2011 Volume 26 Issue 2 Pages 387-402
    Published: 2011
    Released on J-STAGE: February 16, 2011
    JOURNAL FREE ACCESS
    Recently, ontology has attracted attention as an important technology for knowledge infrastructure and semantic processing, and many ontologies have been constructed in various domains. At the same time, ontological theories, technologies, and ontology engineering tools have been developed not only by researchers but also by practitioners. We too have been investigating ontological theories and have developed a computer environment for building and using ontologies named "Hozo", which is based on the ontological theory of roles. Hozo has been used for practical ontology construction in various domains such as clinical medicine, bioinformatics, and environmental engineering. These practices of ontology development have raised many issues of ontology construction and utilization from both theoretical and technical points of view. We have solved them by extending our theories and the functions of the system; that is, we have refined the usability and reliability of Hozo through these practical experiences. In this paper, we focus on theoretical issues of ontology construction and discuss some extensions of ontological theories concerning the representation of multiple inheritance and instances. First, we introduce the IS-A relation, which is weaker than the is-a relation in the sense that it prohibits inheritance of identity, and we allow users to represent multiple inheritance using this IS-A relation instead of the usual, stronger is-a relation. We then investigate the validity of representations using IS-A through an ontological analysis of three of its utilization patterns. Next, we introduce two new kinds of instance representation: the "simplified instance representation" and the "#-(species) operator". Users can use these representations to refer to instances in concept (class) definitions. These theoretical extensions contribute to the well-organized construction of ontologies in practical ontology engineering.
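
    An illustrative sketch of the is-a/IS-A distinction, not Hozo's implementation (the concept names and criteria are invented): strong is-a links propagate the parent's identity criterion, while the weaker IS-A links, being the ones permitted for multiple inheritance, do not.

      is_a = {"DomesticCat": "Cat"}        # strong link: inherits identity
      IS_A = {"PetCat": ["Cat", "Pet"]}    # weak links: multiple inheritance allowed
      identity_criterion = {"Cat": "biological individual", "Pet": "ownership record"}

      def inherited_identity(concept):
          """Follow only strong is-a links when resolving the identity criterion."""
          while concept not in identity_criterion:
              if concept in is_a:
                  concept = is_a[concept]
              else:
                  return None  # IS-A parents do not supply an identity criterion
          return identity_criterion[concept]

      print(inherited_identity("DomesticCat"))  # 'biological individual' via is-a
      print(inherited_identity("PetCat"))       # None: IS-A blocks identity inheritance
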
    Download PDF (1064K)
  • Mamoru Ohta, Kouji Kozaki, Riichiro Mizoguchi
    2011 Volume 26 Issue 2 Pages 403-418
    Published: 2011
    Released on J-STAGE: February 16, 2011
    JOURNAL FREE ACCESS
    Through the spread of ontological engineering, many technologies and software tools for ontology construction have been developed, and many ontologies have been constructed with them in various domains. Against this background, we have been developing an ontology engineering environment, "Hozo", and using the tool to construct many ontologies in domains such as medical science, bioinformatics, nanotechnology, education, and environmental engineering. Through these practical experiences, we have noticed many issues concerning ontology construction and have solved them by enhancing the ontological theories and technical functions of Hozo. The improvements amount to 72 items covering both theoretical and practical issues. This paper focuses on the practical issues and presents the improved functions of Hozo. They are grouped into three kinds: 1) issues in the ontology design/construction phase, 2) issues in the ontology utilization phase, and 3) issues in the ontology refinement phase. We discuss the requirements for solving the practical issues in each phase and demonstrate the improved functions of Hozo. We then consider how those functions have been useful for ontology construction, based on their actual use in six ontology development projects and an evaluation experiment on the usability of Hozo. Through these extensions, the usability and reliability of Hozo have been improved.
    Download PDF (1081K)
  • Osami Yamamoto, Hiroshi Satone
    2011 Volume 26 Issue 2 Pages 419-426
    Published: 2011
    Released on J-STAGE: February 22, 2011
    JOURNAL FREE ACCESS
    The fifteen puzzle is a sliding puzzle with fifteen pieces on which the numbers 1 to 15 are printed. Using the IDA* algorithm with an admissible evaluation function, we can obtain an optimal solution of the puzzle. The performance of the algorithm depends on the evaluation function. The simplest evaluation function is the Manhattan evaluation function, whose value is the sum, over all pieces, of the Manhattan distance from each piece's position to its position in the goal configuration. In this paper, we propose an evaluation function whose values are greater than or equal to those of the Manhattan evaluation function. Our evaluation function refers to an approximate database of the gap-2n set. The database is computed beforehand, like pattern databases, but it is completely different from pattern databases: whether a configuration of pieces belongs to the set is checked against the database. Using an evaluation function based on the gap-8 set, we were able to reduce the number of search nodes explored by IDA* to about 2.5×10^-4 times that of the Manhattan evaluation function on average. We also show that by combining the gap-8 evaluation function with an evaluation function based on additive pattern databases of disjoint seven- and eight-piece patterns, we were able to reduce the number of search nodes by a factor of about 53 compared with the evaluation function based only on the additive pattern databases.
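
    For reference, a minimal IDA* sketch with the Manhattan evaluation function; the paper's gap-2n database would supply a larger admissible estimate in place of manhattan() here. The start position is a hypothetical 3-move instance.

      GOAL = tuple(range(1, 16)) + (0,)   # 0 denotes the blank

      def manhattan(state):
          """Sum over pieces of |dx| + |dy| to each piece's goal cell."""
          return sum(abs(i // 4 - (v - 1) // 4) + abs(i % 4 - (v - 1) % 4)
                     for i, v in enumerate(state) if v != 0)

      def neighbors(state):
          b = state.index(0)
          for d in (-4, 4, -1, 1):
              n = b + d
              if 0 <= n < 16 and not (d in (-1, 1) and b // 4 != n // 4):
                  s = list(state)
                  s[b], s[n] = s[n], s[b]
                  yield tuple(s)

      def ida_star(start):
          def dfs(state, g, bound, prev):
              f = g + manhattan(state)    # admissible: f never overestimates
              if f > bound:
                  return f
              if state == GOAL:
                  return -g               # negative flags success; |value| = cost
              nxt = float("inf")
              for s in neighbors(state):
                  if s != prev:           # avoid undoing the previous move
                      t = dfs(s, g + 1, bound, state)
                      if t <= 0:
                          return t
                      nxt = min(nxt, t)
              return nxt
          bound = manhattan(start)
          while True:
              t = dfs(start, 0, bound, None)
              if t <= 0:
                  return -t               # optimal solution length
              bound = t                   # deepen to the next f bound

      start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 0, 13, 14, 11, 15)
      print(ida_star(start))  # 3 moves
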
    Download PDF (523K)