IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E100.D, Issue 4
Displaying 1-39 of 39 articles from this issue
Special Section on Award-winning Papers
  • Norimichi UKITA
    2017 Volume E100.D Issue 4 Pages 599
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (69K)
  • Go IRIE, Hiroyuki ARAI, Yukinobu TANIGUCHI
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 600-609
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This paper presents an unsupervised approach to feature binary coding for efficient semantic image retrieval. Although the majority of existing methods aim to preserve neighborhood structures of the feature space, semantically similar images are not always neighbors in that space; rather, they are distributed on non-linear low-dimensional manifolds. Moreover, images are rarely alone on the Internet and are often surrounded by text data such as tags, attributes, and captions, which tend to carry rich semantic information about the images. On the basis of these observations, the approach presented in this paper learns binary codes for semantic image retrieval from multimodal information sources while preserving the essential low-dimensional structures of the data distributions in the Hamming space. Specifically, after finding the low-dimensional structures of the data with an unsupervised sparse coding technique, our approach learns a set of linear projections for binary coding by solving an optimization problem designed to jointly preserve, as far as possible, the extracted data structures and the multimodal correlations between images and texts in the Hamming space. We show that the joint optimization problem can readily be transformed into a generalized eigenproblem that can be solved efficiently. Extensive experiments demonstrate that our method yields significant performance gains over several existing methods.
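The retrieval side of such a pipeline is simple once the projections are learned: codes are obtained by thresholding linear projections and compared by Hamming distance. A minimal sketch (the projection matrix `W` below is a toy stand-in, not one learned by the paper's optimization):

```python
def binarize(x, W):
    """Map a real-valued feature vector x to a binary code via sign(W x).

    W stands in for the set of linear projections the method learns;
    here it is just an arbitrary list of projection vectors."""
    return tuple(1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) >= 0 else 0
                 for w in W)

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(u != v for u, v in zip(a, b))

def retrieve(query_code, database_codes, top_k=2):
    """Rank database items by Hamming distance to the query code."""
    ranked = sorted(range(len(database_codes)),
                    key=lambda i: hamming(query_code, database_codes[i]))
    return ranked[:top_k]

# Toy 2-bit projections over 3-dimensional features (illustrative only).
W = [[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]]
db = [binarize(x, W) for x in ([3, 1, 0], [0, 2, 5], [4, 0, 1])]
q = binarize([3, 0, 1], W)
```

Nearest-neighbor search in the Hamming space then reduces to cheap bit comparisons, which is what makes binary codes attractive for large-scale retrieval.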

    Download PDF (587K)
  • Yasuhiro FUJIWARA, Makoto NAKATSUJI, Hiroaki SHIOKAWA, Takeshi MISHIMA ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 610-620
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Personalized PageRank (PPR) is a typical similarity metric between nodes in a graph, and node searches based on PPR are widely used. Many applications involve graphs that change dynamically, in which case it is desirable to perform ad hoc searches based on PPR, i.e., searches whose parameters or underlying graphs vary. However, as graph size increases, the computation cost of an ad hoc search can become excessive. In this paper, we propose a method called Castanet that offers fast ad hoc PPR searches. The proposed method features (1) iterative estimation of upper and lower bounds on PPR scores, and (2) dynamic pruning of nodes that are not needed to obtain the search result. Experiments confirm that the proposed method offers faster ad hoc PPR searches than existing methods.
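Castanet's bound estimation and pruning are specific to the paper, but the PPR scores those bounds bracket are the fixed point of a plain restart-based power iteration. A background sketch of that iteration (graph and parameters illustrative):

```python
def personalized_pagerank(graph, source, alpha=0.15, iters=50):
    """Plain power-iteration PPR: with probability alpha restart at the
    source node, otherwise follow a random out-edge.  Castanet brackets
    these scores with iteratively tightened upper/lower bounds so that
    low-scoring nodes can be pruned early; this sketch computes only the
    exact scores the bounds converge to."""
    nodes = list(graph)
    score = {v: 1.0 if v == source else 0.0 for v in nodes}
    for _ in range(iters):
        nxt = {v: alpha if v == source else 0.0 for v in nodes}
        for v in nodes:
            out = graph[v]
            if not out:
                continue
            share = (1.0 - alpha) * score[v] / len(out)
            for u in out:
                nxt[u] += share
        score = nxt
    return score

# Tiny directed graph: a -> {b, c}, b -> c, c -> a.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ppr = personalized_pagerank(g, "a")
```

With restart at "a", the scores decay with distance from the source, which is exactly the locality the pruning in the paper exploits.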

    Download PDF (512K)
  • Shigeki MATSUDA, Teruaki HAYASHI, Yutaka ASHIKARI, Yoshinori SHIGA, Hi ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 621-632
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This study introduces large-scale field experiments of VoiceTra, the world's first speech-to-speech multilingual translation application for smartphones. Approximately 10 million input utterances have been collected since the experiments commenced, and the usage of the collected data is analyzed and discussed. The study makes several important contributions. First, it explains the system configuration, the communication protocol between clients and servers, and the details of the multilingual automatic speech recognition, multilingual machine translation, and multilingual speech synthesis subsystems. Second, it demonstrates the effects of mid-term system updates that use the collected data to improve an acoustic model, a language model, and a dictionary. Third, it analyzes system usage.

    Download PDF (1274K)
  • Motoki AMAGASAKI, Yuki NISHITANI, Kazuki INOUE, Masahiro IIDA, Morihir ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 633-644
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Fault tolerance is an important feature of the system LSIs used in reliability-critical systems. Although redundancy techniques are generally used to provide fault tolerance, these techniques incur significant hardware costs. FPGAs, in contrast, can provide high reliability easily thanks to their reconfigurability: even if faults occur, the implemented circuit can perform correctly by being reconfigured onto a fault-free region of the FPGA. In this paper, we examine an FPGA-IP core embedded in an SoC and introduce a fault-tolerance technology based on fault detection and recovery as a CAD-level approach. To detect fault positions, we add a route to the manufacturing test method proposed in earlier research and identify faulty areas. Furthermore, we perform fault recovery at the logic-tile and multiplexer levels using reconfiguration. The evaluation results for the FPGA-IP core embedded in a system LSI demonstrate that it was able to completely identify and avoid faulty areas for faults in the routing area.

    Download PDF (2741K)
  • Gou KOUTAKI, Keiichi UCHIMURA
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 645-654
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In this paper, we propose the application of principal component analysis (PCA) to scale-spaces. PCA is a standard method in computer vision. Because the translation of an input image into scale-space is a continuous operation, it requires extending conventional finite matrix-based PCA to an infinite number of dimensions. Here, we use spectral theory to resolve this infinite eigenvalue problem through integration, and we propose an approximate solution based on polynomial equations. To clarify its eigensolutions, we apply spectral decomposition to Gaussian scale-space and scale-normalized Laplacian of Gaussian (sLoG) space. As an application of the proposed method, we introduce a way to generate Gaussian blur images and sLoG images, demonstrating that images at an arbitrary scale can be computed with very high accuracy through simple linear combination. Furthermore, to make scale-space filtering efficient, we approximate the basis filter set using Gaussian lobes, obtaining XY-separable filters. As a more practical example, we propose a new Scale-Invariant Feature Transform (SIFT) detector.

    Download PDF (3182K)
  • Masayuki SUZUKI, Ryo KUROIWA, Keisuke INNAMI, Shumpei KOBAYASHI, Shiny ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 655-661
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    When synthesizing speech from Japanese text, correct assignment of accent nuclei for input text with arbitrary content is indispensable for obtaining natural-sounding synthetic speech. A phenomenon called accent sandhi occurs in utterances of Japanese: when a word is uttered in a sentence, its accent nucleus may change depending on the context of the preceding and succeeding words. This paper describes a statistical method for automatically predicting the accent nucleus changes due to accent sandhi. First, as the basis of the research, a database of Japanese text was constructed with labels of accent phrase boundaries and accent nucleus positions as uttered in sentences. A single native speaker of Tokyo-dialect Japanese annotated all the labels for 6,344 Japanese sentences. A conditional-random-field-based method was then developed using this database to predict accent phrase boundaries and accent nuclei. The proposed method predicted accent nucleus positions for accent phrases with 94.66% accuracy, clearly surpassing the 87.48% accuracy of our rule-based method. A listening experiment was also conducted comparing synthetic speech obtained with the proposed method against that obtained with the rule-based method. The results show that our method significantly improved the naturalness of the synthetic speech.

    Download PDF (1076K)
  • Nobuaki MINEMATSU, Ibuki NAKAMURA, Masayuki SUZUKI, Hiroko HIRANO, Chi ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 4 Pages 662-669
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This paper develops an online, freely available framework to aid the teaching and learning of the prosodic control of Tokyo Japanese: how to generate adequate word accent and phrase intonation. The framework is called OJAD (Online Japanese Accent Dictionary) [1] and provides three features. 1) Visual, auditory, systematic, and comprehensive illustration of the accent change (accent sandhi) patterns of verbs and adjectives; only the changes caused by the twelve fundamental conjugations are covered. 2) Visual illustration of the accent pattern of a given verbal expression, i.e. a combination of a verb and its postpositional auxiliary words. 3) Visual illustration of the pitch pattern of any given sentence and the expected positions of accent nuclei in it. The third feature is implemented using an accent change prediction module that we developed for Japanese text-to-speech (TTS) synthesis [2],[3]. Experiments show that accent nucleus assignment to given texts by the proposed framework is much more accurate than assignment by native speakers. Subjective and objective assessments by teachers and learners show the high pedagogical effectiveness of the developed framework.

    Download PDF (3317K)
Special Section on Data Engineering and Information Management
  • Makoto ONIZUKA
    2017 Volume E100.D Issue 4 Pages 670
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (77K)
  • Yasufumi TAKAMA, Wataru SASAKI, Takafumi OKUMURA, Chi-Chih YU, Lieu-He ...
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 671-681
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This paper proposes a walking route recommender system that aims to continuously support users in taking walks for health promotion. In recent years, walking has become popular not only with the elderly but with people of all ages as one of the easiest forms of health promotion, and from this viewpoint it is desirable to take a walk as daily exercise. However, walking is a very simple activity, which makes it difficult for people to maintain their motivation. Although using an activity monitor can improve motivation, it forces users to manage their activities by themselves. The proposed system addresses this problem by recommending a walking route that consumes a target number of calories. A system intended to support daily exercise over a long period should also consider the case in which a user does not follow the recommended route, which causes a gap between consumed and target calories; we expect this problem to become serious as a user gradually gets bored with walking. To solve it, the proposed method implicitly manages calories on a monthly basis and recommends walking routes designed to keep users from getting bored. The effectiveness of the recommendation algorithm is evaluated with agent simulation. As another important factor in walking support, this paper also proposes a navigation interface that indicates the direction to the next visiting point without using a map. Because users do not have to focus continuously on the interface, it not only improves their safety but also gives them room to enjoy the landscape. The interface is evaluated in an experiment with test participants.
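The monthly calorie management can be pictured as follows: each day's target absorbs the accumulated surplus or deficit, and the route closest to that target is recommended. A sketch with plain proportional arithmetic (the route list and figures are illustrative, and the paper manages calories implicitly rather than with this explicit formula):

```python
def recommend_route(routes, monthly_target, consumed_so_far, day,
                    days_in_month=30):
    """Pick the route whose calorie consumption best matches today's
    target, where the target spreads the remaining monthly calories
    evenly over the remaining days.  Falling behind schedule therefore
    pushes the recommendation toward longer routes."""
    remaining_days = days_in_month - day + 1
    today_target = (monthly_target - consumed_so_far) / remaining_days
    return min(routes, key=lambda r: abs(r["calories"] - today_target))

# Hypothetical routes with per-walk calorie consumption.
routes = [{"name": "park loop", "calories": 120},
          {"name": "riverside", "calories": 200},
          {"name": "hill course", "calories": 320}]
```

For example, a user on track at the start of the month gets a moderate route, while a user far behind near month's end gets the most demanding one.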

    Download PDF (2355K)
  • Saneyasu YAMAGUCHI, Yuki MORIMITSU
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 682-692
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The load on an Internet service changes markedly from hour to hour, so service systems are expected to scale dynamically with load. A KVS (key-value store) is a scalable DBMS (database management system) widely used in large-scale Internet services. In this paper, we focus on Cassandra, a popular open-source KVS implementation, and discuss methods for improving its dynamic scaling performance. First, we evaluate node-joining time, the time needed to finish adding a node to a running KVS system, and show that its bottleneck is disk I/O. Second, we analyze disk accesses in the nodes and show that a few heavily accessed files cause a large number of disk accesses. Third, we propose two methods for improving the elasticity of Cassandra, i.e., for decreasing node addition and removal time. One method reduces disk accesses significantly by keeping the heavily accessed files in the page cache; the other optimizes I/O scheduler behavior. Lastly, we evaluate the elasticity achieved by our methods. Our experimental results demonstrate that the methods improve both the scaling-up and scaling-down performance of Cassandra.

    Download PDF (2196K)
  • Hideaki KIM, Noriko TAKAYA, Hiroshi SAWADA
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 693-703
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Improvements in information technology have made it easier for companies to communicate with their customers, raising hopes for a scheme that can estimate when customers will want to make purchases. Although a number of models have been developed to estimate time-varying purchase probability, they rest on very restrictive assumptions such as dependence only on the preceding purchase event and discrete-time effects of covariates. Our preliminary analysis of real-world data finds these assumptions invalid: self-exciting behavior, as well as marketing stimuli and preceding-purchase dependence, should be examined as possible factors influencing purchase probability. In this paper, employing the novel idea of hierarchical time rescaling, we propose a tractable but highly flexible model that can meld various types of intrinsic history dependency and marketing stimuli in a continuous-time setting. Using the proposed model, which incorporates all three factors, we analyze actual data and show that our model can precisely track the temporal dynamics of purchase probability at the level of individuals. This enables timely, individualized marketing actions such as advertising and recommendations, leading to a profitable relationship with each customer.
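The self-exciting factor identified in the preliminary analysis can be illustrated with a Hawkes-style intensity: a base purchase rate plus an exponentially decaying kick after each past purchase. This shows only that one ingredient, not the paper's hierarchical time-rescaling model (parameter values are illustrative):

```python
import math

def intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """Self-exciting purchase intensity at time t: base rate mu plus an
    exponentially decaying contribution alpha * exp(-beta * (t - ti))
    from each past purchase time ti < t.  A recent purchase therefore
    temporarily raises the probability of another one."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# Two past purchases at t = 1.0 and t = 2.0.
events = [1.0, 2.0]
```

Right after a purchase the intensity is elevated and then relaxes back toward the base rate, which is the "self-exciting behavior" the abstract refers to.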

    Download PDF (1843K)
  • Seungtae HONG, Kyongseok PARK, Chae-Deok LIM, Jae-Woo CHANG
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 704-717
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    To analyze large-scale data efficiently, Hadoop, one of the most popular MapReduce frameworks, has been studied actively. Meanwhile, most large-scale data analysis applications, e.g., data clustering, must run the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives each result from a single pass of map and reduce functions. To solve this problem, we propose a new, efficient resource management framework for iterative MapReduce processing in large-scale data analysis. We first design an iterative job state machine for managing iterative MapReduce jobs. Second, we propose an invariant-data caching mechanism that reduces the I/O cost of data accesses. Third, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourth, we devise a stop-condition check mechanism that prevents unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.
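The stop-condition check amounts to re-running the same step until the result stops changing, instead of a fixed number of rounds. A generic driver sketch (the toy step below is a Newton iteration standing in for a real map-reduce round):

```python
def iterate_until_converged(step, state, tol=1e-6, max_iters=100):
    """Re-apply the same step function until successive results differ
    by less than tol, mirroring a framework-level stop-condition check
    that avoids scheduling unnecessary iterations.  Returns the final
    state and the number of iterations actually run."""
    for i in range(max_iters):
        new_state = step(state)
        if abs(new_state - state) < tol:
            return new_state, i + 1
        state = new_state
    return state, max_iters

# Toy "iterative job": Newton's method for sqrt(2) as the repeated step.
result, iters = iterate_until_converged(lambda x: 0.5 * (x + 2.0 / x), 1.0)
```

In the framework, the same check is done between MapReduce rounds, so convergence detected after a few rounds saves all the remaining scheduled ones.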

    Download PDF (5668K)
  • Kento SUGIURA, Yoshiharu ISHIKAWA, Yuya SASAKI
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 718-729
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    As sensor and machine learning technologies have advanced, detecting patterns in probabilistic data streams has become increasingly important. In this paper, we focus on complex event processing based on pattern matching. When pattern matching is applied to probabilistic data streams, numerous matches may be detected in the same time interval because of the uncertainty of the data. Although existing methods distinguish between such matches, they may derive inappropriate results when several matches correspond to one real-world event that occurred during the interval. We therefore propose two grouping methods for matches, whose output groups indicate the occurrence of complex events during the given time intervals. We first define groups based on temporal overlap and propose two grouping algorithms, introducing the notions of complete overlap and single overlap. We then propose an efficient approach for calculating the occurrence probabilities of groups using deterministic finite automata generated from the query patterns. Finally, we empirically evaluate the effectiveness of our methods on real and synthetic datasets.
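The automaton-based probability calculation can be sketched as follows: track a distribution over DFA states, and at each probabilistic event split the probability mass along the possible transitions. Events are assumed independent; the DFA and stream below are toy examples, and the paper's grouping step is omitted:

```python
def match_probability(transitions, start, accepting, stream):
    """Probability that the DFA ends in an accepting state after
    consuming a probabilistic stream, where each event is a distribution
    over symbols.  The state distribution is updated event by event."""
    dist = {start: 1.0}
    for event in stream:
        nxt = {}
        for state, p_state in dist.items():
            for symbol, p_sym in event.items():
                s2 = transitions[(state, symbol)]
                nxt[s2] = nxt.get(s2, 0.0) + p_state * p_sym
        dist = nxt
    return sum(p for s, p in dist.items() if s in accepting)

# DFA over {a, b} accepting strings that end with "ab".
T = {("q0", "a"): "q1", ("q0", "b"): "q0",
     ("q1", "a"): "q1", ("q1", "b"): "q2",
     ("q2", "a"): "q1", ("q2", "b"): "q0"}
stream = [{"a": 0.9, "b": 0.1}, {"a": 0.2, "b": 0.8}]
p = match_probability(T, "q0", {"q2"}, stream)
```

Because the state distribution is maintained incrementally, the cost per event is bounded by the number of states times the alphabet size, independent of stream length so far.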

    Download PDF (1240K)
  • Yuji YAMAOKA, Kouichi ITOH
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 730-740
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    PPDP (Privacy-Preserving Data Publishing) is technology for disclosing personal information while protecting individual privacy, and k-anonymity is a privacy model that should be achieved in PPDP. However, k-anonymity does not guarantee privacy against adversaries who have knowledge of even a few uncommon individuals in a population. In this paper, we propose a new model, called k-presence-secrecy, that prevents such adversaries from inferring whether an arbitrary individual is included in a personal data table, together with an algorithm that satisfies the model. k-presence-secrecy is a practical model because an algorithm satisfying it requires only the PPDP target table as personal information, whereas previous models require the target table and almost all of the adversaries' background knowledge. Our experiments show that, whereas an algorithm satisfying only k-anonymity cannot protect privacy even against adversaries who know a single uncommon individual in a population, our algorithm can do so with less information loss and shorter execution time.
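For contrast, checking plain k-anonymity, the guarantee the paper shows to be insufficient, is straightforward: every quasi-identifier combination must be shared by at least k records. A minimal checker (column names are illustrative):

```python
def is_k_anonymous(table, quasi_ids, k):
    """A table is k-anonymous iff every combination of quasi-identifier
    values occurs in at least k records, so no record can be singled out
    by its quasi-identifiers alone."""
    counts = {}
    for row in table:
        key = tuple(row[q] for q in quasi_ids)
        counts[key] = counts.get(key, 0) + 1
    return all(c >= k for c in counts.values())

# Generalized records: age ranges and masked ZIP codes as quasi-identifiers.
records = [
    {"age": "30-39", "zip": "112**", "disease": "flu"},
    {"age": "30-39", "zip": "112**", "disease": "cold"},
    {"age": "40-49", "zip": "113**", "disease": "flu"},
]
```

Note that passing this check says nothing about presence secrecy: an adversary who knows an uncommon individual's quasi-identifiers may still infer that person's inclusion, which is the gap k-presence-secrecy addresses.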

    Download PDF (425K)
  • Yosuke SAKATA, Koji EGUCHI
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 741-749
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    There are increasing demands for improved analysis of multimodal data that consist of multiple representations, such as multilingual documents and text-annotated images. One promising approach to analyzing such data is latent topic modeling. In this paper, we propose conditionally independent generalized relational topic models (CI-gRTM) for predicting unknown relations across multiple representations of multimodal data, developed as a multimodal extension of the discriminative relational topic model known as the generalized relational topic model (gRTM). We demonstrate through experiments with multilingual documents that CI-gRTM can more effectively predict both multilingual representations and relations between two different language representations than several state-of-the-art baseline models that can predict only either multilingual representations or unimodal relations.

    Download PDF (770K)
  • Ratchainant THAMMASUDJARIT, Anon PLANGPRASOPCHOK, Charnyote PLUEMPITIW ...
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 750-757
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Ground-truth identification, the process of inferring the most probable labels for a dataset from crowdsourced annotations, is crucial for making such a dataset usable, e.g., in supervised learning. The process is nevertheless challenging because annotations from multiple annotators are inconsistent and noisy. Existing methods require a set of data samples with corresponding ground-truth labels to estimate annotator performance precisely, but such samples are difficult to obtain in practice. Moreover, the process requires a post-editing step to validate indefinite labels, which are generally unidentifiable without thoroughly inspecting the whole annotated dataset. To address these challenges, this paper introduces: 1) the attenuated score (A-score), an indicator that locally measures annotator performance over segments of annotation sequences, and 2) a label aggregation method that applies the A-score to ground-truth identification. The experimental results demonstrate that A-score label aggregation outperforms majority voting on all datasets by accurately recovering more labels, and achieves higher F1 scores than strong baselines on all multi-class data. The results also suggest that the A-score is a promising indicator for identifying indefinite labels for the post-editing procedure.
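The aggregation step can be sketched as a weighted vote, with plain majority vote as the weight-free special case. In the paper the A-score is computed locally per annotation segment; the fixed per-annotator weights below only illustrate how such a score enters the vote:

```python
from collections import defaultdict

def aggregate(labels, weight=None):
    """labels: item -> list of (annotator, label) votes.

    With weight=None this is plain majority vote; otherwise each
    annotator's vote counts with the given weight (a fixed stand-in for
    the paper's locally computed A-score)."""
    result = {}
    for item, votes in labels.items():
        tally = defaultdict(float)
        for annotator, label in votes:
            tally[label] += 1.0 if weight is None else weight[annotator]
        result[item] = max(tally, key=tally.get)
    return result

# Three annotators label two items; ann1 is assumed the most reliable.
votes = {"x1": [("ann1", "cat"), ("ann2", "dog"), ("ann3", "dog")],
         "x2": [("ann1", "cat"), ("ann2", "cat"), ("ann3", "dog")]}
mv = aggregate(votes)
wv = aggregate(votes, weight={"ann1": 0.9, "ann2": 0.3, "ann3": 0.3})
```

The example shows the mechanism the paper relies on: with performance-based weights, a reliable annotator can outvote two noisy ones, which plain majority vote cannot do.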

    Download PDF (536K)
  • Semih YUMUSAK, Erdogan DOGDU, Halife KODAZ, Andreas KAMILARIS, Pierre- ...
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 758-767
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Linked data endpoints are online query gateways to semantically annotated linked data sources, and the SPARQL query language is the standard for querying them. Although a linked data endpoint (i.e., SPARQL endpoint) is a basic Web service, it provides a platform for federated online querying and data-linking methods. For linked data consumers, SPARQL endpoint availability and discovery are crucial for live querying and semantic information retrieval. Current studies show that the availability of linked datasets is very low, while the locations of linked data endpoints change frequently. There are linked data repositories that collect and list the available linked data endpoints or resources, yet around half of the endpoints listed in existing repositories are inaccessible (temporarily or permanently offline). These endpoint URLs are shared through repository websites such as Datahub.io; however, they are weakly maintained and revised only by their publishers. In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process that finds keywords relevant to the linked data domain, and specifically to SPARQL endpoints. The collected search keywords are then used to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). With this method, most of the SPARQL endpoints currently listed in existing endpoint repositories, as well as a significant number of new ones, have been discovered. We analyze our findings in detail in comparison with the Datahub collection.
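A liveness probe for a discovered SPARQL endpoint can be as small as an `ASK` query sent over HTTP GET. A sketch of building such a request (the `format=json` parameter is a common convention rather than part of the SPARQL protocol, and the actual monitoring loop with timeouts and retries is omitted):

```python
from urllib.parse import urlencode

def probe_request(endpoint_url):
    """Build the GET URL used to test endpoint liveness: a minimal
    SPARQL ASK query that any conforming endpoint can answer cheaply.
    An availability monitor would issue this request (e.g. with
    urllib.request) under a short timeout and record the outcome."""
    query = "ASK { ?s ?p ?o }"
    return endpoint_url + "?" + urlencode({"query": query,
                                           "format": "json"})

# DBpedia's well-known public endpoint as an example target.
url = probe_request("http://dbpedia.org/sparql")
```

Because `ASK { ?s ?p ?o }` returns a single boolean, it is about the cheapest request that still exercises the endpoint's query machinery, which is why availability crawlers favor it over real queries.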

    Download PDF (2514K)
  • Ke XU, Rujun LIU, Yuan SUN, Keju ZOU, Yan HUANG, Xinfang ZHANG
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 768-775
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In tutoring systems, students are likely to use hints to assist their decisions on difficult or confusing problems, and students with weaker knowledge mastery tend to request more hints than those with stronger mastery. Hints are important aids that help students deal with questions: students can learn from hints and enhance their knowledge about the questions. In this paper we first use hints alone to build a model, named Hints-Model, to predict student performance. In addition, matrix factorization (MF) has become prevalent in educational fields for predicting student performance, following its success in collaborative filtering (CF) for recommender systems (RS). Another factorization method, non-negative matrix factorization (NMF), has been developed over more than a decade and places additional non-negativity constraints on the factor matrices. Considering the sparseness of the original matrix and efficiency, we utilize an element-based variant called regularized single-element-based NMF (RSNMF). We compare different factorization methods and their combinations with Hints-Model. Experimental results on two datasets show that combining RSNMF with Hints-Model achieves a significant improvement and obtains the best result. We have also compared Hints-Model with the pioneering approach of performance factor analysis (PFA), and the outcomes show that the former outperforms the latter.
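RSNMF's regularized single-element updates are paper-specific, but the underlying idea is the classic multiplicative-update NMF of Lee and Seung, sketched below on a toy score matrix (dimensions and data illustrative; the single-element and regularization refinements are omitted):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF: V ~ W H with all entries of W
    and H kept non-negative.  Each update multiplies an entry by a
    ratio that monotonically decreases the squared reconstruction
    error; eps guards against division by zero."""
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() for _ in range(k)] for _ in range(m)]
    H = [[random.random() for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# Toy student-by-question score matrix.
V = [[5, 3, 0], [4, 0, 0], [1, 1, 5]]
W, H = nmf(V, 2)
```

The non-negativity constraint is what makes the factors interpretable as additive skill components, which is the appeal of NMF over unconstrained MF in educational settings.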

    Download PDF (1514K)
  • Miki ENOKI, Issei YOSHIDA, Masato OGUCHI
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 776-784
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In Twitter-like services, countless messages are posted in real time every second all around the world, so timely knowledge about what kinds of information are diffusing in social media is very important. For example, in emergencies such as earthquakes, users provide instant information on their situation through social media, and this collective intelligence is useful as a means of information detection complementary to conventional observation. We have developed a system for monitoring and analyzing information diffusion in real time by tracking retweeted tweets: a tweet retweeted by many users indicates that they find its content interesting and impactful. Analysts using the system can find tweets retweeted by many users and identify the key people who are retweeted frequently or who have retweeted tweets about particular topics. However, bursts occur when thousands of social media messages are suddenly posted simultaneously, and the lack of machine resources to handle such bursts lowers the system's query performance. Since our system is designed to be used interactively in real time by many analysts, waiting more than one second for query results is simply not acceptable. To maintain acceptable query performance, we propose a capacity control method that filters incoming tweets using extra attribute information from the tweets themselves. Conventionally, there is a trade-off between query performance and the accuracy of analysis results. We show that the proposed method improves query performance and maintains query accuracy better than existing methods.
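The capacity control idea, filtering incoming tweets by attributes they already carry once the system nears saturation, can be sketched as an admission test. The attribute (`is_retweet`) and the threshold policy here are illustrative, not the paper's actual filter:

```python
def admit(tweet, queue_len, capacity):
    """Admission control for the ingestion queue: below capacity,
    accept everything; at or above capacity, keep only tweets that
    matter most to retweet-diffusion analysis (here: retweets), judged
    from an attribute already present on the tweet itself."""
    if queue_len < capacity:
        return True
    return tweet.get("is_retweet", False)
```

Filtering on attributes the tweet already carries is what keeps the check cheap enough to run on every incoming message during a burst, at the cost of discarding some analytically less relevant input.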

    Download PDF (1556K)
  • Masataka ARAKI, Marie KATSURAI, Ikki OHMUKAI, Hideaki TAKEDA
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 785-792
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Most existing methods for research collaborator recommendation focus on promoting collaboration within a specific discipline and exploit a network structure derived from co-authorship or co-citation information. To find collaboration opportunities outside researchers' own fields of expertise and beyond their social networks, we present an interdisciplinary collaborator recommendation method based on research content similarity. The proposed method calculates textual features that reflect a researcher's interests using a research grant database. To find the most relevant researchers working in other fields, we compare constructing a pairwise similarity matrix in a feature space against exploiting existing social networks with content-based similarity. We present a case study at the Graduate University for Advanced Studies in Japan in which actual collaborations across departments are used as ground truth. The results indicate that our content-based approach predicts interdisciplinary collaboration more accurately than conventional collaboration-network-based approaches.
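The content-based similarity can be sketched with TF-IDF vectors over grant texts and cosine similarity between researchers. The tokenization and the corpus below are toy stand-ins for the research grant database and its actual feature extraction:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: researcher -> grant-abstract text.  Returns sparse TF-IDF
    vectors; terms occurring in every document get zero weight."""
    tokenized = {r: text.lower().split() for r, text in docs.items()}
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))
    n = len(docs)
    return {r: {t: c / len(toks) * math.log(n / df[t])
                for t, c in Counter(toks).items()}
            for r, toks in tokenized.items()}

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical researchers and one-line grant summaries.
docs = {"r1": "deep learning for image recognition",
        "r2": "image recognition with neural networks",
        "r3": "medieval japanese literature"}
vecs = tfidf_vectors(docs)
```

Ranking candidates by this similarity needs no co-authorship links at all, which is what lets the method reach across departments and disciplines.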

    Download PDF (939K)
  • Abu Nowshed CHY, Md Zia ULLAH, Masaki AONO
    Article type: PAPER
    2017 Volume E100.D Issue 4 Pages 793-806
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Microblogs, especially Twitter, have become an integral part of daily life for finding the latest news and event information. Because tweets are short and often use unconventional abbreviations, content-relevance-based search alone cannot satisfy users' information needs. Recent research has shown that considering temporal and contextual aspects significantly improves retrieval performance. In this paper, we focus on microblog retrieval, emphasizing the alleviation of vocabulary mismatch and the leveraging of the temporal (e.g., recency and burstiness) and contextual characteristics of tweets. To address the temporal and contextual aspects of tweets, we propose new features based on query-tweet time, word embeddings, and query-tweet sentiment correlation. We also introduce popularity features to estimate the importance of a tweet. A three-stage query expansion technique is applied to improve the relevancy of retrieved tweets, and we introduce techniques to determine the temporal and sentiment sensitivity of a query. After supervised feature selection, we apply random forests as a feature ranking method to estimate the importance of the selected features. We then use an ensemble learning-to-rank (L2R) framework to estimate the relevance of each query-tweet pair. We conducted experiments on the TREC Microblog 2011 and 2012 test collections over the TREC Tweets2011 corpus. Experimental results demonstrate the effectiveness of our method over the baseline and known related work in terms of precision at 30 (P@30), mean average precision (MAP), normalized discounted cumulative gain at 30 (NDCG@30), and R-precision (R-Prec).
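A query-tweet-time feature of the kind described can be sketched as an exponential decay of the time gap, so fresher tweets score higher. It would be one of many features fed to the L2R ensemble, and the decay rate here is illustrative:

```python
import math

def recency_score(query_time, tweet_time, decay=0.01):
    """Temporal feature: exp(-decay * gap) over the query-tweet time
    gap (in arbitrary units, e.g. minutes).  A tweet posted at query
    time scores 1.0, and the score falls off smoothly with age."""
    gap = max(0.0, query_time - tweet_time)
    return math.exp(-decay * gap)
```

Keeping the feature bounded in (0, 1] makes it easy to combine with the other (embedding, sentiment, popularity) features inside a learned ranker.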

    Download PDF (1029K)
  • Kyoungman BAE, Youngjoong KO
    Article type: LETTER
    2017 Volume E100.D Issue 4 Pages 807-810
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The translation-based language model (TRLM) is a state-of-the-art method for the lexical gap problem in question retrieval for community-based question answering (cQA), and researchers have sought ways to bridge the lexical gap and improve the TRLM. In this paper, we propose a new dependency-based model (DM) for question retrieval and explore how to utilize the output of a dependency parser for cQA. Dependency bigrams are extracted from the parser's output, and the language model is transformed using the dependency bigrams as bigram features. As a result, we obtain significantly improved performance when the TRLM and DM approaches are effectively combined.
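Extracting dependency bigrams is mechanical once the parse is available: pair each token with its head. A sketch assuming the parser outputs head indices (the parser itself, and the language-model transformation, are outside this snippet):

```python
def dependency_bigrams(tokens, heads):
    """Extract (head-word, dependent-word) bigrams from a parsed
    sentence.  heads[i] is the index of token i's head, with -1 for the
    root; this is a typical dependency-parser output format.  Unlike
    surface bigrams, these pairs can link non-adjacent words."""
    return [(tokens[h], tokens[i]) for i, h in enumerate(heads) if h >= 0]

# "install java runtime" with "install" as the root.
tokens = ["install", "java", "runtime"]
heads = [-1, 2, 0]   # java <- runtime <- install
bigrams = dependency_bigrams(tokens, heads)
```

The resulting pairs serve as the bigram features with which the language model is transformed, capturing syntactic relations that surface n-grams miss.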

    Download PDF (364K)
  • Changbeom SHIM, Wooil KIM, Wan HEO, Sungmin YI, Yon Dohn CHUNG
    Article type: LETTER
    2017 Volume E100.D Issue 4 Pages 811-812
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The development of smart devices has led to the growth of Location-Based Social Networking Services (LBSNSs). In this paper, we introduce the l-Close Range Friends query, which finds all l-hop friends of a user within a specified range. We also propose a query processing method based on the Social Grid Index (SGI). The performance of our method is evaluated using real datasets.
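
    The letter does not detail the query semantics beyond "all l-hop friends within a range". As a hedged baseline sketch (ignoring the SGI, which is the paper's actual contribution), the query can be answered by a breadth-first traversal of the social graph with a spatial filter applied to each discovered friend; all names below are illustrative.

```python
from collections import deque

def l_close_range_friends(user, l, friends, pos, in_range):
    """Find all friends within l hops of `user` whose location passes the
    in_range predicate. friends: adjacency dict; pos: user -> coordinates."""
    seen = {user}
    frontier = deque([(user, 0)])
    result = set()
    while frontier:
        u, depth = frontier.popleft()
        if depth == l:            # do not expand beyond l hops
            continue
        for v in friends.get(u, ()):
            if v not in seen:
                seen.add(v)
                if in_range(pos[v]):
                    result.add(v)
                frontier.append((v, depth + 1))
    return result
```

    An index such as SGI would prune this traversal spatially instead of filtering every visited node.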

    Download PDF (262K)
Regular Section
  • Haiou JIANG, Haihong E, Meina SONG
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2017 Volume E100.D Issue 4 Pages 813-821
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The Infrastructure-as-a-Service (IaaS) cloud is attracting applications owing to its scalability, dynamic resource provisioning, and pay-as-you-go cost model. Scheduling scientific workflows in the IaaS cloud faces uncertainties such as resource performance variations and unknown failures. A schedule is said to be robust if it can absorb some degree of uncertainty during workflow execution. In this paper, we propose a novel workflow scheduling algorithm for the IaaS cloud, called Dynamic Earliest-Finish-Time (DEFT), that improves both makespan and robustness. DEFT is a dynamic scheduling algorithm consisting of a set of list-scheduling loops invoked whenever tasks complete successfully and release their resources. In each loop, unscheduled tasks are ranked, and for each task the best virtual machine (VM), i.e., the one with the minimum estimated earliest finish time, is selected. A task is scheduled only when all of its parents have completed and the selected best VM is ready. Intermediate data are sent from a finished task to each of its children and their selected best VMs before the children are scheduled. Experiments show that DEFT produces shorter makespans with greater robustness than existing typical list-based and dynamic scheduling algorithms in the IaaS cloud.
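
    The core per-task step described above, picking the VM with the minimum estimated earliest finish time, can be sketched as follows. The cost model here (EFT = VM ready time + runtime / VM speed, with data-transfer time omitted) is a simplifying assumption, not the paper's exact estimator.

```python
def select_best_vm(task_runtime, vms):
    """Return (vm_id, eft) for the VM with the minimum estimated earliest
    finish time; vms is a list of (vm_id, ready_time, speed) tuples."""
    def eft(vm):
        _vm_id, ready_time, speed = vm
        return ready_time + task_runtime / speed
    best = min(vms, key=eft)
    return best[0], eft(best)
```

    Note that a busy-but-fast VM can beat an idle-but-slow one, which is why the selection must weigh both readiness and speed rather than greedily taking the first idle machine.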

    Download PDF (1158K)
  • Junji YAMADA, Ushio JIMBO, Ryota SHIOYA, Masahiro GOSHIMA, Shuichi SAK ...
    Article type: PAPER
    Subject area: Computer System
    2017 Volume E100.D Issue 4 Pages 822-837
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The region that includes the register file is a hot spot in high-performance cores and limits the clock frequency. Although multibanking drastically reduces the area and energy consumption of the register files of superscalar processor cores, it suffers from low IPC due to bank conflicts. Our skewed multistaging drastically reduces not the bank-conflict probability itself but the probability of pipeline disturbance, by means of a second stage. Evaluation results show that, compared with NORCS, the latest register-file design for area and energy efficiency, the proposed register file with 18 banks achieves 39.9% and 66.4% reductions in circuit area and energy consumption, respectively, while maintaining a relative IPC of 97.5%.

    Download PDF (3305K)
  • Fuan PU, Guiming LUO, Zhou JIANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 4 Pages 838-848
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In this paper, a Boolean algebra approach is proposed to encode various acceptability semantics of abstract argumentation frameworks, where each semantics can be equivalently encoded into several Boolean constraint models based on Boolean matrices and a family of Boolean operations between them. We then show that these models can easily be translated into logic programs and solved by a constraint solver over Boolean variables. In addition, we propose querying strategies to accelerate the computation of the grounded, stable, and complete extensions. Finally, we describe an experimental study of the performance of our encodings under different semantics and querying strategies.
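
    The paper's specific constraint models are not reproduced here, but the flavor of "semantics via Boolean matrices" can be illustrated with the grounded extension, which is the least fixed point of the characteristic function: an argument is acceptable iff every one of its attackers is itself attacked by the current set. A minimal sketch using a NumPy Boolean attack matrix (an illustrative encoding, not the authors' models):

```python
import numpy as np

def grounded_extension(attacks):
    """Least fixed point of the characteristic function, computed with
    Boolean matrix operations; attacks[i, j] = True iff argument i attacks j."""
    S = np.zeros(attacks.shape[0], dtype=bool)
    while True:
        countered = attacks[S].any(axis=0)   # arguments attacked by some member of S
        # argument j is acceptable iff it has no attacker that is not countered
        acceptable = ~(attacks & ~countered[:, None]).any(axis=0)
        if (acceptable == S).all():
            return S
        S = acceptable
```

    For the chain a attacks b, b attacks c, the iteration first accepts the unattacked a, then c (its attacker b is countered by a), and stops.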

    Download PDF (529K)
  • Tomoki MATSUZAWA, Eisuke ITO, Raissa RELATOR, Jun SESE, Tsuyoshi KATO
    Article type: PAPER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 4 Pages 849-856
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In recent years, covariance descriptors have received considerable attention as a powerful representation of a set of points. In this research, we propose a new metric learning algorithm for covariance descriptors based on the Dykstra algorithm, in which the current solution is projected onto a half-space at each iteration, and which runs in O(n^3) time. We empirically demonstrate that randomizing the order of half-spaces in the proposed Dykstra-based algorithm significantly accelerates convergence to the optimal solution. Furthermore, we show that the proposed approach yields promising experimental results on pattern recognition tasks.
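
    The mechanics of "Dykstra with half-space projections in a randomized order" can be sketched for plain Euclidean vectors (the paper works with covariance matrices; this simplification, and all names below, are illustrative). Dykstra's algorithm differs from plain cyclic projection by the per-constraint correction terms, which make it converge to the true projection onto the intersection.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the half-space {y : a.y <= b}."""
    violation = a @ x - b
    if violation <= 0.0:
        return x
    return x - (violation / (a @ a)) * a

def dykstra_halfspaces(x0, halfspaces, sweeps=100, seed=0):
    """Dykstra's algorithm onto an intersection of half-spaces (a, b),
    visiting the half-spaces in a freshly randomized order on every sweep."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    corrections = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(sweeps):
        for i in rng.permutation(len(halfspaces)):
            a, b = halfspaces[i]
            y = x + corrections[i]       # re-add this constraint's correction
            x = project_halfspace(y, a, b)
            corrections[i] = y - x
    return x
```

    Reshuffling the visiting order each sweep is the randomization whose empirical speed-up the paper reports.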

    Download PDF (568K)
  • Xiaoyun WANG, Tsuneo KATO, Seiichi YAMAMOTO
    Article type: PAPER
    Subject area: Speech and Hearing
    2017 Volume E100.D Issue 4 Pages 857-864
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Recognition of second-language (L2) speech is a challenging task even for state-of-the-art automatic speech recognition (ASR) systems, partly because the pronunciation of L2 speakers is usually strongly influenced by their mother tongue. Considering that the expressions of non-native speakers are usually simpler than those of native speakers, and that L2 speech usually includes mispronunciations and less fluent pronunciation, we propose a novel method that maximizes a unified acoustic and linguistic objective function to derive a phoneme set for second-language speech recognition. We verify the efficacy of the proposed method using L2 speech collected with a translation-game-type, dialogue-based computer-assisted language learning (CALL) system. In this paper, we examine the performance in terms of acoustic likelihood, linguistic discrimination ability, and the integrated objective function for L2 speech. Experiments demonstrate the validity of the phoneme set derived by the proposed method.

    Download PDF (674K)
  • Kazu MISHIBA, Yuji OYAMADA, Katsuya KONDO
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 4 Pages 865-873
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Conventional image retargeting methods fail to avoid distortion when visually important regions are distributed all over the image. To reduce such distortion, this paper proposes a novel image retargeting method that incorporates letterboxing into an image warping framework. Letterboxing has the advantage of producing results without distortion or content loss, although it cannot use the entire display area. It is therefore preferable to combine a retargeting method with a letterboxing operator when displaying images in full screen. Experimental results show that the proposed method is superior to conventional methods in terms of visual quality as measured by an objective metric.

    Download PDF (2379K)
  • Tsuyoshi HIGASHIGUCHI, Toma SHIMOYAMA, Norimichi UKITA, Masayuki KANBA ...
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 4 Pages 874-881
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This paper proposes a method for evaluating physical gait motion based on a 3D human skeleton measured by a depth sensor. While similar methods measure and evaluate the motion of only a body part of interest (e.g., the knee), the proposed method comprehensively evaluates the motion of the full body. Gait motions with a variety of physical disabilities caused by lesioned body parts are recorded and modeled in advance for gait anomaly detection. Detection is achieved by identifying lesioned parts from a set of pose features extracted from gait sequences. In experiments, the proposed features extracted from the full body allowed us to identify where a subject was injured with 83.1% accuracy using a model optimized for the individual. The superiority of the full-body features was validated in contrast to local features extracted from only a body part of interest (77.1% with lower-body features and 65% with upper-body features). Furthermore, the effectiveness of the proposed full-body features was also validated with a single universal model used for all subjects: 55.2%, 44.7%, and 35.5% with the full-body, lower-body, and upper-body features, respectively.

    Download PDF (1117K)
  • Changki LEE
    Article type: PAPER
    Subject area: Natural Language Processing
    2017 Volume E100.D Issue 4 Pages 882-887
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Recurrent neural networks (RNNs) are a powerful model for sequential data. RNNs that use long short-term memory (LSTM) cells have proven effective in handwriting recognition, language modeling, speech recognition, and language comprehension tasks. In this study, we propose LSTM conditional random fields (LSTM-CRF), an LSTM-based RNN model that exploits output-label dependencies through transition features and a CRF-like sequence-level objective function. We also propose variations of the LSTM-CRF model using a gated recurrent unit (GRU) and a structurally constrained recurrent network (SCRN). Empirical results reveal that our proposed models attain state-of-the-art performance on named entity recognition.
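
    The CRF layer's use of transition features shows up most clearly at decoding time: the best label sequence maximizes the sum of per-step emission scores (from the LSTM) and label-to-label transition scores, found by Viterbi dynamic programming. A minimal NumPy sketch of that decoder (a standard linear-chain CRF decode, not the authors' exact implementation):

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF layer.
    emissions: (T, K) per-step label scores (e.g., from an LSTM);
    transitions: (K, K) score of moving from label i to label j."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[i, j]: best score ending at t with label j, coming from label i
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):   # follow backpointers to recover the path
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

    With strongly negative off-diagonal transition scores, the decoder prefers to keep the same label even against a mildly contrary emission, which is exactly the label-dependency effect a plain softmax-per-step tagger cannot express.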

    Download PDF (1376K)
  • Zhenyu SONG, Shangce GAO, Yang YU, Jian SUN, Yuki TODO
    Article type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2017 Volume E100.D Issue 4 Pages 888-900
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This paper proposes a novel multiple-chaos-embedded gravitational search algorithm (MCGSA) that simultaneously utilizes multiple different chaotic maps in a local search manner. The embedded chaotic local search can exploit a small region to refine solutions obtained by the canonical gravitational search algorithm (GSA), owing to its inherent local exploitation ability. Meanwhile, it also has a chance to explore a huge search space by taking advantage of the ergodicity of chaos. To fully utilize the dynamic properties of chaos, we propose three embedding strategies: the multiple chaotic maps are incorporated into GSA randomly, in parallel, or memory-selectively. To evaluate the effectiveness and efficiency of the proposed MCGSA, we compare it with GSA and twelve variants of chaotic GSA, each of which uses a single chaotic map, on a set of 48 benchmark optimization functions. Experimental results show that MCGSA outperforms its competitors in terms of convergence speed and solution accuracy. In addition, statistical analysis based on the Friedman test indicates that the parallel embedding strategy is the most effective for improving the performance of GSA.
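
    A chaotic local search of the kind described above can be sketched with a single chaotic map in one dimension (the paper combines twelve maps in higher dimensions; the logistic map, step count, and neighborhood radius below are illustrative assumptions). The chaotic sequence drives candidate moves inside a small neighborhood of the current best solution.

```python
def chaotic_local_search(f, x, radius, steps=50, z0=0.7):
    """Refine a scalar solution x of objective f (minimization) by a
    logistic-map chaotic walk within +/- radius of the incumbent best."""
    best, best_val = x, f(x)
    z = z0
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                  # logistic map, chaotic at mu = 4
        candidate = best + radius * (2.0 * z - 1.0)  # map [0,1] chaos to [-radius, radius]
        val = f(candidate)
        if val < best_val:                       # greedy acceptance
            best, best_val = candidate, val
    return best, best_val
```

    The ergodicity of the map means the candidates eventually visit the whole neighborhood densely, which is what lets the chaotic search both refine and occasionally escape the incumbent.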

    Download PDF (1033K)
  • Sammer ZAI, Muhammad Ahsan ANSARI, Young Shik MOON
    Article type: PAPER
    Subject area: Biological Engineering
    2017 Volume E100.D Issue 4 Pages 901-909
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Precise estimation of the coronary arteries from computed tomography angiography (CTA) data is a challenging problem. This study focuses on the automatic delineation of coronary arteries from 3D CTA data, which may assist clinicians in identifying coronary pathologies. We present a technique that effectively segments the complete coronary arterial tree under the guidance of an initial vesselness response, without relying on heavy manual operations. The proposed method accurately isolates the coronary arteries by using a localized statistical energy model applied in two directions from an automatically detected seed, which ensures an optimal segmentation of the coronaries. Seed detection is carried out by analyzing the shape of the coronary arteries in three successive cross-sections. To demonstrate the efficiency of the proposed algorithm, the obtained results are compared with the reference data provided by the Rotterdam framework for lumen segmentation and with the level-set active-contour-based method proposed by Lankton et al. The results reveal that the proposed method performs better in terms of leakage and the completeness of the coronary arterial tree.

    Download PDF (1764K)
  • Ruilian XIE, Jueping CAI, Xin XIN, Bo YANG
    Article type: LETTER
    Subject area: Computer System
    2017 Volume E100.D Issue 4 Pages 910-913
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    This letter presents a Preferable Mad-y (PMad-y) turn model and a Low-cost Adaptive and Fault-tolerant Routing (LAFR) method that use one and two virtual channels along the X and Y dimensions, respectively, for 2D-mesh Network-on-Chip (NoC). By applying the PMad-y rules and using the link status of neighbor routers within 2 hops, LAFR can tolerate multiple faulty links and routers in complicated fault situations and improve the reliability of the network without sacrificing its performance. Simulation results show that LAFR achieves better saturation throughput (by 0.98% on average) than other fault-tolerant routing methods and maintains a high reliability of more than 99.56% on average. To achieve 100% network reliability, a Preferable LAFR (PLAFR) is also proposed.

    Download PDF (239K)
  • Kai FANG, Shuoyan LIU, Chunjie XU, Hao XUE
    Article type: LETTER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 4 Pages 914-917
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    In this paper, an adaptively updating probabilistic model is proposed to track an object in real-world environments that include motion blur, illumination changes, pose variations, and occlusions. The model adaptively updates the tracker through a searching process and an updating process: the searching process focuses on learning an appropriate tracker, while the updating process corrects it so that it remains robust and efficient in unconstrained real-world environments. Specifically, the tracker probability is obtained in an Expectation-Maximization (EM) manner according to changes in the object's appearance and the recent tracker probability matrix (TPM). When tracking in each frame is completed, the estimated object state is obtained and then used to update the current TPM and tracker probability by running EM in a similar manner. The highest tracker probability indicates the object location in every frame. Experimental results demonstrate that our method tracks targets accurately and robustly in real-world tracking environments.

    Download PDF (1084K)
  • Jin XU, Yan ZHANG, Zhizhong FU, Ning ZHOU
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 4 Pages 918-922
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Distributed compressive video sensing (DCVS) is a new paradigm for low-complexity video compression. To achieve the highest possible perceptual coding performance under a measurement-budget constraint, we propose a perceptually optimized DCVS codec that jointly exploits reweighted sampling and rate-distortion-optimized measurement allocation. A visual-saliency-modulated just-noticeable distortion (VS-JND) profile is first developed based on the side information (SI) at the decoder side. The estimated correlation noise (CN) between each non-key frame and its SI is then suppressed by the VS-JND. Subsequently, the suppressed CN is used both to determine the weighting matrix for reweighted sampling and to design a perceptual rate-distortion optimization model that calculates the optimal measurement allocation for each non-key frame. Experimental results indicate that the proposed DCVS codec outperforms existing DCVS codecs in terms of both objective and subjective performance.

    Download PDF (1759K)
  • Yong CHENG, Zuoyong LI, Yuanchen HAN
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 4 Pages 923-926
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    Starting from the classic Lambertian reflectance model, we propose an effective illumination estimation model to extract illumination invariants for face recognition under complex illumination conditions. The illumination estimated by our method not only matches the actual lighting conditions of facial images but also conforms to the imaging principle. Experimental results on the combined Yale B database show that the proposed method extracts more robust illumination invariants, which improves the face recognition rate.

    Download PDF (2037K)
  • Feng YANG, Zheng MA, Mei XIE
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 4 Pages 927-930
    Published: April 01, 2017
    Released on J-STAGE: April 01, 2017
    JOURNAL FREE ACCESS

    The quality of the codebook is very important in visual image classification. To boost classification performance, this paper presents a codebook generation scheme for scene image recognition based on parallel key-SIFT analysis (PKSA). The method iteratively applies the classical k-means clustering algorithm and similarity analysis to evaluate key SIFT descriptors (KSDs) from the input images, and generates the codebook with a relaxed k-means algorithm over the set of KSDs. To evaluate the performance of the PKSA scheme, the image feature vector is computed by sparse coding with spatial pyramid matching (ScSPM) after the codebook is constructed. The PKSA-based ScSPM method is tested and compared on three public scene image datasets. The experimental results show that the proposed PKSA scheme significantly reduces computational time and enhances the categorization rate.
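
    For readers unfamiliar with how a codebook turns local descriptors into an image feature, here is the simplest version of that step: hard-assigning each descriptor to its nearest codeword and histogramming the assignments (a bag-of-visual-words baseline; the paper itself uses sparse coding with ScSPM, which replaces this hard assignment).

```python
import numpy as np

def codebook_histogram(descriptors, codebook):
    """Assign each local descriptor (row) to its nearest codeword and
    return the normalized codeword histogram for the image."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    The quality of the codewords directly shapes this histogram, which is why codebook construction (the PKSA step) matters so much for the downstream classifier.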

    Download PDF (230K)