IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E97.D, Issue 4
Displaying 1-48 of 48 articles from this issue
Special Section on Data Engineering and Information Management
  • Atsuyuki MORISHIMA
    2014 Volume E97.D Issue 4 Pages 633
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    Download PDF (71K)
  • Qiang SONG, Takayuki KAWABATA, Fumiaki ITOH, Yousuke WATANABE, Haruo Y ...
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 634-643
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
The number of files in file systems has increased dramatically in recent years, and office workers spend much time and effort searching for the documents required for their jobs. To reduce these costs, we propose a new method for recommending files and operations on them. Existing recommendation technologies, such as collaborative filtering, suffer from two problems. First, they can only work with documents that have been accessed in the past, so they cannot make recommendations when only newly generated documents are given as input. Second, because they match access sequences strictly, they cannot easily handle sequences containing similar or differently ordered elements; such minor variations should be ignored. To solve these problems, we introduce the concepts of abstract files (groups of similar files used for a similar purpose), abstract tasks (groups of similar tasks), and frequent abstract workflows (grouped from similar workflows, which are sequences of abstract tasks). In experiments using real file-access logs, we confirmed that our proposed method could extract workflow patterns with longer sequences and higher support-count values, which are more suitable as recommendations. In addition, the F-measure for the recommendation results improved significantly, from 0.301 to 0.598, compared with a method that did not use the concepts of abstract tasks and abstract workflows.
    Download PDF (1918K)
  • Yutaka ARAKAWA, Keiichiro KASHIWAGI, Takayuki NAKAMURA, Motonori NAKAM ...
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 644-653
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
The number of networked sensor and actuator devices continues to increase. We are developing a data-sharing mechanism called uTupleSpace as middleware for storing and retrieving the ubiquitous data that such devices input or output. uTupleSpace enables flexible retrieval of sensor data and flexible control of actuator devices, and it simplifies the development of various applications. Although uTupleSpace must scale with increasing amounts of ubiquitous data, traditional load-distribution methods using a distributed hash table (DHT) are unsuitable for our case because the data are nonuniform. Data are generated disproportionately at particular times, in particular positions, and by particular devices, so their hash values concentrate on particular values. This makes it difficult for the traditional methods to distribute the load sufficiently by using the hash values. Therefore, we propose a new DHT-based load-distribution method called the dynamic-help method. The proposed method enables one or more peers to redundantly handle loads related to the same hash value, which makes it possible to handle a large load on one hash value by distributing it among peers. Moreover, the proposed method reduces the load caused by dynamic load redistribution. Evaluation experiments showed that the proposed method achieved sufficient load distribution with low overhead even when the load was concentrated on one hash value. We also confirmed that the proposed method enabled uTupleSpace to accommodate increasing load stably and economically with simple operational rules.
    Download PDF (1281K)
  • Yasufumi TAKAMA, Takeshi KUROSAWA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 654-662
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
This paper proposes a visualization system for supporting the task of monitoring bug update information. The recent growth of the Web has brought various kinds of text stream data, such as bulletin board systems (BBS), blogs, and social networking services (SNS). Bug update information managed by bug tracking systems (BTS) is also a kind of text stream data. Because such text streams continuously generate new data, it is difficult for users to watch them all the time. The task of monitoring text stream data therefore inevitably involves breaks, which can cause users to lose the context of monitoring. To support a monitoring task involving breaks, the proposed system employs several visualization techniques. The dynamic relationship between bugs is visualized with animation, and a function for highlighting updated bugs, as well as one for replaying part of the last monitoring session, is proposed to help a user grasp the context of monitoring. The results of an experiment with test participants show that the highlighting and replay functions can reduce the frequency of checking data and the monitoring time.
    Download PDF (1155K)
  • Pablo MARTINEZ LERIN, Daisuke YAMAMOTO, Naohisa TAKAHASHI
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 663-672
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    Travel recommendation and travel diary generation applications can benefit significantly from methods that infer the durations and locations of visits from travelers' GPS data. However, conventional inference methods, which cluster GPS points on the basis of their spatial distance, are not suited to inferring visit durations. This paper presents a pace-based clustering method to infer visit locations and durations. The method contributes two novel techniques: (1) It clusters GPS points logged during visits by considering the speed and applying a probabilistic density function for each trip. Consequently, it avoids clustering GPS points that are near but unrelated to visits. (2) It also includes additional GPS points in the clusters by considering their temporal sequence. As a result, it is able to complete the clusters with GPS points that are far from the visits but are logged during the visits, caused, for example, by GPS noise indoors. The results of an experimental evaluation comparing our proposed method with three published inference methods indicate that our proposed method infers the duration of a visit with an average error rate of 8.7%, notably outperforming the other methods.
    Download PDF (664K)
  • Yao MA, Hongwei LU, Zaobin GAN
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 673-684
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Analysis of the trust network proves beneficial to users in Online Social Networks (OSNs) for decision-making. Since constructing trust propagation paths that connect unfamiliar users is a prerequisite for trust inference, it is vital to find appropriate trust propagation paths. Most existing trust network discovery algorithms apply classical exhaustive search approaches with low efficiency, and/or consider only factors relating to trust while disregarding the role of distrust relationships. To solve these issues, we first analyze the trust discounting operators with structure balance theory and validate the distribution characteristics of balanced transitive triads. We then propose the Maximum Indirect Referral Belief Search (MIRBS) and Minimum Indirect Functional Uncertainty Search (MIFUS) strategies, followed by the corresponding Optimal Trust Inference Path Search (OTIPS) algorithms, on the basis of bidirectional versions of Dijkstra's algorithm. Comparative experiments on path search, trust inference, and edge sign prediction are performed on the Epinions data set. The experimental results show that the proposed algorithms find trust inference paths more efficiently and that the found paths are better suited to trust inference.
    Download PDF (1454K)
  • Jun-Gil KIM, Kyung-Soon LEE
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 685-693
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
News articles usually present a biased viewpoint on contentious issues, potentially causing social problems. To mitigate this media bias, we propose a novel framework for predicting the orientation of a news article by analyzing social user behaviors on Twitter. Highly active users tend to show consistent behavior patterns, retweeting users who share their viewpoints on contentious issues. The bias ratio of highly active users is measured to predict the orientation of users (a minimal sketch of this ratio follows the entry). The political orientation of a news article is then predicted based on the bias ratio of users, mutual retweeting, and opinion analysis of tweet documents. The analysis of user behavior shows that 88.82% of users have a bias ratio of 1, indicating that most users have a distinctive orientation. Our prediction method based on user orientation achieved 88.6% accuracy. Experimental results show significant improvements over SVM classification. These results show that the proposed detection method is effective in social networks.
    Download PDF (854K)
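The "bias ratio" above can be read as the fraction of a user's retweets that go to the side the user retweets most, so a ratio of 1 means a fully one-sided user. A minimal sketch under that assumption (the function name and data layout are hypothetical, not taken from the paper):

```python
# Hypothetical sketch of a "bias ratio": the fraction of a user's retweets
# directed at whichever orientation the user retweets most. A ratio of 1
# means all retweets target one side, i.e. a distinctive orientation.
from collections import Counter

def bias_ratio(retweet_orientations):
    """retweet_orientations: list of orientation labels, one per retweet."""
    counts = Counter(retweet_orientations)
    if not counts:
        return 0.0
    dominant = counts.most_common(1)[0][1]
    return dominant / sum(counts.values())

print(bias_ratio(["left", "left", "left"]))          # 1.0 -> distinctive
print(bias_ratio(["left", "right", "left", "left"])) # 0.75
```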
  • Tingting DONG, Chuan XIAO, Yoshiharu ISHIKAWA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 694-704
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Probabilistic range query is an important type of query in the area of uncertain data management. A probabilistic range query returns all the data objects within a specific range from the query object with a probability no less than a given threshold (a Monte Carlo sketch of this predicate follows the entry). In this paper, we assume that each uncertain object stored in the database is associated with a multi-dimensional Gaussian distribution, which describes the probability distribution that the object appears in the multi-dimensional space. A query object is either a certain object or an uncertain object modeled by a Gaussian distribution. We propose several filtering techniques and an R-tree-based index to efficiently support probabilistic range queries over Gaussian objects. Extensive experiments on real data demonstrate the efficiency of our proposed approach.
    Download PDF (2663K)
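The query predicate described above, that an uncertain Gaussian object lies within distance r of the query with probability at least a threshold, can be checked naively by Monte Carlo sampling. This sketch illustrates only the predicate, not the paper's filtering techniques or R-tree index:

```python
# Naive Monte Carlo check of the probabilistic range predicate: does an
# object modeled by a multivariate Gaussian fall within distance r of the
# query point with probability >= tau?
import numpy as np

def appears_in_range(mean, cov, query, r, tau, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    prob = np.mean(np.linalg.norm(samples - query, axis=1) <= r)
    return prob, prob >= tau

mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
prob, qualifies = appears_in_range(mean, cov, query=np.array([1.0, 1.0]),
                                   r=2.0, tau=0.8)
print(f"P(object within range) ~= {prob:.3f}, qualifies: {qualifies}")
```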
  • Arunee RATIKAN, Mikifumi SHIKIDA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 705-713
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Online Social Networks (OSNs) have recently been playing an important role in communication. From the audience's perspective, they provide unlimited information via the information feeding mechanism (IFM), an important part of OSNs on whose quantity and quality of served information the audience relies. We found that existing IFMs can cause two problems: information overload and cultural ignorance. In this paper, we propose a new type of IFM that solves these problems. The advantage of our proposed IFM is that it can filter irrelevant information while taking the audience's culture into account, using the Naïve Bayes (NB) algorithm together with features and factors. It then dynamically serves interesting and important information based on the audience's current situation and preferences. This mechanism reduces the time the audience spends finding interesting information, and it can be applied to other cultures, societies, and businesses. In the near future, the audience will be provided with excellent, less annoying communication. Through our studies, we have found that our proposed IFM is most appropriate for Thai and some groups of Japanese audiences when the audience's culture is taken into consideration.
    Download PDF (351K)
  • Yang XIE, Koji EGUCHI
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 714-720
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
A number of studies have been conducted on topic modeling for various types of data, including text and image data. In this paper, we focus on the burstiness of local features in modeling topics within video data. Burstiness is a phenomenon often discussed for text data: if a word is used once in a document, it is more likely to be used again within that document. It is also observed in video data; for example, an object or visual word is more likely to appear repeatedly within the same video (a compact sketch of this effect follows the entry). Based on this idea, we propose a new topic model, the Correspondence Dirichlet Compound Multinomial LDA (Corr-DCMLDA), which takes into account the burstiness of local features in video data. The unknown parameters and latent variables in the model are estimated via collapsed Gibbs sampling, and the hyperparameters are estimated via fixed-point iteration. We demonstrate through experiments on genre classification of social video data that our model works more effectively than several baselines.
    Download PDF (757K)
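The burstiness property that motivates the DCM component has a compact form: under the Polya-urn predictive rule of a Dirichlet compound multinomial, each observed occurrence of a word raises the probability of observing it again. A small sketch with a symmetric hyperparameter (an assumption made for simplicity):

```python
# Polya-urn predictive rule of a symmetric-beta Dirichlet compound
# multinomial (DCM): seen (visual) words become more likely to recur,
# which is exactly the burstiness effect described above.
from collections import Counter

def dcm_predictive(word, seen_words, vocab_size, beta=0.1):
    """P(next = word | words seen so far) under a symmetric-beta DCM."""
    counts = Counter(seen_words)
    return (counts[word] + beta) / (len(seen_words) + beta * vocab_size)

V = 1000
print(dcm_predictive("car", [], V))                    # prior, ~1/V
print(dcm_predictive("car", ["car"], V))               # boosted after one sighting
print(dcm_predictive("car", ["car", "car", "car"], V)) # boosted further
```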
  • Yan DING, Huaimin WANG, Lifeng WEI, Songzheng CHEN, Hongyi FU, Xinhai ...
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 721-732
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
MapReduce is commonly used as a parallel massive-data processing model. When deploying it as a service over open systems, the computational integrity of the participants becomes an important issue due to untrustworthy workers. Current duplication-based solutions can effectively counter non-collusive attacks, yet most of them require a centralized worker to re-compute additional sampled tasks to defend against collusive attacks, which makes that worker a bottleneck. In this paper, we explore a trusted worker scheduling framework, named VAWS, that detects collusive attackers and assures the integrity of data processing without extra re-computation. Based on the historical results of verification, VAWS constructs an Integrity Attestation Graph (IAG) to identify malicious mappers and remove them from the framework. To further improve the efficiency of identification, a verification-couple selection method guided by the IAG is introduced to detect the potential accomplices of a confirmed malicious worker. We prove the effectiveness of our proposed method on the improvement of system performance through theoretical analysis. Intensive experiments show that the accuracy of VAWS is over 97% and that the computational overhead approaches the ideal value of 2 as the number of map tasks increases.
    Download PDF (2341K)
  • Yasuhito ASANO, Taihei OSHINO, Masatoshi YOSHIKAWA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 733-742
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Graph pattern mining has played an important role in network analysis and information retrieval. However, the temporal characteristics of networks have not been examined sufficiently. We propose time graph pattern mining as a new concept of graph mining that reflects the temporal information of a network. We conduct two case studies of time graph pattern mining: extensively discussed topics on blog sites and a book recommendation network. Through these case studies, we ascertain that time graph pattern mining has numerous possibilities as a novel means of information retrieval and network analysis reflecting both structural and temporal characteristics.
    Download PDF (737K)
  • Nasir AHMED, Abdul JALIL
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 743-751
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Manifold-learning-based image clustering models are usually employed at the local level to deal with images sampled from a nonlinear manifold. Multimode patterns in image data matrices can vary from nominal to significant due to images with different expression, pose, illumination, or occlusion variations. We show that manifold-learning-based image clustering models are unable to achieve well-separated images at the local level for image datasets with significant multimode data patterns, because the gray-level image features used in these clustering models cannot capture the local neighborhood structure effectively for multimode image datasets. In this study, we use a nearest neighborhood quality (NNQ) measure-based criterion to improve the local neighborhood structure in terms of correct nearest neighbors of images locally. We found Gist to be the optimal image descriptor among the HOG, Gist, SUN, SURF, and TED descriptors, based on the overall maximum NNQ measure on 10 benchmark image datasets. We observed significant performance improvement for recently reported clustering models such as Spectral Embedded Clustering (SEC) and Nonnegative Spectral Clustering with Discriminative Regularization (NSDR) using the proposed approach. Experimentally, overall performance improvements of 10.5% (clustering accuracy) and 9.2% (normalized mutual information) on 13 benchmark image datasets are observed for the SEC and NSDR clustering models. Further, the overall computational cost of the SEC model is reduced to 19%, and clustering performance for challenging outdoor natural image databases is significantly improved by using the proposed NNQ measure-based optimal image representations.
    Download PDF (1564K)
  • Tomoki KOBAYASHI, Koji EGUCHI
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 752-761
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    Many kinds of data can be represented as a network or graph. It is crucial to infer the latent structure underlying such a network and to predict unobserved links in the network. Mixed Membership Stochastic Blockmodel (MMSB) is a promising model for network data. Latent variables and unknown parameters in MMSB have been estimated through Bayesian inference with the entire network; however, it is important to estimate them online for evolving networks. In this paper, we first develop online inference methods for MMSB through sequential Monte Carlo methods, also known as particle filters. We then extend them for time-evolving networks, taking into account the temporal dependency of the network structure. We demonstrate through experiments that the time-dependent particle filter outperformed several baselines in terms of prediction performance in an online condition.
    Download PDF (1532K)
  • Donghui LIN, Toru ISHIDA, Yohei MURAKAMI, Masahiro TANAKA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 762-769
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
The availability of more and more Web services gives users great variety in designing service processes. However, there are situations in which services or service processes cannot meet users' requirements in functional QoS dimensions (e.g., translation quality in a machine translation service). In such cases, composing Web services and human tasks is expected to be a possible alternative solution. However, analyses of such practical efforts were rarely reported in previous research, most of which focuses on the technology for embedding human tasks in software environments. Therefore, this study analyzes the effects of composing Web services and human activities using a case study in the language service domain with large-scale experiments. From the experiments and analysis, we find that (1) service implementation variety can be greatly increased by composing Web services and human activities to satisfy users' QoS requirements; (2) the functional QoS of a Web service can be significantly improved by introducing human activities with limited cost and execution time, provided the human activities are of a certain quality; and (3) multiple QoS attributes of a composite service are affected in different ways by different qualities of human activities.
    Download PDF (3032K)
  • Jianmin WU, Mizuho IWAIHARA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 770-778
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
As one of the popular social media platforms that many people have turned to in recent years, the collaborative encyclopedia Wikipedia provides information from a more “Neutral Point of View” than others. Toward this core principle, plenty of effort has been put into collaborative contribution and editing. The trajectories of how such collaboration unfolds through revisions are valuable for group dynamics and social media research, which suggests that we should precisely extract the underlying derivation relationships among revisions from the chronologically sorted revision history. In this paper, we propose a revision graph extraction method based on supergram decomposition in a document collection of near-duplicates. The plain text of each revision is measured by its frequency distribution of supergrams, the variable-length token sequences that remain unchanged across revisions. We show that this method performs the task more effectively than existing methods.
    Download PDF (1817K)
  • Yusuke KOZAWA, Toshiyuki AMAGASA, Hiroyuki KITAGAWA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 779-789
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    Probabilistic frequent itemset mining, which discovers frequent itemsets from uncertain data, has attracted much attention due to inherent uncertainty in the real world. Many algorithms have been proposed to tackle this problem, but their performance is not satisfactory because handling uncertainty incurs high processing cost. To accelerate such computation, we utilize GPUs (Graphics Processing Units). Our previous work accelerated an existing algorithm with a single GPU. In this paper, we extend the work to employ multiple GPUs. Proposed methods minimize the amount of data that need to be communicated among GPUs, and achieve load balancing as well. Based on the methods, we also present algorithms on a GPU cluster. Experiments show that the single-node methods realize near-linear speedups, and the methods on a GPU cluster of eight nodes achieve up to a 7.1 times speedup.
    Download PDF (1499K)
  • Yong REN, Nobuhiro KAJI, Naoki YOSHINAGA, Masaru KITSUREGAWA
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 790-797
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In sentiment classification, conventional supervised approaches rely heavily on a large amount of linguistic resources, which are costly to obtain for under-resourced languages. To overcome this scarce-resource problem, several methods exploit graph-based semi-supervised learning (SSL). However, fundamental issues such as controlling label propagation, choosing the initial seeds, and selecting edges have barely been studied. Our evaluation on three real datasets demonstrates that manipulating the label propagation behavior and choosing labeled seeds appropriately play a critical role in adopting graph-based SSL approaches for this task (a generic propagation sketch follows this entry).
    Download PDF (1467K)
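As referenced in the entry above, a generic graph-based label-propagation loop looks like the following; the affinity matrix, seed choice, and clamping rule here are illustrative assumptions, not the paper's exact settings:

```python
# Minimal iterative label propagation for graph-based SSL: spread +1/-1
# sentiment labels from a few seed nodes over an affinity graph, clamping
# the seeds after each step.
import numpy as np

def propagate(W, labels, n_iter=100, alpha=0.9):
    """W: (n, n) nonnegative affinity matrix; labels: n-vector with
    +1/-1 on labeled seeds and 0 on unlabeled nodes."""
    D_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    P = W * D_inv[:, None]                        # row-normalized transitions
    F = labels.astype(float).copy()
    seeds = labels != 0
    for _ in range(n_iter):
        F = alpha * P @ F + (1 - alpha) * labels  # spread, re-inject seeds
        F[seeds] = labels[seeds]                  # clamp labeled seeds
    return np.sign(F)

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
labels = np.array([1, 0, 0, -1])
print(propagate(W, labels))  # unlabeled nodes inherit nearby sentiment
```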
  • Abdulla Al MARUF, Hung-Hsuan HUANG, Kyoji KAWAGOE
    Article type: PAPER
    2014 Volume E97.D Issue 4 Pages 798-810
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
A lot of work has been conducted on time series classification and similarity search over the past decades. However, classification accuracy for time series is still insufficient in applications such as ubiquitous or sensor systems. In this paper, a novel textual approximation of a time series, called TAX, is proposed to achieve highly accurate time series classification. l-TAX, an extended version of TAX that shows promising classification accuracy over TAX and other existing methods, is also proposed. We also provide a comprehensive comparison between TAX and l-TAX, and discuss the benefits of both methods. Both TAX and l-TAX transform a time series into a textual structure using existing document retrieval methods and bioinformatics algorithms. In TAX, a time series is represented as a document-like structure, whereas l-TAX uses a sequence of textual symbols. This paper provides a comprehensive overview of the textual approximation and the techniques used by TAX and l-TAX.
    Download PDF (958K)
  • Kwanho KIM, Josué OBREGON, Jae-Yoon JUNG
    Article type: LETTER
    2014 Volume E97.D Issue 4 Pages 811-814
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
With the recent growth of online social network services such as Facebook and Twitter, people can easily share information with each other by writing posts or commenting on others' posts. In this paper, we first suggest a method of discovering the information flows of posts on Facebook and their underlying contexts by incorporating process mining and text mining techniques. Based on comments collected from Facebook, the experimental results illustrate how the proposed method can be applied to analyze the information flows and contexts of posts on social network services.
    Download PDF (1027K)
  • Tsukasa OMOTO, Koji EGUCHI, Shotaro TORA
    Article type: LETTER
    2014 Volume E97.D Issue 4 Pages 815-820
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
The hierarchical Dirichlet process (HDP) can provide a nonparametric prior for a mixture model with grouped data, where mixture components are shared across groups. However, the computational cost is generally very high in terms of both time and space complexity, so developing a method for fast inference of HDP remains a challenge. In this paper, we assume a symmetric multiprocessing (SMP) cluster, which has been widely used in recent years. To speed up the inference on an SMP cluster, we explore hybrid two-level parallelization of the Chinese restaurant franchise sampling scheme for HDP, especially focusing on its application to topic modeling. The methods we developed, Hybrid-AD-HDP and Hybrid-Diff-AD-HDP, make better use of SMP clusters, resulting in faster HDP inference. While conventional parallel algorithms with a full message-passing interface do not benefit from SMP clusters due to higher communication costs, the proposed hybrid parallel algorithms have lower communication costs and make better use of the computational resources.
    Download PDF (1479K)
Regular Section
  • Chang-Yong LEE, Yong-Jin PARK
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 4 Pages 821-829
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In this paper, we apply a mutation operation based on a multivariate Cauchy distribution to fast evolutionary programming and analyze its effect on various function optimizations. Conventional fast evolutionary programming incorporates the univariate Cauchy mutation in order to overcome the slow convergence rate of the canonical Gaussian mutation. For a mutation of n variables, while the conventional method utilizes n independent random variables from a univariate Cauchy distribution, the proposed method adopts n mutually dependent random variables that satisfy a multivariate Cauchy distribution (a sampling sketch follows this entry). Owing to the mutual dependence among variables, the multivariate Cauchy distribution naturally has higher probabilities of generating random variables in inter-variable regions than the univariate Cauchy distribution. This implies that the multivariate Cauchy random variable enhances the search capability, especially for a large number of correlated variables, and, as a result, is more appropriate for optimization problems characterized by interdependence among variables. In this sense, the proposed mutation possesses the advantages of both the univariate Cauchy and Gaussian mutations. The proposed mutation is tested on various types of real-valued function optimizations. We empirically find that the proposed mutation outperforms the conventional Cauchy and Gaussian mutations in the optimization of functions having correlations among variables, whereas the conventional mutations show better performance on functions of uncorrelated variables.
    Download PDF (516K)
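As referenced above, a multivariate Cauchy sample can be drawn as a multivariate t variate with one degree of freedom: a Gaussian vector scaled by the inverse square root of a shared chi-square(1) variable, which is what makes the components mutually dependent. A sketch with an identity scale matrix (an assumption; the paper's scale settings may differ):

```python
# Multivariate Cauchy mutation: the shared chi-square divisor couples the
# components, giving the mutual dependence the abstract describes.
import numpy as np

def multivariate_cauchy(n_vars, rng):
    z = rng.standard_normal(n_vars)
    u = rng.chisquare(df=1)      # one shared divisor -> dependent components
    return z / np.sqrt(u)        # heavy-tailed multivariate t with df = 1

def mutate(parent, rng, scale=1.0):
    return parent + scale * multivariate_cauchy(parent.size, rng)

rng = np.random.default_rng(0)
parent = np.zeros(5)
print(mutate(parent, rng))
```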
  • Rubing HUANG, Dave TOWEY, Jinfu CHEN, Yansheng LU
    Article type: PAPER
    Subject area: Software Engineering
    2014 Volume E97.D Issue 4 Pages 830-841
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Combinatorial interaction testing has been well studied in recent years and has been widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults caused by parameter interactions. Due to constraints in practical applications (e.g., limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed, and many strategies have been proposed to guide interaction test suite prioritization. It is therefore important to be able to evaluate the different interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ), which assesses the rate at which a given interaction test sequence S covers the interactions of a strength λ (the level of interaction among parameters); an illustrative computation follows this entry. However, APCCλ has two drawbacks: firstly, it imposes two requirements (that all test cases in S be executed, and that all possible λ-wise parameter-value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, so it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. To overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can distinguish different interaction test sequences and hence can be used to compare different test prioritization strategies.
    Download PDF (1224K)
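The illustrative computation promised above: for each prefix of an interaction test sequence, measure the fraction of all λ-wise parameter-value combinations covered so far, then average over positions. This follows the general APCC intuition; the published formula's exact normalization may differ:

```python
# Rate-of-coverage curve for an interaction test sequence: fraction of
# lambda-wise parameter-value combinations covered after each test.
from itertools import combinations, product

def coverage_curve(tests, values_per_param, lam):
    n_params = len(values_per_param)
    all_combos = set()
    for idx in combinations(range(n_params), lam):
        for vals in product(*(range(values_per_param[i]) for i in idx)):
            all_combos.add((idx, vals))
    covered, curve = set(), []
    for test in tests:
        for idx in combinations(range(n_params), lam):
            covered.add((idx, tuple(test[i] for i in idx)))
        curve.append(len(covered) / len(all_combos))
    return curve

tests = [(0, 0, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1)]
curve = coverage_curve(tests, values_per_param=(2, 2, 2), lam=2)
print(curve, "average:", sum(curve) / len(curve))
```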
  • Koichi MORIYAMA, Simón Enrique ORTIZ BRANCO, Mitsuhiro MATSUMOT ...
    Article type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 4 Pages 842-851
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    In standard fighting videogames, users usually prefer playing against other users rather than against machines because opponents controlled by machines are in a rut and users can memorize their behaviors after repetitive plays. On the other hand, human players adapt to each other's behaviors, which makes fighting videogames interesting. Thus, in this paper, we propose an artificial agent for a fighting videogame that can adapt to its users, allowing users to enjoy the game even when playing alone. In particular, this work focuses on combination attacks, or combos, that give great damage to the opponent. The agent treats combos independently, i.e., it is composed of a subagent for predicting combos the user executes, that for choosing combos the agent executes, and that for controlling the whole agent. Human users evaluated the agent compared to static opponents, and the agent received minimal negative ratings.
    Download PDF (1945K)
  • Amir Masoud GHAREHBAGHI, Masahiro FUJITA
    Article type: PAPER
    Subject area: Dependable Computing
    2014 Volume E97.D Issue 4 Pages 852-863
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    This paper presents a method for automatic rectification of design bugs in processors. Given a golden sequential instruction-set architecture model of a processor and its erroneous detailed cycle-accurate model at the micro-architecture level, we perform symbolic simulation and property checking combined with concrete simulation iteratively to detect the buggy location and its corresponding fix. We have used the truth-table model of the function that is required for correction, which is a very general model. Moreover, we do not represent the truth-table explicitly in the design. We use, instead, only the required minterms, which are obtained from the output of our backend formal engine. This way, we avoid adding any new variable for representing the truth-table. Therefore, our correction model is scalable to the number of inputs of the truth-table that could grow exponentially. We have shown the effectiveness of our method on a complex out-of-order superscalar processor supporting atomic execution of instructions. Our method reduces the model size for correction by 6.0x and total correction time by 12.6x, on average, compared to our previous work.
    Download PDF (631K)
  • Sotarat THAMMABOOSADEE, Bunthit WATANAPA, Jonathan H. CHAN, Udom SILPA ...
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 4 Pages 864-875
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    A two-stage classifier is proposed that identifies criminal charges and a range of punishments given a set of case facts and attributes. Our supervised-learning model focuses only on the offences against life and body section of the criminal law code of Thailand. The first stage identifies a set of diagnostic issues from the case facts using a set of artificial neural networks (ANNs) modularized in hierarchical order. The second stage extracts a set of legal elements from the diagnostic issues by employing a set of C4.5 decision tree classifiers. These linked modular networks of ANNs and decision trees form an effective system in terms of determining power and the ability to trace or infer the relevant legal reasoning behind the determination. Isolated and system-integrated experiments are conducted to measure the performance of the proposed system. The overall accuracy of the integrated system can exceed 90%. An actual case is also demonstrated to show the effectiveness of the proposed system.
    Download PDF (1677K)
  • Chongjing SUN, Hui GAO, Junlin ZHOU, Yan FU, Li SHE
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 4 Pages 876-883
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    With the distributed data mining technique having been widely used in a variety of fields, the privacy preserving issue of sensitive data has attracted more and more attention in recent years. Our major concern over privacy preserving in distributed data mining is the accuracy of the data mining results while privacy preserving is ensured. Corresponding to the horizontally partitioned data, this paper presents a new hybrid algorithm for privacy preserving distributed data mining. The main idea of the algorithm is to combine the method of random orthogonal matrix transformation with the proposed secure multi-party protocol of matrix product to achieve zero loss of accuracy in most data mining implementations.
    Download PDF (534K)
  • Takuto NAITO, Keisuke YAMAZAKI
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 4 Pages 884-892
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Linear dynamical systems are basic state space models that deal with underlying system dynamics on the basis of linear state space equations (shown after this entry). When the model is employed for time-series data analysis, system identification, which detects the dimension of the hidden state variables, is one of the most important tasks. Recently, it has been found that the model has singularities in the parameter space, which implies that analysis of the adverse effects of these singularities is necessary for precise identification. However, the singularities in the models have not been thoroughly studied. A previous work dealt with the simplest case, in which the hidden state and observation variables are both one-dimensional. The present paper extends the setting to general dimensions and reveals the structure of the singularities more rigorously. The results provide the asymptotic forms of the generalization error and the marginal likelihood, which are often used as criteria for system identification.
    Download PDF (290K)
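For reference, the linear state space equations underlying this model take the standard form below (Gaussian noise is assumed here; this is the textbook parameterization, not necessarily the paper's exact one):

```latex
x_{t+1} = A x_t + w_t, \qquad w_t \sim \mathcal{N}(0, Q), \\
y_t = C x_t + v_t, \qquad v_t \sim \mathcal{N}(0, R),
```

where x_t is the hidden state, whose dimension is the target of system identification, and y_t is the observation.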
  • Tomoko KOJIRI, Naoya IWASHITA
    Article type: PAPER
    Subject area: Educational Technology
    2014 Volume E97.D Issue 4 Pages 893-900
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
The objective of our research is to develop a support system for creating presentation speech, especially speech that explains the relations between two slides (complementary speech). Complementary speech is required between slides whose relations are difficult to understand from their contents, such as texts, figures, and tables. If presenters could notice the relations between created slides as audiences perceive them, they could prepare appropriate complementary speech in the right places. To make presenters notice slides where complementary speech is needed, our system analyzes relations between slides based on their texts and visualizes them. Four slide relations are defined, and a method for detecting these relations from slide texts is proposed. The analyzed relations are then arranged in a two-dimensional space that represents the sequential and inclusive relations of their topics. The experimental results showed that most detected slide relations matched what examinees understood, and that visualization of slide relations was useful in creating complementary speech, especially for less-experienced presenters.
    Download PDF (1861K)
  • Seng KHEANG, Kouichi KATSURADA, Yurie IRIBE, Tsuneo NITTA
    Article type: PAPER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 4 Pages 901-910
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
To achieve high-quality speech synthesis systems, data-driven grapheme-to-phoneme (G2P) conversion is usually used to generate the phonetic transcription of out-of-vocabulary (OOV) words. To improve the performance of G2P conversion, this paper deals with the problem of conflicting phonemes, where an input grapheme can, in the same context, produce many possible output phonemes at the same time. To this end, we propose a two-stage neural network-based approach that converts the input text to phoneme sequences in the first stage and then predicts each output phoneme in the second stage using the phonemic information obtained. The first-stage neural network is fundamentally implemented as a many-to-many mapping model for automatic conversion of words to phoneme sequences, while the second stage uses a combination of the obtained phoneme sequences to predict the output phoneme corresponding to each input grapheme in a given word. We evaluate the performance of this approach using the American English pronunciation dictionary known as the auto-aligned CMUDict corpus [1]. In terms of phoneme and word accuracy on OOV words, comparison with several baseline approaches shows that our proposed approach improves on the previous one-stage neural network-based approach for G2P conversion. Comparison with another existing approach indicates that ours provides higher phoneme accuracy but lower word accuracy on a general dataset, and slightly higher phoneme and word accuracy on a selection of words containing more than one phoneme conflict.
    Download PDF (2206K)
  • Narpendyah Wisjnu ARIWARDHANI, Masashi KIMURA, Yurie IRIBE, Kouichi KA ...
    Article type: PAPER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 4 Pages 911-918
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In this paper, we propose voice conversion (VC) based on a mapping from articulatory features (AF) to vocal-tract parameters (VTP). An artificial neural network (ANN) is applied to map AF to VTP and to convert a speaker's voice to a target speaker's voice. The proposed system is not only text-independent VC, needing no parallel utterances between source and target speakers, but can also be used for an arbitrary source speaker. This means that our approach does not require source-speaker data to build the VC model. We also focus on the case of a small amount of target-speaker training data. For comparison, a baseline system based on the Gaussian mixture model (GMM) approach is constructed. The experimental results for a small amount of training data show that the converted voice of our approach is intelligible and carries the speaker individuality of the target speaker.
    Download PDF (1050K)
  • Qing-Ge JI, Zhi-Feng TAN, Zhe-Ming LU, Yong ZHANG
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 4 Pages 919-927
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In recent years, with the popularization of video collection devices and the development of the Internet, it has become easy to copy original digital videos and distribute illegal copies quickly through the Internet. Upholding copyright law has become a critical task that requires a technical solution, so copy detection, or video identification, is an increasingly important and challenging problem. The problem addressed here is to identify a given video clip in a given set of video sequences. In this paper, an extension to the video identification approach based on video tomography is presented. First, the feature extraction process is modified to enhance the reliability of the shot signature while keeping its size unchanged. Then, a new similarity measurement between two shot signatures is proposed to address the problem the original approach encounters when the query shot is short. In addition, the query scope is extended from a single shot to a clip (several consecutive shots) by defining a new similarity between two clips and describing a search algorithm that saves much of the computation cost. Experimental results show that the proposed approach is more suitable than the original for identifying short shots. The clip query approach performs well in the experiments and also shows strong robustness to data loss.
    Download PDF (1245K)
  • Gibran BENITEZ-GARCIA, Gabriel SANCHEZ-PEREZ, Hector PEREZ-MEANA, Keit ...
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 4 Pages 928-935
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
This paper presents a facial expression recognition algorithm based on segmentation of a face image into four facial regions (eyes-eyebrows, forehead, mouth, and nose). In order to unify the different results obtained from facial region combinations, a modal value approach that takes the most frequent decision of the classifiers is proposed (a generic majority-vote sketch follows this entry). The robustness of the algorithm is also evaluated under partial occlusion, using four different types of occlusion (half left/right, eyes, and mouth occlusion). The proposed method employs a sub-block eigenphases algorithm that uses the phase spectrum and principal component analysis (PCA) for feature vector estimation, which is fed to a support vector machine (SVM) for classification. Experimental results show that the modal value approach improves the average recognition rate to more than 90%, and performance can be kept high even under partial occlusion by excluding occluded parts from the feature extraction process.
    Download PDF (2362K)
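The majority-vote sketch promised above: the modal value approach amounts to taking the most frequent decision among the per-region classifiers. Tie handling here (first-seen wins) is an assumption:

```python
# Generic majority vote over per-region classifier decisions, in the
# spirit of the "modal value approach" described in the entry above.
from collections import Counter

def modal_decision(region_predictions):
    """region_predictions: expression labels from the per-region SVMs,
    e.g. ['happy', 'happy', 'neutral', 'happy']."""
    return Counter(region_predictions).most_common(1)[0][0]

print(modal_decision(["happy", "happy", "neutral", "surprise"]))  # 'happy'
```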
  • Qingyi GU, Abdullah AL NOMAN, Tadayoshi AOYAMA, Takeshi TAKAKI, Idaku ...
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 4 Pages 936-950
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In this paper, we present a high frame rate (HFR) vision system that can automatically control its exposure time by executing brightness histogram-based image processing in real time at a high frame rate. Our aim is to obtain high-quality HFR images for robust image processing of high-speed phenomena even under dynamically changing illumination, such as lamps flickering at 100 Hz, corresponding to an AC power supply at 50/60 Hz. Our vision system can simultaneously calculate a 256-bin brightness histogram for an 8-bit gray image of 512×512 pixels at 2000 fps by implementing a brightness histogram calculation circuit module as parallel hardware logic on an FPGA-based high-speed vision platform. Based on the HFR brightness histogram calculation, our method realizes automatic exposure (AE) control of 512×512 images at 2000 fps using our proposed AE algorithm (a simplified sketch follows the entry). The proposed AE algorithm maximizes the number of pixels in the effective range of the brightness histogram, excluding much darker and brighter pixels, to improve the dynamic range of the captured image without over- and under-exposure. The effectiveness of our HFR system with AE control is evaluated according to experimental results for several scenes with illumination flickering at 100 Hz, which is too fast for the human eye to perceive.
    Download PDF (6539K)
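A much-simplified sketch of the histogram-driven auto-exposure idea: count the pixels outside an "effective" brightness band and nudge the exposure time to keep the over- and under-exposed fractions small. The thresholds and gain are illustrative assumptions, and the paper's FPGA implementation is not reflected here:

```python
# Histogram-based auto-exposure step: shorten exposure when too many
# pixels saturate, lengthen it when too many are dark.
import numpy as np

def update_exposure(image, exposure, low=16, high=239, max_bad=0.01, gain=1.1):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    n = image.size
    under = hist[:low].sum() / n      # fraction of too-dark pixels
    over = hist[high + 1:].sum() / n  # fraction of too-bright pixels
    if over > max_bad:
        return exposure / gain        # too many saturated pixels: expose less
    if under > max_bad:
        return exposure * gain        # too many dark pixels: expose more
    return exposure                   # histogram mostly in effective range

frame = np.random.default_rng(0).integers(0, 256, size=(512, 512), dtype=np.uint8)
print(update_exposure(frame, exposure=0.5))
```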
  • Shun UMETSU, Akinobu SHIMIZU, Hidefumi WATANABE, Hidefumi KOBATAKE, Sh ...
    Article type: PAPER
    Subject area: Biological Engineering
    2014 Volume E97.D Issue 4 Pages 951-963
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    This paper presents a novel liver segmentation algorithm that achieves higher performance than conventional algorithms in the segmentation of cases with unusual liver shapes and/or large liver lesions. An L1 norm was introduced to the mean squared difference to find the most relevant cases with an input case from a training dataset. A patient-specific probabilistic atlas was generated from the retrieved cases to compensate for livers with unusual shapes, which accounts for liver shape more specifically than a conventional probabilistic atlas that is averaged over a number of training cases. To make the above process robust against large pathological lesions, we incorporated a novel term based on a set of “lesion bases” proposed in this study that account for the differences from normal liver parenchyma. Subsequently, the patient-specific probabilistic atlas was forwarded to a graph-cuts-based fine segmentation step, in which a penalty function was computed from the probabilistic atlas. A leave-one-out test using clinical abdominal CT volumes was conducted to validate the performance, and proved that the proposed segmentation algorithm with the proposed patient-specific atlas reinforced by the lesion bases outperformed the conventional algorithm with a statistically significant difference.
    Download PDF (3869K)
  • Young-Seok CHOI
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 4 Pages 964-967
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
A new type of affine projection (AP) algorithm that incorporates the sparsity condition of a system is presented. To exploit the sparsity of the system, a weighted l1-norm regularization term is imposed on the cost function of the AP algorithm. By minimizing the cost function with a subgradient calculus and choosing two distinct weightings for the l1-norm, two stochastic-gradient-based sparsity-regularized AP (SR-AP) algorithms are developed (a sketch of such an update follows this entry). Experimental results show that the SR-AP algorithms outperform their conventional AP counterparts in identifying sparse systems.
    Download PDF (619K)
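The update sketched below combines a standard affine projection step with a weighted l1 subgradient term that attracts small coefficients toward zero. The specific weighting 1/(1 + ε|w|) is one common choice in the sparsity-aware filtering literature, used here as an assumption rather than the letter's exact rule:

```python
# Sparsity-regularized affine projection update: standard AP step plus a
# zero-attracting weighted-l1 subgradient term.
import numpy as np

def sr_ap_update(w, U, d, mu=0.5, delta=1e-3, rho=1e-4, eps=10.0):
    """U: (n_taps, K) matrix of the K most recent input vectors;
    d: (K,) desired outputs; w: (n_taps,) filter estimate."""
    e = d - U.T @ w                                     # a priori errors
    K = U.shape[1]
    w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(K), e)
    w = w - rho * np.sign(w) / (1.0 + eps * np.abs(w))  # sparsity attractor
    return w

rng = np.random.default_rng(0)
w_true = np.zeros(16)
w_true[[2, 9]] = [1.0, -0.5]                            # sparse system
w = np.zeros(16)
x = rng.standard_normal(2000)
for k in range(20, len(x)):
    U = np.stack([x[k - i - np.arange(16)] for i in range(4)], axis=1)
    d = U.T @ w_true + 0.01 * rng.standard_normal(4)
    w = sr_ap_update(w, U, d)
print(np.round(w, 2))  # should approach w_true
```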
  • Tomoya SAKAI, Masashi SUGIYAMA
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 4 Pages 968-971
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Squared-loss mutual information (SMI) is a robust measure of the statistical dependence between random variables. The sample-based SMI approximator called least-squares mutual information (LSMI) has been demonstrated to be useful in performing various machine learning tasks such as dimension reduction, clustering, and causal inference. The original LSMI approximates the pointwise mutual information using a kernel model, a linear combination of kernel basis functions located on paired data samples. Although LSMI was proved to achieve the optimal approximation accuracy asymptotically, its approximation capability is limited when the sample size is small, due to an insufficient number of kernel basis functions. Increasing the number of kernel basis functions can mitigate this weakness, but a naive implementation of this idea significantly increases the computation cost. In this article, we show that the computational complexity of LSMI with the multiplicative kernel model, which locates kernel basis functions on unpaired data samples so that the number of kernel basis functions is the sample size squared, is the same as that for the plain kernel model. We experimentally demonstrate that LSMI with the multiplicative kernel model is more accurate than with plain kernel models in small-sample cases, with only a mild increase in computation time.
    Download PDF (578K)
  • Ju Hee CHOI, Jong Wook KWAK, Seong Tae JHANG, Chu Shik JHON
    Article type: LETTER
    Subject area: Computer System
    2014 Volume E97.D Issue 4 Pages 972-975
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Filter caches have been studied as an energy-efficient solution. They achieve energy savings via selective access to the L1 cache, but severely decrease system performance. A filter cache system should therefore adopt components that balance execution delay against energy savings. In this letter, we analyze the legacy filter cache system and propose the Data Filter Cache with Partial Tag Cache (DFPC) as a new solution. The proposed DFPC scheme reduces the energy consumption of the L1 data cache without impairing system performance at all. Simulation results show that DFPC provides 46.36% energy savings without any performance loss.
    Download PDF (606K)
  • Ki-Hoon LEE
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2014 Volume E97.D Issue 4 Pages 976-980
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
As data volumes explode, data storage costs become a large fraction of total IT costs, and we can reduce these costs substantially by using compression. However, it is generally known that database compression is not suitable for write-intensive workloads. In this paper, we provide a comprehensive solution for improving the performance of compressed databases under write-intensive OLTP workloads. We find that storing data too densely in compressed pages incurs many future page splits, which require exclusive locks. In order to avoid lock contention, we reduce page splits by sacrificing a couple of percent of space savings: we reserve enough space in each compressed page for future record updates and prevent page merges that are prone to incur page splits in the near future. Experimental results using the TPC-C benchmark and MySQL/InnoDB show that our method gives 1.5 times higher throughput with 33% space savings compared with the uncompressed counterpart, and 1.8 times higher throughput with only 1% more space compared with the state-of-the-art compression method developed by Facebook.
    Download PDF (198K)
  • Guoqi LI
    Article type: LETTER
    Subject area: Dependable Computing
    2014 Volume E97.D Issue 4 Pages 981-983
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
Today's large and complicated safety-critical systems need to keep changing to accommodate ever-changing objectives and environments. Accordingly, runtime analysis for safe reconfiguration or evaluation is currently a hot topic in the field, and acquiring information about the external environment is crucial for runtime safety analysis. With the rapid development of web services, mobile networks, and ubiquitous computing, abundant real-time environmental information is available on the Internet. To integrate this public information into the runtime safety analysis of critical systems, this paper puts forward a framework that can be implemented with open-source, cross-platform modules and that, encouragingly, is applicable to various safety-critical systems.
    Download PDF (238K)
  • Yongjoo SHIN, Sihu SONG, Yunho LEE, Hyunsoo YOON
    Article type: LETTER
    Subject area: Dependable Computing
    2014 Volume E97.D Issue 4 Pages 984-988
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
This letter proposes a novel intrusion-tolerant system consisting of several virtual machines (VMs) that refresh the target system periodically and via live migration, monitoring many features of the VMs in order to identify and replace exhausted VMs. The proposed scheme provides adequate performance and dependability against denial-of-service (DoS) attacks. To show its efficiency and security, we conduct experiments on the CSIM20 simulator, which show a 22% improvement in response time in a normal situation and approximately a 77.83% improvement under heavy traffic, compared to results reported in the literature. The measured response times also show that the proposed scheme responds faster than other systems and maintains services even under heavy traffic.
    Download PDF (432K)
  • Qingbo WU, Jian XIONG, Bing LUO, Chao HUANG, Linfeng XU
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 4 Pages 989-992
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In this paper, we propose a novel joint rate-distortion optimization (JRDO) model for intra prediction coding. The spatial prediction dependency is exploited by modeling the distortion propagation with a linear fitting function, and a novel JRDO-based Lagrange multiplier (LM) is derived from this model. To adapt to the distortion propagation characteristics of different blocks, we also introduce a generalized multiple Lagrange multiplier (MLM) framework in which several candidate LMs are used in the RDO process. Experimental results show that our proposed JRDO-MLM scheme is superior to the H.264/AVC encoder.
    Download PDF (633K)
  • Leida LI, Hancheng ZHU, Jiansheng QIAN, Jeng-Shyang PAN
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 4 Pages 993-997
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
This letter presents a no-reference blocking artifact measure based on an analysis of color discontinuities in the YUV color space. Color shift and color disappearance are first analyzed in JPEG images. For color-shifting and color-disappearing areas, blocking artifact scores are obtained by computing the gradient differences across block boundaries in the U component and the Y component, respectively (a simplified single-channel sketch follows this entry). An overall quality score is then produced as the average of the local ones. Extensive simulations and comparisons demonstrate the efficiency of the proposed method.
    Download PDF (1184K)
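The single-channel sketch promised above: average the gradient magnitudes that fall on 8×8 block boundaries and compare them with the interior ones. Operating on one channel with plain gradient differences is a simplification; the letter combines Y and U components by region type:

```python
# Simplified no-reference blockiness score: boundary-vs-interior gradient
# difference across 8x8 block edges of a single channel.
import numpy as np

def blockiness(channel, block=8):
    c = channel.astype(float)
    grad = np.abs(np.diff(c, axis=1))              # horizontal gradients
    boundary = grad[:, block - 1::block]           # gradients across block edges
    inner = np.delete(grad, np.s_[block - 1::block], axis=1)
    return boundary.mean() - inner.mean()          # > 0 suggests visible blocking

# Synthetic "blocky" ramp: constant within 8-pixel blocks, jumps at edges.
img = np.tile(np.repeat(np.arange(0, 256, 32), 8)[None, :], (64, 1))
print(blockiness(img))  # large positive value for this blocky image
```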
  • Su-hyun LEE, Yong-jin JEONG
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 4 Pages 998-1000
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
An integral image holds sums of input image pixel values. It is mainly used to speed up box filter operations, such as Haar-like features (see the sketch after this entry). However, the large memory required for integral image data can be an obstacle in embedded environments with limited hardware, so an efficient method of storing the integral image is necessary. In this paper, we propose a memory size reduction method for the integral image. The method uses four types of image information: an integral image, a row integral image, a column integral image, and an input image. Using this method, integral image memory can be reduced by 42.6% for a 640×480 8-bit gray-scale input image, and the same idea can be applied to larger images.
    Download PDF (525K)
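The sketch promised above shows the basic integral image and the four-lookup box sum it enables; the letter's memory-reduction scheme (mixing integral, row-integral, column-integral, and raw values) is not reproduced here:

```python
# Integral image: entry (y, x) holds the sum of all pixels above and to
# the left, so any box sum needs only four lookups.
import numpy as np

def integral_image(img):
    return img.astype(np.uint32).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Inclusive box sum via four corner lookups on the integral image."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2), img[1:3, 1:3].sum())  # both 30
```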
  • Hongliang XU, Fei ZHOU, Fan YANG, Qingmin LIAO
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 4 Pages 1001-1003
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    We propose a parameterized multisurface fitting method for multi-frame super-resolution (SR) processing. A parameter assumed for the unknown high-resolution (HR) pixel is used for multisurface fitting. Each surface fitted at each low-resolution (LR) pixel is an expression of the parameter. Final SR result is obtained by fusing the sampling values from these surfaces in the maximum a posteriori fashion. Experimental results demonstrate the superiority of the proposed method.
    Download PDF (305K)
  • Lijian ZHOU, Wanquan LIU, Zhe-Ming LU, Tingyuan NIE
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 4 Pages 1004-1007
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
In this Letter, a new face recognition approach based on curvelets and local ternary patterns (LTP) is proposed. We first observe that the curvelet transform is a new anisotropic multi-resolution transform that can efficiently represent edge discontinuities in face images, and that the LTP operator is one of the best texture descriptors for characterizing face image details (a minimal LTP sketch follows this entry). This motivated us to decompose the image using the curvelet transform and extract features in the different frequency bands. As the properties of the curvelet transform reveal, the highest frequency band carries noisy information, so we drop it from feature selection. The lowest frequency band mainly contains coarse image information, so we process it more precisely, extracting the face's details using LTP. The remaining frequency bands mainly represent edge information, and we normalize them to obtain explicit structure information. All the extracted features are then put together as the elementary feature set, whose dimension we reduce using PCA before applying the sparse sensing technique for face recognition. Experiments on the Yale database, the extended Yale B database, and the CMU PIE database show the effectiveness of the proposed method.
    Download PDF (1360K)
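The minimal LTP sketch promised above, for a single 3×3 neighborhood: neighbors within ±t of the center code to 0, above to +1, below to -1, and the ternary result splits into two binary patterns (a common practice). The neighborhood ordering is an assumption:

```python
# Local ternary pattern for one 3x3 neighborhood, split into the usual
# "upper" (+1 positions) and "lower" (-1 positions) binary codes.
import numpy as np

def ltp_codes(patch, t=5):
    """patch: 3x3 array; returns (upper, lower) LTP bit patterns."""
    center = patch[1, 1]
    neighbors = patch.ravel()[[0, 1, 2, 5, 8, 7, 6, 3]]  # clockwise ring
    ternary = np.where(neighbors > center + t, 1,
               np.where(neighbors < center - t, -1, 0))
    upper = sum(int(bit) << i for i, bit in enumerate(ternary == 1))
    lower = sum(int(bit) << i for i, bit in enumerate(ternary == -1))
    return upper, lower

patch = np.array([[90, 100, 110],
                  [80, 100, 120],
                  [70, 100, 130]])
print(ltp_codes(patch))
```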
  • Zhongying HU, Kiichi URAHAMA
    Article type: LETTER
    Subject area: Computer Graphics
    2014 Volume E97.D Issue 4 Pages 1008-1010
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    We propose a method for downsizing line pictures to generate pixel line arts. In our method, topological properties such as connectivity of lines and segments are preserved by allowing slight distortion in the form of objects in input images. When input line pictures are painted with colors, the number of colors is preserved by our method.
    Download PDF (441K)
  • Jingjing GAO, Mei XIE, Ling MAO
    Article type: LETTER
    Subject area: Biological Engineering
    2014 Volume E97.D Issue 4 Pages 1011-1015
    Published: April 01, 2014
    Released on J-STAGE: April 01, 2014
    JOURNAL FREE ACCESS
    k-NN classification has been applied to classify normal tissues in MR images. However, the intensity inhomogeneity of MR images forces conventional k-NN classification into significant misclassification errors. This letter proposes a new interleaved method, which combines k-NN classification and bias field estimation in an energy minimization framework, to simultaneously overcome the limitation of misclassifications in conventional k-NN classification and correct the bias field of observed images. Experiments demonstrate the effectiveness and advantages of the proposed algorithm.
    Download PDF (517K)