IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E95.D, Issue 9
Special Section on Software Reliability Engineering
  • Tadashi DOHI
    2012 Volume E95.D Issue 9 Pages 2167-2168
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Download PDF (67K)
  • Akito MONDEN, Tomoko MATSUMURA, Mike BARKER, Koji TORII, Victor R. BAS ...
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2169-2182
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    This paper customizes Goal/Question/Metric (GQM) project monitoring models for various projects and organizations to take advantage of the data from the software tool EPM and to allow tailoring of the interpretation models based on the context and success criteria of each project and organization. The basic idea is to build less concrete models that do not include explicit baseline values for interpreting metric values. Instead, we add hypothesis and interpretation layers to the models to help people on different projects make decisions in their own context. We applied the models to two industrial projects and found that our less concrete models could successfully identify typical problems in software projects.
    Download PDF (3068K)
  • Kenneth LIND, Rogardt HELDAL
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2183-2192
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Accurate estimation of Software Code Size is important for developing cost-efficient embedded systems. The Code Size affects the amount of system resources needed, such as ROM and RAM memory and processing capacity. In our previous work, we estimated Code Size based on CFP (COSMIC Function Points) to within 15% accuracy, with the purpose of deciding how much ROM memory to fit into products under high cost pressure. However, our manual CFP measurement process would require 2.5 man-years to estimate the ROM size required in a typical car. In this paper, we investigate how the manual effort involved in Code Size estimation can be minimized. We define a UML Profile capturing all information needed for estimation of Code Size, and develop a tool for automated estimation of Code Size based on CFP. A case study shows how UML models save manual effort in a realistic case.
    Download PDF (1390K)
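The abstract above does not give the estimation model itself; a common way to turn CFP counts into a code size figure is a linear model calibrated on measured components. The sketch below is a hypothetical illustration of that idea only — the calibration data and coefficients are made up, not taken from the paper.

```python
# Hypothetical sketch: estimating code size (bytes of ROM) from COSMIC
# Function Points with a least-squares linear model calibrated on measured
# components. All numbers below are illustrative, not from the paper.

def fit_linear(points):
    """Least-squares fit of size = a * cfp + b over (cfp, size) pairs."""
    n = len(points)
    sx = sum(c for c, _ in points)
    sy = sum(s for _, s in points)
    sxx = sum(c * c for c, _ in points)
    sxy = sum(c * s for c, s in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def estimate_size(cfp, a, b):
    return a * cfp + b

# Calibration set: (CFP, measured code size in bytes) -- made-up numbers.
calibration = [(10, 2100), (25, 5200), (40, 8100), (60, 12300)]
a, b = fit_linear(calibration)
print(round(estimate_size(50, a, b)))
```

With the model calibrated once, estimating a new component reduces to one multiply-add per CFP count, which is what removes the manual effort.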
  • Hisashi MIYAZAKI, Tomoyuki YOKOGAWA, Sousuke AMASAKI, Kazuma ASADA, Yo ...
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2193-2201
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    During a software development phase in which a product is progressively elaborated, it is difficult to guarantee that the refined product retains its original behaviors. In this paper, we propose a method to detect refinement errors in UML sequence diagrams using LTSA (Labeled Transition System Analyzer). The method integrates multiple sequence diagrams into a single sequence diagram using hMSC (high-level Message Sequence Charts). It then translates the diagram into an FSP representation, the input language of LTSA. The method also supports some combined fragment operators in the UML 2.0 specification. We applied the method to several examples of refined sequence diagrams and checked the correctness of the refinements. As a result, we confirmed that the method can detect refinement errors in practical time.
    Download PDF (544K)
  • Anakorn JONGYINDEE, Masao OHIRA, Akinori IHARA, Ken-ichi MATSUMOTO
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2202-2210
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    There are many roles to play in the bug fixing process in open source software development. A developer called a “committer”, who has permission to submit a patch to the software repository, plays a major role in this process and holds a key to the success of the project. Despite the importance of committers' activities, we suspect that committers can sometimes make mistakes that have consequences for the bug fixing process (e.g., bugs reopened after fixing). Our research focuses on studying the consequences of each committer's activities for this process. We collected each committer's historical data from the Eclipse Platform's bug tracking system and version control system and evaluated their activities using bug status in the bug tracking system and commit logs in the version control system. We then looked deeper into each committer's characteristics to see why some committers tend to make mistakes more often than others.
    Download PDF (1424K)
  • Fevzi BELLI, Mutlu BEYAZIT, Tomohiko TAKAGI, Zengo FURUKAWA
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2211-2218
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    A model-based mutation testing (MBMT) approach enables negative testing, where test cases are generated from mutant models containing intentional faults. This paper introduces an alternative MBMT framework using pushdown automata (PDA), which correspond to context-free (type-2) languages. There are two key ideas in this study. One is to gain stronger representational power to capture features whose behavior depends on previous states of the software under test (SUT). The other is to make use of a relatively small test set and concentrate on suspicious parts of the SUT by using the MBMT approach. Thus, the proposed framework includes (1) a novel usage of PDA for modeling the SUT, (2) novel mutation operators for generating PDA mutants, (3) a novel coverage criterion, and (4) an algorithm to generate negative test cases from mutant PDA. A case study validates the approach and discusses its characteristics and limitations.
    Download PDF (718K)
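As a hedged illustration of the MBMT idea above (not the paper's framework, models, or operators), the sketch below runs a tiny deterministic PDA for the language a^n b^n, applies one hypothetical mutation operator that drops the final stack-emptiness check, and shows a negative test input that kills the mutant.

```python
# Illustrative sketch only: a pushdown automaton for a^n b^n, one made-up
# mutation operator (ignore stack emptiness on accept), and a negative test
# case that distinguishes original from mutant.

def run_pda(word, check_stack_on_accept=True):
    """Accept a^n b^n (n >= 1) using an explicit stack."""
    stack, state = [], "read_a"
    for ch in word:
        if state == "read_a" and ch == "a":
            stack.append("A")
        elif state in ("read_a", "read_b") and ch == "b" and stack:
            stack.pop()
            state = "read_b"
        else:
            return False
    empty_ok = not stack if check_stack_on_accept else True
    return state == "read_b" and empty_ok

# Original model vs. mutant (mutation: final stack check removed).
original = lambda w: run_pda(w)
mutant = lambda w: run_pda(w, check_stack_on_accept=False)

# "aab" is a negative test case: the original rejects it, the mutant
# accepts it, so this input kills the mutant.
assert original("aabb") and mutant("aabb")
assert not original("aab") and mutant("aab")
```

The stack is what a finite-state model cannot express, which is the representational-power argument the abstract makes for PDA over FSM-based mutation testing.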
  • Bo ZHOU, Hiroyuki OKAMURA, Tadashi DOHI
    Type: PAPER
    2012 Volume E95.D Issue 9 Pages 2219-2226
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    This paper proposes a test case prioritization technique for regression testing. The large test suite to be executed in regression testing often incurs a large testing cost, so it is important to reduce the number of test cases executed by prioritizing the test sequence. We apply the Markov chain Monte Carlo random testing (MCMC-RT) scheme, a promising approach to effectively generating test cases in the framework of random testing. To apply MCMC-RT to test case prioritization, we consider a coverage-based distance and develop an MCMC-RT prioritization algorithm using this distance. The MCMC-RT prioritization technique is consistently comparable to coverage-based adaptive random testing (ART) prioritization techniques while incurring much less time cost.
    Download PDF (263K)
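The MCMC-RT algorithm itself is not reproduced here; as a sketch of the coverage-based distance the abstract mentions, the following uses Jaccard distance between coverage sets with a farthest-first greedy ordering (closer in spirit to the ART prioritization used as the baseline). All names and data are illustrative.

```python
# Hedged sketch: coverage-based distance plus a farthest-first greedy
# prioritization. This is a baseline in the ART style, not the paper's
# MCMC-RT procedure.

def coverage_distance(a, b):
    """Jaccard distance between two coverage sets (1 - overlap ratio)."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def prioritize(suite):
    """Order tests so each pick is far (in coverage) from those already chosen."""
    remaining = dict(suite)  # test name -> set of covered branches
    order = [max(remaining, key=lambda t: len(remaining[t]))]
    del remaining[order[0]]
    while remaining:
        nxt = max(remaining, key=lambda t: min(
            coverage_distance(remaining[t], suite[p]) for p in order))
        order.append(nxt)
        del remaining[nxt]
    return order

suite = {
    "t1": {"b1", "b2", "b3"},
    "t2": {"b1", "b2"},
    "t3": {"b4", "b5"},
}
print(prioritize(suite))
```

t3 is scheduled before t2 because its coverage is disjoint from t1's, so early tests spread over the coverage space — the property a distance-based prioritization is after.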
  • Soumen MAITY
    Type: LETTER
    2012 Volume E95.D Issue 9 Pages 2227-2231
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    In most software development environments, the time, computing, and human resources needed to perform the testing of a component are strictly limited. To deal with such situations, this paper proposes a method of creating the best possible test suite (covering the maximum number of 3-tuples) within a fixed number of test cases.
    Download PDF (301K)
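As a rough sketch of the underlying objective (not the paper's construction), the following greedily picks test cases that cover the most not-yet-covered 3-way value combinations, stopping when a fixed budget of test cases is reached. The parameter model and candidate counts are illustrative.

```python
# Hedged sketch: budget-constrained greedy coverage of 3-way combinations.
import itertools
import random

def three_tuples(test):
    """All ((factor, value), ...) triples covered by one test case."""
    indexed = list(enumerate(test))
    return set(itertools.combinations(indexed, 3))

def greedy_suite(levels, budget, candidates_per_step=50, seed=0):
    """Pick `budget` test cases, each maximizing newly covered 3-tuples."""
    rng = random.Random(seed)
    covered, suite = set(), []
    for _ in range(budget):
        cands = [tuple(rng.randrange(v) for v in levels)
                 for _ in range(candidates_per_step)]
        best = max(cands, key=lambda t: len(three_tuples(t) - covered))
        suite.append(best)
        covered |= three_tuples(best)
    return suite, len(covered)

# 5 binary factors, budget of 6 test cases; 10 positions x 8 values = 80 triples.
suite, n_covered = greedy_suite([2] * 5, budget=6)
total = len(list(itertools.combinations(range(5), 3))) * 2 ** 3
print(n_covered, "/", total)
```

Each test case covers exactly C(5,3) = 10 triples here, so 6 tests can cover at most 60 of the 80; the greedy choice tries to get as close to that bound as possible.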
  • Youngsul SHIN, Woo Jin LEE
    Type: LETTER
    2012 Volume E95.D Issue 9 Pages 2232-2234
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    This letter proposes a method of reusing unit test cases, which characterize the internal behaviors of a called function, to enhance the capability of automatic test case generation. Existing test case generation tools have difficulty finding solutions for source code with deep call structures. In our approach, the complex call structure is simplified by reusing the unit test cases of called functions. As unit test cases represent the characteristics of a called function, the internal behaviors of called functions are replaced by their test cases. This approach can be applied to existing test tools to simplify the generation process and enhance their capabilities.
    Download PDF (233K)
Regular Section
  • Kyohei YAMAGUCHI, Yuya KORA, Hideki ANDO
    Type: PAPER
    Subject area: Computer System
    2012 Volume E95.D Issue 9 Pages 2235-2246
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    This paper evaluates the delay of the issue queue in a superscalar processor to aid microarchitectural design, where quick quantification of issue queue complexity is needed to consider the tradeoff between clock cycle time and instructions per cycle. Our study covers two aspects. First, we introduce banking of the tag RAM that comprises the issue queue to reduce the delay. Unlike for normal RAM, this is not straightforward, because of the uniqueness of the issue queue organization. Second, we explore and identify the correct critical path in the issue queue. In a previous study, the critical path delays of the issue queue's components were summed to obtain the issue queue delay, but this does not give the correct delay, because the critical paths of the components are not connected logically. In an evaluation assuming 32-nm LSI technology, we obtained the delays of issue queues with eight to 128 entries. Banking the tag RAM and identifying the correct critical path reduce the delay by up to 20% and 23% for 4- and 8-issue widths, respectively, compared with not banking the tag RAM and simply summing the critical path delay of each component.
    Download PDF (897K)
  • Yan LEI, Xiaoguang MAO, Ziying DAI, Dengping WEI
    Type: PAPER
    Subject area: Software Engineering
    2012 Volume E95.D Issue 9 Pages 2247-2257
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    At the software debugging stage, effective interaction between debugging engineers and fault localization techniques can greatly improve fault localization performance. However, most fault localization approaches ignore this interaction and merely utilize information from testing. Because testing and fault localization have different goals, the lack of interaction may lead to information inadequacy, which can substantially degrade fault localization performance. In addition, human work is costly and error-prone. It is therefore vital to study and simulate how debugging engineers apply their knowledge and experience to this interaction, so as to promote fault localization effectiveness and reduce their workload. This paper proposes an effective fault localization approach that simulates this interaction via feedback. Based on the results obtained from fault localization techniques, the approach uses test data generation techniques to automatically produce feedback for the fault localization techniques, and iterates this process until a specific stopping condition is satisfied. Experiments on two standard benchmarks demonstrate the significant improvement of our approach over a promising fault localization technique, namely spectrum-based fault localization.
    Download PDF (1486K)
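The approach above builds on spectrum-based fault localization (SBFL). As background, the following computes one common SBFL suspiciousness score (Ochiai) from a coverage spectrum and pass/fail results; this is a standard SBFL instance, not the paper's feedback loop, and the spectrum data is invented.

```python
# Background sketch: Ochiai suspiciousness over a coverage spectrum.
import math

def ochiai(spectrum, results):
    """spectrum[t] = statements covered by test t; results[t] = True if t failed."""
    failed = [t for t in spectrum if results[t]]
    total_failed = len(failed)
    stmts = set().union(*spectrum.values())
    scores = {}
    for s in stmts:
        ef = sum(1 for t in failed if s in spectrum[t])  # covered by failing tests
        ep = sum(1 for t in spectrum                      # covered by passing tests
                 if not results[t] and s in spectrum[t])
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return scores

# Invented spectrum: only t2 fails, and s3 is covered only by the failing test.
spectrum = {"t1": {"s1", "s2"}, "t2": {"s2", "s3"}, "t3": {"s1"}}
results = {"t1": False, "t2": True, "t3": False}
scores = ochiai(spectrum, results)
print(max(scores, key=scores.get))
```

Statements covered only by failing runs score highest, which is the ranking the feedback loop in the paper starts from and then refines with generated test data.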
  • Ki-Hoon LEE
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 9 Pages 2258-2264
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Considerable effort has been devoted to minimizing XPath queries under the assumption that a minimal query is faster than the original query. However, little attention has been paid to the validity of this assumption. In this paper, we provide a detailed analysis of the effectiveness of XPath query minimization and present an extensive experimental evaluation using six publicly available XQuery engines. To the best of our knowledge, this is the first work toward this objective. Experiments on real and synthetic data sets show that although the assumption is valid in some cases, the performance of the minimal query is often lower than or almost equal to that of the original query.
    Download PDF (916K)
  • Woei-Kae CHEN, Pin-Ying TU
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 9 Pages 2265-2276
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Report generation is one of the most important tasks for database and e-commerce applications. Current report tools typically provide a set of predefined components used to specify report layout and format. However, the available layout options are limited, and WYSIWYG formatting is not supported. This paper proposes a four-phase report generation process to overcome these problems. The first phase retrieves source tables from the database. The second phase reorganizes the layout of the source tables by transforming them into a set of new flat tables (in first normal form). The third phase restructures the flat tables into a nested table (the report) by specifying the report structure. The last phase formats the report with a WYSIWYG format editor supporting a number of formatting rules designed specifically for nested reports. Each phase of the proposed process supports visual programming, giving an easy-to-use user interface and allowing very flexible report layouts and formats. A visual end-user programming tool, called TPS, was developed to demonstrate the proposed process and show that reports with sophisticated layouts can be created without writing low-level report generation programs.
    Download PDF (3718K)
  • Gang WANG, Yaping LIN, Rui LI, Jinguo LI, Xin YAO, Peng LIU
    Type: PAPER
    Subject area: Information Network
    2012 Volume E95.D Issue 9 Pages 2277-2287
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    High-speed IP address lookup with fast prefix updates is essential for designing wire-speed packet forwarding routers. The development of optical fiber and 100 Gbps interface technologies has made IP address lookup the major bottleneck of high-performance networks. In this paper, we propose a novel structure named Compressed Multi-way Prefix Tree (CMPT), based on the B+ tree, to perform dynamic and scalable high-speed IP address lookup. Our contributions are to design a practical structure for high-speed IP address lookup suitable for both IPv4 and IPv6 addresses, and to develop efficient algorithms for dynamic prefix insertion and deletion. By investigating the relationships among routing prefixes, we arrange independent prefixes as the search indexes on the internal nodes of CMPT, and by leveraging a nested prefix compression technique, we encode all routing prefixes on the leaf nodes. For any IP address, longest prefix matching can be performed at the leaf nodes without backtracking. For a forwarding table with u independent prefixes, CMPT requires O(log_m u) search time and O(m log_m u) dynamic insertion and deletion time. Performance evaluations using real-life IPv4 forwarding tables show promising gains in lookup and dynamic update speeds compared with existing B-tree structures.
    Download PDF (985K)
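CMPT itself is not reproduced here; for reference, the following is a minimal binary-trie longest-prefix-match lookup over the same kind of forwarding table, the classical baseline such structures improve on. The prefixes and next hops are illustrative.

```python
# Reference sketch (not CMPT): binary-trie longest prefix match.
# Prefixes are given as bit strings; each prefix node stores a next hop.

class TrieNode:
    __slots__ = ("children", "hop")
    def __init__(self):
        self.children = [None, None]
        self.hop = None

def insert(root, prefix_bits, hop):
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.hop = hop

def longest_prefix_match(root, addr_bits):
    node, best = root, None
    for b in addr_bits:
        if node.hop is not None:
            best = node.hop        # remember the longest prefix seen so far
        node = node.children[int(b)]
        if node is None:
            return best
    return node.hop if node.hop is not None else best

root = TrieNode()
insert(root, "10", "A")        # 10*   -> next hop A
insert(root, "1011", "B")      # 1011* -> next hop B
print(longest_prefix_match(root, "10110000"))
print(longest_prefix_match(root, "10010000"))
```

A trie walks one bit per step (O(W) in the address width) and may backtrack to a shorter match; CMPT's point is to do the match at a leaf of a multi-way tree in O(log_m u) steps without backtracking.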
  • Ali NADIAN GHOMSHEH, Alireza TALEBPOUR
    Type: PAPER
    Subject area: Pattern Recognition
    2012 Volume E95.D Issue 9 Pages 2288-2297
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    In this paper, a new skin detection method using pixel color and image regional information, intended for objectionable image filtering, is proposed. The method consists of three stages: skin detection, feature extraction, and image classification. Skin detection is implemented in two steps. First, a Sinc function fitted to the skin color distribution in the Cb-Cr chrominance plane is used to detect pixels with skin color properties. Next, to exploit regional information, it is shown, based on the theory of color image reproduction, that the scattering of skin pixels in the RGB color space can be approximated by an exponential function. This function is used to extract the final, accurate skin map of the image. As objectionable image features, new shape and direction features, along with an area feature, are extracted. Finally, a Multi-Layer Perceptron trained with the best set of input features is used to filter images. Experimental results on a dataset of 1600 images show that the regional method improves the pixel-based skin detection rate by 10%. The final classification accuracy of 94.12% compares favorably with other methods.
    Download PDF (2194K)
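As a hedged illustration of the first stage only, the following scores a pixel with a Sinc-shaped function of its distance from a skin-tone center in the Cb-Cr plane. The center, scale, and threshold are made-up stand-ins, not the paper's fitted values.

```python
# Illustrative sketch: Sinc-shaped skin score in the Cb-Cr plane.
# All constants are assumptions for demonstration, not the fitted model.
import math

def rgb_to_cbcr(r, g, b):
    """Standard ITU-R BT.601 chrominance conversion."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_score(r, g, b, center=(100.0, 150.0), scale=0.05):
    """Normalized sinc of the chrominance distance from a skin-tone center."""
    cb, cr = rgb_to_cbcr(r, g, b)
    d = math.hypot(cb - center[0], cr - center[1])
    x = scale * d
    return math.sin(math.pi * x) / (math.pi * x) if x else 1.0

print(skin_score(220, 170, 150) > 0.5)   # skin-like tone
print(skin_score(30, 80, 200) > 0.5)     # blue tone, far from skin colors
```

The score decays quickly away from the center, so thresholding it gives the per-pixel skin map that the regional (RGB-space) stage then refines.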
  • Kazunori KOMATANI, Mikio NAKANO, Masaki KATSUMARU, Kotaro FUNAKOSHI, T ...
    Type: PAPER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 9 Pages 2298-2307
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    The optimal way to build speech understanding modules depends on the amount of training data available. When only a small amount of training data is available, effective allocation of the data is crucial to preventing overfitting of statistical methods. We have developed a method for allocating a limited amount of training data in accordance with the amount available. Our method exploits rule-based methods when the amount of data is small; these are included in our speech understanding framework based on multiple model combinations, i.e., multiple automatic speech recognition (ASR) modules and multiple language understanding (LU) modules. The method then allocates training data preferentially to the modules that dominate the overall performance of speech understanding. Experimental evaluation showed that our allocation method consistently outperforms baseline methods that use a single ASR module and a single LU module as the amount of training data increases.
    Download PDF (562K)
  • Welly NAPTALI, Masatoshi TSUCHIYA, Seiichi NAKAGAWA
    Type: PAPER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 9 Pages 2308-2317
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Out-of-vocabulary (OOV) words create serious problems for automatic speech recognition (ASR) systems. Not only are they misrecognized as in-vocabulary (IV) words with similar phonetics, but the error also causes further errors in nearby words. The language models (LMs) of most open-vocabulary ASR systems treat OOV words as a single entity, ignoring their linguistic information. In this paper we present a class-based n-gram LM that is able to deal with OOV words by treating each of them individually, without retraining all the LM parameters. OOV words are assigned to IV classes consisting of words with similar semantic meanings. The World Wide Web is used to acquire additional data for finding the relation between OOV and IV words. An evaluation based on adjusted perplexity and word error rate was carried out on the Wall Street Journal corpus. The results suggest that using multiple classes for OOV words is preferable to a single unknown class.
    Download PDF (375K)
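The class-based factorization that lets an OOV word inherit its class's n-gram statistics can be sketched as P(w | h) ≈ P(c(w) | h) · P(w | c(w)). The probabilities and vocabulary below are invented for illustration; they are not from the paper's models.

```python
# Illustrative sketch of a class-based bigram for an OOV word: the word
# inherits the history statistics of the IV class it is assigned to.
# All probabilities are made up for demonstration.

# P(class | previous word): estimated from IV training data.
p_class_given_hist = {("shares", "COMPANY"): 0.4, ("shares", "NUMBER"): 0.1}

# P(word | class): the OOV word "Acme" is given a share of class COMPANY.
p_word_given_class = {("Acme", "COMPANY"): 0.05}

def class_bigram(word, cls, history):
    """P(word | history) ~= P(cls | history) * P(word | cls)."""
    return (p_class_given_hist.get((history, cls), 0.0)
            * p_word_given_class.get((word, cls), 0.0))

p = class_bigram("Acme", "COMPANY", "shares")
print(p)   # 0.4 * 0.05
```

Only the membership entry for the new word is added; the class bigram table is untouched, which is why no full LM retraining is needed.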
  • Seung-Jin BAEK, Seung-Won JUNG, Hahyun LEE, Hui Yong KIM, Sung-Jea KO
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2012 Volume E95.D Issue 9 Pages 2318-2326
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    In this paper, an improved B-picture coding algorithm based on symmetric bi-directional motion estimation (ME) is proposed. In addition to the block match error between blocks in the forward and backward reference frames, the proposed method exploits the previously reconstructed template regions in the current and reference frames for bi-directional ME. The side match error between the predicted target block and its template is also employed to alleviate block discontinuities. To perform ME efficiently, an initial motion vector (MV) is adaptively derived by exploiting temporal correlations. Experimental results show that the number of generated bits is reduced by up to 9.31% when the proposed algorithm is employed as a new macroblock (MB) coding mode for the H.264/AVC standard.
    Download PDF (1904K)
  • Fengwei AN, Tetsushi KOIDE, Hans Jürgen MATTAUSCH
    Type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2012 Volume E95.D Issue 9 Pages 2327-2338
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a hardware solution to the high computational demands of a nearest neighbor (NN) based multi-prototype learning system. The multiple prototypes are obtained by a high-speed K-means clustering algorithm that uses software-hardware cooperation, taking advantage of the flexibility of software and the efficiency of hardware. The one-nearest-neighbor (1-NN) classifier recognizes an object by searching for the smallest Euclidean distance among the prototypes. The major deficiency of conventional implementations of both K-means and 1-NN is the high computational demand of nearest neighbor searching. This deficiency is resolved by an FPGA-implemented coprocessor, a VLSI circuit for searching for the nearest Euclidean distance. The coprocessor requires 12.9% of the logic elements and 58% of the block memory bits of an Altera Stratix III E110 FPGA device. The hardware communicates with the software through a PCI Express (×4) local-bus-compatible interface. We benchmark our learning system on the popular case of handwritten digit recognition, for which abundant previous work is available for comparison. On the MNIST database, we attain an accuracy rate of 97.91% with 930 prototypes, a learning speed of 1.3×10^-4 s/sample, and a classification speed of 3.94×10^-8 s/character.
    Download PDF (1053K)
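The inner loop the coprocessor accelerates — finding the smallest Euclidean distance to a set of prototypes — can be sketched in plain software as follows. The prototypes here are made-up stand-ins for K-means output, not MNIST data.

```python
# Software sketch of the 1-NN search that the FPGA coprocessor accelerates.
# Prototypes are (vector, label) pairs; made-up values for illustration.

def squared_distance(x, p):
    """Squared Euclidean distance (monotone in distance, so no sqrt needed)."""
    return sum((xi - pi) ** 2 for xi, pi in zip(x, p))

def one_nn(x, prototypes):
    """Return the label of the prototype nearest to x."""
    _, label = min((squared_distance(x, p), lab) for p, lab in prototypes)
    return label

prototypes = [
    ((0.1, 0.2), "0"),
    ((0.9, 0.8), "1"),
    ((0.5, 0.9), "7"),
]
print(one_nn((0.85, 0.75), prototypes))
```

The cost is one distance computation per prototype per query, which is exactly the O(#prototypes × dimension) work that motivates a parallel hardware search over all prototypes at once.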
  • Chen YUAN, Haibin KAN
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2012 Volume E95.D Issue 9 Pages 2339-2342
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    A superconcentrator is a directed acyclic graph with specific properties. The existence of linear-sized superconcentrators was proved in [4]. Since then, the size has been decreased significantly. The best known size is 28N, proved by U. Schöning in [8]. Our work follows their construction and proves a smaller-sized superconcentrator.
    Download PDF (97K)
  • GunWoo PARK, SungHoon SEO, SooJin LEE, SangHoon LEE
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 9 Pages 2343-2346
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Question and Answering (Q&A) sites have recently been gaining popularity on the Web. People using such sites form a community: anyone can ask, anyone can answer, and everyone can share, since all questions and answers are public and immediately searchable. This mechanism can reduce the time and effort needed to find the most relevant answer. Unfortunately, users suffer from an answer quality problem due to several reasons, including limited knowledge of the question domain, bad intentions (e.g., spam, making fun of others), and limited time to prepare good answers. In order to identify credible users who help people find relevant answers, this paper proposes a ranking algorithm, InfluenceRank, based on analyzing relationships in terms of users' activities and their mutual trust. Our experimental studies show that the proposed algorithm significantly outperforms the baseline algorithms.
    Download PDF (585K)
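The abstract does not give InfluenceRank's formula; as a baseline from the same family of link-analysis rankings, the following runs plain PageRank by power iteration over a hypothetical user interaction graph (an edge u → v when u's question was answered by v). The graph and names are invented.

```python
# Baseline sketch (not InfluenceRank): PageRank by power iteration over a
# made-up Q&A user graph. Edge u -> v means v answered a question of u's.

def pagerank(edges, nodes, d=0.85, iters=50):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or list(nodes)   # dangling node: spread evenly
            share = d * rank[u] / len(targets)
            for v in targets:
                nxt[v] += share
        rank = nxt
    return rank

nodes = ["alice", "bob", "carol"]
edges = [("alice", "bob"), ("carol", "bob"), ("bob", "carol")]
rank = pagerank(edges, nodes)
print(max(rank, key=rank.get))
```

A user answered by many others accumulates rank; InfluenceRank additionally weights such links by activity and mutual trust, per the abstract.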
  • Ruicong ZHI, Qiuqi RUAN, Zhifei WANG
    Type: LETTER
    Subject area: Pattern Recognition
    2012 Volume E95.D Issue 9 Pages 2347-2350
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    A facial-component-based facial expression recognition algorithm with a sparse representation classifier is proposed. The sparse representation classifier is computed by solving an L1-norm minimization problem on facial components. The features of “important” training samples are selected to represent the test sample. Furthermore, a fuzzy integral is utilized to fuse the individual classifiers of the facial components. Experiments on frontal views and partially occluded facial images show that this method is efficient and robust to partial occlusion of facial images.
    Download PDF (692K)
  • Doo Hwa HONG, June Sig SUNG, Kyung Hwan OH, Nam Soo KIM
    Type: LETTER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 9 Pages 2351-2354
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    Decision-tree-based clustering and parameter estimation are essential steps in the training part of an HMM-based speech synthesis system. These two steps are usually performed based on the maximum likelihood (ML) criterion. However, one drawback of the ML criterion is that it is sensitive to outliers, which usually cause quality degradation of the synthesized speech. In this letter, we propose an approach for detecting and removing outliers in HMM-based speech synthesis. Experimental results show that the proposed approach can improve the synthesized speech, particularly when the available training speech database is insufficient.
    Download PDF (242K)
  • Miki HASEYAMA, Daisuke IZUMI, Makoto TAKIZAWA
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2012 Volume E95.D Issue 9 Pages 2355-2358
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    A method for spatio-temporal resolution enhancement of video sequences based on super-resolution reconstruction is proposed. A new observation model is defined for accurate resolution enhancement, which enables subpixel motion in intermediate frames to be obtained. A modified optimization formula for obtaining a high-resolution sequence is also adopted.
    Download PDF (215K)
  • Bei HE, Guijin WANG, Chenbo SHI, Xuanwu YIN, Bo LIU, Xinggang LIN
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 9 Pages 2359-2362
    Published: September 01, 2012
    Released: September 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a self-clustering algorithm for detecting symmetry in images. We combine correlations of orientations, scales, and descriptors into a triple feature vector to evaluate each feature pair; low-confidence pairs are regarded as outliers and removed. All confident pairs are preserved to extract potential symmetries, since one feature point may be shared by different pairs. Each feature pair then forms one cluster, and clusters are merged and split iteratively based on continuity in Cartesian coordinates and concentration in polar coordinates. Pseudo-symmetric axes and outlier midpoints are eliminated during the process. Experiments demonstrate the robustness and accuracy of our algorithm both visually and quantitatively.
    Download PDF (1646K)