IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E97.D , Issue 5
Showing articles 1-48 of 48 from the selected issue
Special Section on Knowledge-Based Software Engineering
  • Saeko MATSUURA
    2014 Volume E97.D Issue 5 Pages 1016
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Download PDF (62K)
  • Dang Viet DZUNG, Bui Quang HUY, Atsushi OHNISHI
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1017-1027
    Published: May 01, 2014
    Released: May 01, 2014
    There has been much research on the construction and application of ontologies, notably the use of ontologies to support requirements engineering. The effectiveness of ontology-based requirements engineering depends on the quality of the ontology. As ontologies grow in size, it becomes difficult to verify the correctness of the information they store. This paper proposes a rule-based method for verifying the correctness of a requirements ontology. We provide a rule description language for specifying properties that a requirements ontology should satisfy. Then, by checking whether the rules are consistent with the requirements ontology, we verify its correctness. We have developed a verification tool to support the method and evaluated the tool through experiments.
  • Dang Viet DZUNG, Atsushi OHNISHI
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1028-1038
    Published: May 01, 2014
    Released: May 01, 2014
    This paper introduces an ontology-based method for checking requirements specifications. A requirements ontology is a knowledge structure that contains functional requirements (FR), attributes of FR, and relations among FR. A requirements specification is compared with the functional nodes in the requirements ontology, and rules are then used to find errors in the requirements. On the basis of the results, the requirements team can ask customers questions and revise the requirements correctly and efficiently. To support this method, an ontology-based checking tool for the verification of requirements has been developed. Finally, the requirements checking method is evaluated through an experiment.
  • Takako NAKATANI, Shozo HORI, Keiichi KATAMINE, Michio TSUDA, Toshihiko ...
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1039-1048
    Published: May 01, 2014
    Released: May 01, 2014
    The success of any project can be affected by requirements changes. Requirements elicitation is a series of activities of adding, deleting, and modifying requirements. We refer to the completion of requirements elicitation for a software component as requirements maturation. When the requirements of a component have reached the 100% maturation point, no further requirements will arrive for that component. This does not mean that a requirements analyst (RA) will reject additional requirements, but simply that additional requirements will not come to the project. Our motivation is to provide measurements by which an RA can estimate one of the maturation periods: the early, middle, or late period of the project. We proceed by introducing the requirements maturation efficiency (RME). The RME of the requirements represents how quickly the requirements of a component reach 100% maturation. We then estimate the requirements maturation period for every component by applying the RME. We assume that the RME is derived from the accessibility of the requirements source to the RA and the stability of the requirements. We model accessibility as the number of information flows from the source of the requirements to the RA, and model stability with the requirements maturation index (RMI). According to a multiple regression analysis of a case, we obtained an equation for the RME derived from these two factors at a significance level of 5%. We evaluated the result by comparing it with another case, and then discuss the effectiveness of the measurements.
  • Ywen HUANG, Zhua JIANG
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1049-1057
    Published: May 01, 2014
    Released: May 01, 2014
    In the process of production design, engineers often find it difficult to locate and reuse others' empirical knowledge, which takes the form of lessons-learned documents. This study proposes a novel approach that uses a semantic-based topic knowledge map system (STKMS) to support timely and precise finding and reuse of lessons-learned documents. The architecture of STKMS is designed around five major functional modules: lessons-learned document pre-processing, topic extraction, topic relation computation, topic weight computation, and topic knowledge map generation. The implementation of STKMS is then briefly introduced. We conducted two sets of experiments to evaluate the quality of the knowledge maps and the performance of using STKMS in the outfitting design of a ship-building company. The first experiment shows that the knowledge maps generated by STKMS were accepted by domain experts, since precision and recall were high. The second experiment shows that the STKMS-based group outperformed the browse-based group in both learning score and satisfaction level, two measures of the performance of using STKMS. These promising results confirm the feasibility of STKMS in helping engineers find the lessons-learned documents they need and reuse the related knowledge easily and precisely.
  • Xiao XIAO, Tadashi DOHI
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1058-1068
    Published: May 01, 2014
    Released: May 01, 2014
    Recently, the wavelet-based estimation method has gradually become popular as a new tool for software reliability assessment. The wavelet transform possesses both spatial and temporal resolution, which makes the wavelet-based estimation method powerful in extracting the necessary information from observed software fault data, from global and local points of view at the same time. This enables us to estimate software reliability measures with higher accuracy. However, existing work has focused only on point estimation in the wavelet-based approach, where the underlying stochastic process describing the software-fault detection phenomena is modeled by a non-homogeneous Poisson process. In this paper, we propose an interval estimation method for the wavelet-based approach, aiming to take account of the uncertainty that point estimation leaves out of consideration. More specifically, we employ the simulation-based bootstrap method and derive confidence intervals for software reliability measures such as the software intensity function and the expected cumulative number of software faults. To this end, we extend the well-known thinning algorithm so as to generate multiple sample data sets from one set of software-fault count data. The results of numerical analysis with real software fault data make it clear that our proposal is a decision support method that enables practitioners to make flexible decisions in software development project management.
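    For readers unfamiliar with the thinning algorithm extended in this paper, the basic Lewis-Shedler procedure for sampling a non-homogeneous Poisson process can be sketched as follows. This is a minimal generic illustration; the decaying intensity function used here is hypothetical and is not the wavelet-based estimate from the paper.

```python
import math
import random

def thinning(intensity, t_max, lam_max, seed=42):
    """Sample event times on (0, t_max] from a non-homogeneous Poisson
    process whose intensity never exceeds lam_max (Lewis-Shedler thinning)."""
    rng = random.Random(seed)
    times = []
    t = 0.0
    while True:
        # Propose the next event of a homogeneous process with rate lam_max.
        t += rng.expovariate(lam_max)
        if t > t_max:
            return times
        # Accept the proposal with probability intensity(t) / lam_max.
        if rng.random() <= intensity(t) / lam_max:
            times.append(t)

# Hypothetical fault-detection intensity that decays over test time.
sample = thinning(lambda t: 5.0 * math.exp(-0.1 * t), t_max=50.0, lam_max=5.0)
print(len(sample), "simulated fault-detection times")
```

    Repeating this sampling with different seeds yields the multiple bootstrap samples from which confidence intervals can be formed.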
  • Rizky Januar AKBAR, Takayuki OMORI, Katsuhisa MARUYAMA
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1069-1083
    Published: May 01, 2014
    Released: May 01, 2014
    Developers often face difficulties while using APIs. API usage patterns, which are extracted from source code stored in software repositories, can aid developers in using APIs efficiently. Previous approaches have mined repositories to extract API usage patterns by simply applying data mining techniques to collections of method invocations on API objects. In these approaches, the respective functional roles of the invoked methods within API objects are ignored. The functional role represents what type of purpose each method actually achieves, and a method has a specific predefined order of invocation in accordance with its role. Therefore, the simple application of conventional mining techniques fails to produce API usage patterns that are helpful for code completion. This paper proposes an improved approach that extracts API usage patterns at a higher abstraction level rather than directly mining the actual method invocations. It embraces a multilevel sequential mining technique and uses a categorization of method invocations based on their functional roles. We have implemented a mining tool and extended Eclipse's code completion facility with the extracted API usage patterns. Evaluation results show that our approach improves existing code completion.
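    As a rough intuition for the kind of pattern mining discussed above, frequent contiguous subsequences of API calls can be counted from invocation traces. This toy sketch is a stand-in for the paper's multilevel sequential mining, not a reproduction of it; the API names are hypothetical.

```python
from collections import Counter

def frequent_patterns(sequences, n=2, min_support=2):
    """Count length-n contiguous subsequences (n-grams) of API calls and
    keep those that occur in at least min_support places."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return {pattern: c for pattern, c in counts.items() if c >= min_support}

traces = [
    ["File.open", "File.read", "File.close"],
    ["File.open", "File.read", "File.read", "File.close"],
    ["File.open", "File.close"],
]
print(frequent_patterns(traces))
# {('File.open', 'File.read'): 2, ('File.read', 'File.close'): 2}
```

    The paper's contribution is precisely that such raw invocation mining is too coarse: grouping invocations by functional role before mining yields patterns better suited to code completion.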
  • Junko SHIROGANE, Seitaro SHIRAI, Hajime IWATA, Yoshiaki FUKAZAWA
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1084-1096
    Published: May 01, 2014
    Released: May 01, 2014
    To realize usability in software, GUI (Graphical User Interface) layouts must be consistent, because consistency allows end users to operate software based on their previous experience. Consistency can often be achieved through user interface guidelines, which realize consistency within a software package as well as between various software packages on a platform. However, because end users have different experiences and perceptions, GUIs based on guidelines are not always usable for them. Thus, it is necessary to realize consistency without guidelines. Herein we propose a method to realize consistent GUIs in which existing software packages are surveyed and common patterns for window layouts, which we call layout rules, are specified. Our method uses these layout rules to arrange the windows of GUIs. Concretely, the source programs of developed GUIs are analyzed to identify the layout rules, and these rules are then used to extract parameters for generating the source programs of undeveloped GUIs. To evaluate our method, we applied it to existing GUIs in software packages, extracting the layout rules from several windows and generating other windows. The evaluation confirms that our method easily realizes layout consistency.
  • Yoshitaka AOKI, Saeko MATSUURA
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1097-1108
    Published: May 01, 2014
    Released: May 01, 2014
    Software programs often include many defects that are not easy to detect because of developers' mistakes, misunderstandings caused by inadequate requirements definitions, and the complexity of the implementation. Because of the differing skill levels of testers, the significant increase in testing person-hours interferes with the progress of development projects. It is therefore desirable that even inexperienced developers be able to identify the causes of defects. Model checking has been favored as a technique for improving reliability earlier in the software development process. In this paper, we propose a verification method in which the control sequence of Java source code is converted into finite automata in order to detect the causes of defects using the model-checking tool UPPAAL, which has an exhaustive checking mechanism. We also propose a tool, implemented as an Eclipse plug-in, to assist general developers who have little knowledge of the model-checking tool. Because source code is generally complicated and large, the tool provides a step-wise verification mechanism based on the functional structure of the code and makes it easy to verify the business rules in the specification documents by adding a user-defined specification-based model to the source code model.
  • Rogene LACANIENTA, Shingo TAKADA, Haruto TANNO, Morihide OINUMA
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1109-1118
    Published: May 01, 2014
    Released: May 01, 2014
    For the past couple of decades, the use of the Web as a platform for deploying software products has become immensely popular. Web applications have become more prevalent, as well as more complex. Countless Web applications have already been designed, developed, tested, and deployed on the Internet. It is noticeable, however, that many common functionalities are present among this vast number of applications. This paper proposes an approach based on a database containing information from previous test artifacts. The information is used to generate test scenarios for Web applications under test. We have developed a tool based on our proposed approach, with the aim of reducing the effort required from software test engineers and professionals during the test planning and creation stages of software engineering. We evaluated our approach from three viewpoints: a comparison between our approach and manual generation, a qualitative evaluation by professional software engineers, and a comparison between our approach and two open-source tools.
  • Donghui LIN, Toru ISHIDA
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1119-1126
    Published: May 01, 2014
    Released: May 01, 2014
    Collaborative business has been developing rapidly in an environment of globalization and advanced information technologies. In a collaboration environment with multiple organizations, participants from different organizations often have different views about modeling the overall business process, owing to different knowledge and cultural backgrounds. Moreover, flexible support, privacy preservation, and process reuse are important issues that should be considered in business process management across organizational boundaries. This paper presents a novel approach to modeling interorganizational business processes for collaboration. Our approach allows for modeling loosely coupled interorganizational business processes while considering the different views of the organizations. In the proposed model, organizations have their own local process views for modeling the business process instead of sharing pre-defined global processes. During process cooperation, the local process of an organization can remain invisible to other organizations. Further, we propose coordination mechanisms for different local process views to detect incompatibilities among organizations. We illustrate the proposed approach with a case study of interorganizational software development collaboration.
  • Masanobu UMEDA, Keiichi KATAMINE, Keiichi ISHIBASHI, Masaaki HASHIMOTO ...
    Type: PAPER
    2014 Volume E97.D Issue 5 Pages 1127-1138
    Published: May 01, 2014
    Released: May 01, 2014
    Software engineering education at universities plays an increasingly important role as software quality becomes essential to realizing a safe and dependable society. This paper proposes a practical state transition model (Practical-STM) based on the Organizational Expectancy Model for improving software process education based on the Personal Software Process (PSP) from a motivational point of view. The Practical-STM treats an individual trainee in a PSP course as a state machine, and formalizes the trainee's motivation process using a set of states represented by motivational factors and a set of operations carried out by course instructors. The state transition function of this model represents the features or characteristics of a trainee in terms of motivation. The model allows a formal description of the states of a trainee in terms of motivation and of the educational actions of the instructors in the PSP course. By formally estimating a trainee's state and state transition function, instructors can objectively decide on effective and efficient actions to take toward the trainee. Typical patterns of state transitions from an initial state to a final state, called scenarios, are useful for inferring the possible transitions of a trainee and taking proactive operations from a motivational point of view. The model is therefore useful not only for improving the educational effect of the PSP course, but also for standardizing course management and managing the quality of instruction.
Special Section on Formal Approach
  • Yoshinao ISOBE
    2014 Volume E97.D Issue 5 Pages 1139
    Published: May 01, 2014
    Released: May 01, 2014
  • Katsuyuki KIMURA, Shigemasa TAKAI
    Type: PAPER
    Subject area: Formal Verification
    2014 Volume E97.D Issue 5 Pages 1140-1148
    Published: May 01, 2014
    Released: May 01, 2014
    In this paper, we study a supervisory control problem for plants and specifications modeled by nondeterministic automata. This problem requires synthesizing a nondeterministic supervisor such that the supervised plant is bisimilar to a given specification. We assume that a supervisor can observe not only event occurrences but also the current state of the plant, and we introduce a notion of completeness of a supervisor, which guarantees that all nondeterministic transitions caused by events enabled by the supervisor are defined in the supervised plant. We define a notion of partial bisimulation between a given specification and the plant, and prove that it serves as a necessary and sufficient condition for the existence of a bisimilarity-enforcing complete supervisor.
  • Pablo LAMILLA ALVAREZ, Yoshiaki TAKATA
    Type: PAPER
    Subject area: Formal Verification
    2014 Volume E97.D Issue 5 Pages 1149-1159
    Published: May 01, 2014
    Released: May 01, 2014
    Information-Based Access Control (IBAC) has been proposed as an improvement on the History-Based Access Control (HBAC) model. In modern component-based systems, these access control models verify that all the code responsible for a security-sensitive operation is sufficiently authorized to execute that operation. The HBAC model, although safe, may incorrectly prevent the execution of operations that should be executed. IBAC has been shown to be more precise than HBAC, maintaining the same safety level while allowing sufficiently authorized operations to be executed. However, the verification problem for IBAC programs has not been discussed. This paper presents a formal model for IBAC programs based on extended weighted pushdown systems (EWPDS). The mapping between the original IBAC semantics and the EWPDS structure is described. Moreover, the verification problem for IBAC programs is discussed, and several typical IBAC program examples are implemented using our model.
  • Iakovos OURANOS, Kazuhiro OGATA, Petros STEFANEAS
    Type: PAPER
    Subject area: Formal Verification
    2014 Volume E97.D Issue 5 Pages 1160-1170
    Published: May 01, 2014
    Released: May 01, 2014
    In this paper we report on experiences gained and lessons learned from using the Timed OTS/CafeOBJ method in the formal verification of the TESLA source authentication protocol. These experiences can serve as a useful guide for users of OTS/CafeOBJ, especially when dealing with similarly complex systems and protocols.
  • Toshiyuki MIYAMOTO, Yasuwo HASEGAWA, Hiroyuki OIMURA
    Type: PAPER
    Subject area: Formal Construction
    2014 Volume E97.D Issue 5 Pages 1171-1180
    Published: May 01, 2014
    Released: May 01, 2014
    A service-oriented architecture builds the entire system using a combination of independent software components. Such an architecture can be applied to a wide variety of computer systems. The problem of synthesizing service implementation models from choreography representing the overall specifications of service interaction is known as the choreography realization problem. In automatic synthesis, software models should be simple enough to be easily understood by software engineers. In this paper, we discuss a semi-formal method for synthesizing hierarchical state machine models for the choreography realization problem. The proposed method is evaluated using metrics for intelligibility.
  • Shingo YAMAGUCHI, Huan WU
    Type: PAPER
    Subject area: Formal Construction
    2014 Volume E97.D Issue 5 Pages 1181-1187
    Published: May 01, 2014
    Released: May 01, 2014
    A workflow may be extended to adapt to market growth, legal reform, and so on. The extended workflow must be logically correct and must inherit the behavior of the existing workflow. Even if the extended workflow inherits the behavior, it may not be logically correct. Can we modify it so that it satisfies not only behavioral inheritance but also logical correctness? This is called the behavioral-inheritance-preserving soundizability problem. There are two kinds of behavioral inheritance: protocol inheritance and projection inheritance. In this paper, we tackle the protocol-inheritance-preserving soundizability problem using a subclass of Petri nets called workflow nets. Limiting our analysis to acyclic free-choice workflow nets, we formalize the problem and give a necessary and sufficient condition for it, expressed in terms of the existence of a key structure of free-choice workflow nets called a TP-handle. Based on this condition, we also construct a polynomial-time procedure to solve the problem.
Regular Section
  • Agus BEJO, Dongju LI, Tsuyoshi ISSHIKI, Hiroaki KUNIEDA
    Type: PAPER
    Subject area: Computer System
    2014 Volume E97.D Issue 5 Pages 1188-1195
    Published: May 01, 2014
    Released: May 01, 2014
    This paper first presents a processor design based on a derivative ASIP approach, in which the processor architecture uses the instruction set of a well-known embedded processor as its base architecture. To improve performance, the architecture is enhanced with additional hardware resources, such as registers, interfaces, and instruction extensions, so as to meet target specifications. Second, a new approach to compiler retargeting by means of an assembly converter tool is proposed. Our retargeting approach is practical because it is performed by the assembly converter tool with a simple configuration file and is independent of the base compiler. With the proposed approach, both architectural flexibility and high-quality assembly code can be obtained at once. Experiments show that our approach is capable of generating code as efficient as that of the base compiler, and that the developed ASIP delivers better performance than its base processor.
  • Shinobu MIWA, Takara INOUE, Hiroshi NAKAMURA
    Type: PAPER
    Subject area: Computer System
    2014 Volume E97.D Issue 5 Pages 1196-1210
    Published: May 01, 2014
    Released: May 01, 2014
    Turbo mode, which accelerates many applications without major changes to existing systems, is widely used in commercial processors. Since the duration and strength of turbo mode depend on the peak temperature of the processor chip, reducing the peak temperature can reinforce turbo mode. This paper shows that adding a small amount of hardware allows microprocessors to reduce the peak temperature drastically and thus to reinforce turbo mode successfully. Our approach is to identify the few small units that become heat sources in a processor and to duplicate them appropriately to reduce their power density. By duplicating only these units and using the copies evenly, the processor can achieve significant performance improvement while remaining area-efficient. Experimental results show that the proposed method achieves up to 14.5% performance improvement in exchange for a 2.8% area increase.
  • Ting CHEN, Kenjiro TAURA
    Type: PAPER
    Subject area: Computer System
    2014 Volume E97.D Issue 5 Pages 1211-1224
    Published: May 01, 2014
    Released: May 01, 2014
    To better support data-intensive workflows, which are typically built out of various independently developed executables, this paper proposes extensions to parallel database systems called user-defined executables (UDX) and collective queries. UDX facilitates the description of workflows by enabling the seamless integration of external executables into SQL statements, without any effort to write programs conforming to strict database specifications. A collective query is an SQL query whose results are distributed to multiple clients and then processed by them in parallel using arbitrary UDX. It provides efficient parallelization of executables through data transfer optimization algorithms that distribute query results to multiple clients, taking both communication cost and computational load into account. We implement this concept in a system called ParaLite, a parallel database system based on the popular lightweight database SQLite. Our experiments show that ParaLite achieves several times higher performance than Hive for typical SQL tasks and a 10x speedup over a commercial DBMS for executables. In addition, this paper studies a real-world text processing workflow and builds it on top of ParaLite, Hadoop, Hive, and plain files. Our experience indicates that ParaLite outperforms the other systems in both productivity and performance for this workflow.
  • Takayuki AKAMINE, Mohamad Sofian ABU TALIP, Yasunori OSANA, Naoyuki FU ...
    Type: PAPER
    Subject area: Computer System
    2014 Volume E97.D Issue 5 Pages 1225-1234
    Published: May 01, 2014
    Released: May 01, 2014
    Computational fluid dynamics (CFD) is an important tool for designing aircraft components. FaSTAR (Fast Aerodynamics Routines) is one of the most recent CFD packages and has various subroutines. However, its irregular and complicated data structure makes it difficult to execute FaSTAR on parallel machines because of memory access problems. Using a reconfigurable platform based on field-programmable gate arrays (FPGAs) is a promising approach to accelerating memory-bottlenecked applications like FaSTAR. However, even with hardware execution, a large number of pipeline stalls can occur due to read-after-write (RAW) data hazards. Moreover, it is difficult to predict when such stalls will occur because of the unstructured mesh used in FaSTAR. To eliminate this problem, we developed an out-of-order mechanism that permutes the data order so as to prevent RAW hazards, using an execution monitor and a wait buffer. The former identifies the state of the computation units, and the latter temporarily stores data to be processed by the computation units. This out-of-order mechanism can be applied to various types of computation with data dependencies by changing the number of execution monitors and wait buffers in accordance with the equations used in the target computation; the out-of-order system can be reconfigured by automatically changing these parameters. Applying the proposed mechanism to five subroutines in FaSTAR reduced the number of stalls to less than 1% of those without the mechanism, with a reasonable amount of overhead: execution was 2.6 times faster than in-order hardware execution and 2.9 times faster than software execution on an Intel Core 2 Duo processor.
  • Dongjin YU, Xiang SU, Yunlei MU
    Type: PAPER
    Subject area: Software System
    2014 Volume E97.D Issue 5 Pages 1235-1243
    Published: May 01, 2014
    Released: May 01, 2014
    Aspect-oriented software development (AOSD) helps to solve the low scalability and high maintenance costs of legacy systems caused by code scattering and tangling, by extracting cross-cutting concerns and encapsulating them in aspects. Identifying the cross-cutting concerns of a legacy system is the key to reconstructing it with the AOSD approach. However, current dynamic approaches to identifying cross-cutting concerns simply check the execution sequence of methods and do not consider their calling context, which may result in low precision. In this paper, we propose an improved, comprehensive approach to identifying candidate cross-cutting concerns in legacy systems based on a combination of recurring-execution-relation analysis and fan-in analysis. We first analyse the execution trace of a given test case and identify four types of execution relations between neighbouring methods: exit-entry, entry-exit, entry-entry, and exit-exit. We then measure each method's left and right cross-cutting degrees. The former ensures that a candidate recurs in a similar running context, whereas the latter indicates how many times the candidate cross-cuts different methods. The final candidates are then obtained from the high fan-in methods that not only cross-cut other methods more times than a predefined threshold but are also always entered or left in the same running context. An experiment conducted on three open-source systems shows that our approach improves the precision of identifying cross-cutting concerns compared with traditional ones.
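    The fan-in side of the analysis above can be illustrated with a minimal sketch: counting how many distinct callers each method has in an execution trace. This is only a generic proxy for the metric, under the assumption that the trace is available as (caller, callee) pairs; the method names are hypothetical.

```python
from collections import defaultdict

def fan_in(call_pairs):
    """Count distinct callers per callee from (caller, callee) pairs of an
    execution trace -- a rough proxy for the fan-in metric."""
    callers = defaultdict(set)
    for caller, callee in call_pairs:
        callers[callee].add(caller)
    return {method: len(cs) for method, cs in callers.items()}

trace = [("openAccount", "log"), ("closeAccount", "log"),
         ("transfer", "log"), ("transfer", "checkBalance")]
print(fan_in(trace))  # {'log': 3, 'checkBalance': 1}
```

    A high fan-in method such as `log` here is a typical cross-cutting candidate; the paper's contribution is to filter such candidates further by their recurring entry/exit context.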
  • Eunjong CHOI, Norihiro YOSHIDA, Katsuro INOUE
    Type: PAPER
    Subject area: Software Engineering
    2014 Volume E97.D Issue 5 Pages 1244-1253
    Published: May 01, 2014
    Released: May 01, 2014
    Although code clones (i.e., code fragments that are similar or identical to other fragments in the source code) are regarded as a factor that increases the complexity of software maintenance, tools supporting clone refactoring (i.e., merging a set of code clones into a single method or function) are not commonly used. To promote the development of refactoring tools that can be more widely utilized, we present an investigation of clone refactoring carried out during the development of open-source software systems. In the investigation, we identified the most frequently used refactoring patterns and discovered how the token sequences of merged code clones and the differences in their token-sequence lengths vary for each refactoring pattern.
  • Eunjin KOH, Chanyoung LEE, Dong Gil JEONG
    Type: PAPER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 5 Pages 1254-1263
    Published: May 01, 2014
    Released: May 01, 2014
    We propose a novel motion segmentation method based on a Clausius Normalized Field (CNF), a probabilistic model for treating time-varying imagery that estimates entropy variations following the entropy definitions of Clausius and Boltzmann. As the pixels of an image are viewed as the states of lattice-like molecules in a thermodynamic system, estimating the entropy variations of pixels amounts to estimating their degrees of disorder. A greater increase in entropy means that a pixel has a higher chance of belonging to a moving object rather than to the background, because of its higher disorder. In addition to these homologous operations, a CNF naturally takes both spatial and temporal information into consideration to avoid local maxima, which substantially improves the accuracy of motion segmentation. Our motion segmentation system using CNF clearly separates moving objects from their backgrounds. It also eliminates noise to the level achieved when refined post-processing steps are applied to the results of general motion segmentation methods. It requires less computational power than other random fields and generates automatically normalized outputs without additional post-processing.
  • Masahiro FUKUI, Shigeaki SASAKI, Yusuke HIWASAKI, Kimitaka TSUTSUMI, S ...
    Type: PAPER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 5 Pages 1264-1272
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    We propose a new adaptive spectral masking method of algebraic vector quantization (AVQ) for non-sparse signals in the modified discrete cosine transform (MDCT) domain. This paper also proposes switching the adaptive spectral masking on and off depending on whether or not the target signal is non-sparse. The switching decision is based on the results of an MDCT-domain sparseness analysis. When the target signal is categorized as non-sparse, the masking level of the target MDCT coefficients is adaptively controlled using spectral envelope information. The performance of the proposed method, as part of ITU-T G.711.1 Annex D, is evaluated in comparison with conventional AVQ. Subjective listening test results show that the proposed method improves sound quality by more than 0.1 points on a five-point scale on average for speech, music, and mixed content, which indicates a significant improvement.
    Download PDF (939K)
  • Mumtaz Begum MUSTAFA, Zuraidah Mohd DON, Raja Noor AINON, Roziati ZAIN ...
    Type: PAPER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 5 Pages 1273-1282
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    The development of an HMM-based speech synthesis system for a new language requires resources such as a speech database and segment-phonetic labels. As an under-resourced language, Malay lacks the necessary resources for the development of such a system, especially segment-phonetic labels. This research aims at developing an HMM-based speech synthesis system for Malay. We propose the use of two types of HMM training: the benchmark iterative training incorporating the DAEM algorithm, and isolated-unit training applying segment-phonetic labels of Malay. The preferred method for preparing segment-phonetic labels is automatic segmentation. The automatic segmentation of the Malay speech database is performed using two approaches: uniform segmentation, which applies a fixed phone duration, and a cross-lingual approach, which adopts the acoustic model of English. We measured the segmentation error of the two segmentation approaches to ascertain their relative effectiveness. A listening test was used to evaluate the intelligibility and naturalness of the synthetic speech produced by the iterative and isolated-unit training. We also compare the performance of our HMM-based speech synthesis system with that of existing Malay speech synthesis systems.
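    The uniform segmentation baseline mentioned above can be illustrated with a toy sketch (an assumption about the exact scheme, not the paper's implementation): every phone in the transcript receives an equal share of the utterance duration.

```python
# Toy sketch of uniform segmentation with a fixed phone duration: each phone
# in the transcript is assigned an equal slice of the utterance, regardless
# of its actual acoustic length. The word "saya" is just an example input.
def uniform_segment(phones, total_ms):
    dur = total_ms / len(phones)
    # Return (phone, start_ms, end_ms) triples covering the whole utterance.
    return [(p, round(i * dur), round((i + 1) * dur)) for i, p in enumerate(phones)]

print(uniform_segment(["s", "a", "y", "a"], 400))
# [('s', 0, 100), ('a', 100, 200), ('y', 200, 300), ('a', 300, 400)]
```

Fixed-duration labels like these are cheap to produce but systematically misplace boundaries, which is why the abstract measures segmentation error against a cross-lingual approach.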
    Download PDF (854K)
  • Akihiro NAGASE, Nami NAKANO, Masako ASAMURA, Jun SOMEYA, Gosuke OHASHI
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 5 Pages 1283-1292
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    The authors have evaluated SGRAD, a method of expanding the bit depth of image signals that requires fewer calculations and degrades image sharpness less. When noise is superimposed on image signals, the conventional method for obtaining high bit depth sometimes detects the contours of images incorrectly and therefore cannot sufficiently correct the gradation. The conventional method also requires many line memories when the process is applied to vertical gradation. As a solution to this particular issue, SGRAD improves the detection of contours with transiting gradation so as to effectively correct the gradation of image signals on which noise is superimposed. In addition, the use of a prediction algorithm for detecting gradation reduces the scale of the circuit, at the cost of less correction of the vertical gradation.
    Download PDF (1176K)
  • Zhouxin YANG, Takio KURITA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 5 Pages 1293-1303
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Numerous studies have focused on improving the bag of features (BOF), histogram of oriented gradients (HOG), and scale-invariant feature transform (SIFT). However, few works have attempted to learn the connection between them, even though the latter two are widely used as local feature descriptors for the former. Motivated by the resemblance between BOF and HOG/SIFT in descriptor construction, we improve the performance of HOG/SIFT by a) interpreting HOG/SIFT as a variant of BOF in descriptor construction, and then b) introducing recently proposed BOF techniques such as locality preservation, data-driven vocabularies, and spatial information preservation into the descriptor construction of HOG/SIFT, which yields BOF-driven HOG/SIFT. Experimental results show that the BOF-driven HOG/SIFT outperforms the originals in pedestrian detection (for HOG) and in scene matching and image classification (for SIFT). The proposed BOF-driven HOG/SIFT can easily replace the original HOG/SIFT in current systems, since it generalizes the originals.
    Download PDF (2794K)
  • Seung-Jin RYU, Hae-Yeoun LEE, Heung-Kyu LEE
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 5 Pages 1304-1311
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Seam carving, which preserves semantically important image content during the resizing process, has been actively researched in recent years. This paper proposes a novel forensic technique to detect the traces of seam carving. We exploit the energy bias and noise level of the images under analysis to reliably unveil evidence of seam carving. Furthermore, we design a detector that investigates the relationship among neighboring pixels to estimate the inserted seams. Experimental results on a large set of test images indicate the superior performance of the proposed methods for both seam carving and seam insertion.
    Download PDF (2435K)
  • Chao-Hong CHEN, Ying-ping CHEN
    Type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2014 Volume E97.D Issue 5 Pages 1312-1323
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Estimation of distribution algorithms (EDAs) have, since their introduction, been successfully used to solve discrete optimization problems and have thus proven to be an effective methodology for discrete optimization. To enhance the applicability of EDAs, researchers have integrated EDAs with discretization methods so that EDAs designed for discrete variables can also solve continuous optimization problems. To further our understanding of the collaboration between EDAs and discretization methods, in this paper we propose a quality measure of discretization methods for EDAs. We then use the proposed quality measure to analyze three discretization methods: fixed-width histogram (FWH), fixed-height histogram (FHH), and greedy random split (GRS). Analytical measurements are obtained for FHH and FWH, and sampling measurements are conducted for FHH, FWH, and GRS. Furthermore, we integrate the Bayesian optimization algorithm (BOA), a representative EDA, with the three discretization methods to conduct experiments and observe the performance differences. Good agreement is reached between the discretization quality measurements and the numerical optimization results. The empirical results show that the proposed quality measure can be considered an indicator of how suitable a discretization method is for working with EDAs.
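    Two of the discretization schemes analyzed above are standard enough to sketch (this illustrates FWH and FHH only, not the proposed quality measure): a fixed-width histogram splits the value range into equal-width bins, while a fixed-height histogram puts an approximately equal number of samples into each bin.

```python
# Fixed-width histogram (FWH): equal-width bins over [min, max].
def fwh_bins(values, k):
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [min(int((v - lo) / width), k - 1) for v in values]

# Fixed-height histogram (FHH): equal-frequency bins by sample rank.
def fhh_bins(values, k):
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    per_bin = len(values) / k
    for rank, i in enumerate(order):
        bins[i] = min(int(rank / per_bin), k - 1)
    return bins

vals = [0.0, 0.1, 0.2, 0.3, 9.0, 10.0]
print(fwh_bins(vals, 2))  # skewed data crowds into the first width-based bin
print(fhh_bins(vals, 2))  # rank-based bins hold three samples each
```

The contrast on skewed data hints at why the two schemes can give an EDA very different model resolution for the same sample budget.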
    Download PDF (1117K)
  • Rong XU, Jun OHYA, Yoshinobu SATO, Bo ZHANG, Masakatsu G. FUJIE
    Type: PAPER
    Subject area: Biological Engineering
    2014 Volume E97.D Issue 5 Pages 1324-1335
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Toward the actualization of an automatic navigation system for fetoscopic tracheal occlusion (FETO) surgery, this paper proposes a 3D ultrasound (US) calibration-based approach that can locate the fetal facial surface, oral cavity, and airways by a registration between a 3D fetal model and 3D US images. The proposed approach consists of an offline process and online process. The offline process first reconstructs the 3D fetal model with the anatomies of the oral cavity and airways. Then, a point-based 3D US calibration system based on real-time 3D US images, an electromagnetic (EM) tracking device, and a novel cones' phantom, computes the matrix that transforms the 3D US image space into the world coordinate system. In the online process, by scanning the mother's body with a 3D US probe, 3D US images containing the fetus are obtained. The fetal facial surface extracted from the 3D US images is registered to the 3D fetal model using an ICP-based (iterative closest point) algorithm and the calibration matrices, so that the fetal facial surface as well as the oral cavity and airways are located. The results indicate that the 3D US calibration system achieves an FRE (fiducial registration error) of 1.49±0.44 mm and a TRE (target registration error) of 1.81±0.56 mm by using 24 fiducial points from two US volumes. A mean TRE of 1.55±0.46 mm is also achieved for measuring location accuracy of the 3D fetal facial surface extracted from 3D US images by 14 target markers, and mean location errors of 2.51±0.47 mm and 3.04±0.59 mm are achieved for indirectly measuring location accuracy of the pharynx and the entrance of the trachea, respectively, which satisfy the requirement of the FETO surgery.
    Download PDF (2750K)
  • Jang-Kyun AHN, Hyun-Woo JANG, Hyoung-Kyu SONG
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 5 Pages 1336-1339
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Although QR decomposition M algorithm (QRD-M) detection reduces complexity and achieves near-optimal detection performance, its complexity is still very high. Because the performance of the system depends on the detection capability of the first layer, the proposed scheme arranges the received symbols that passed through poor channel conditions in reverse order. Simulation results show that the proposed scheme provides almost the same performance as QRD-M, while its complexity is about 33.6% of that of QRD-M at a given bit error rate (BER) in a 4×4 multiple-input multiple-output (MIMO) system.
    Download PDF (359K)
  • Yoshikazu WASHIZAWA, Tatsuya YOKOTA, Yukihiko YAMASHITA
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 5 Pages 1340-1344
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Most recent classification methods require tuning of hyper-parameters such as the kernel function parameter and the regularization parameter. Cross-validation or the leave-one-out method is often used for this tuning; however, their computational costs are much higher than that of obtaining a classifier. Quadratically constrained maximum a posteriori (QCMAP) classifiers, which are based on the Bayes classification rule, have no regularization parameter and exhibit higher classification accuracy than the support vector machine (SVM). In this paper, we propose multiple kernel learning (MKL) for QCMAP to tune the kernel parameter automatically and improve classification performance. With MKL, QCMAP has no parameter to be tuned. Experiments show that the proposed classifier achieves classification performance comparable to or higher than that of conventional MKL classifiers.
    Download PDF (189K)
  • Woong-Kee LOH, Kyoung-Soo HAN
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2014 Volume E97.D Issue 5 Pages 1345-1348
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    A suffix tree is widely adopted for indexing genome sequences. While supporting highly efficient search, the suffix tree has a few shortcomings such as very large size and very long construction time. In this paper, we propose a very fast parallel algorithm to construct a disk-based suffix tree for human genome sequences. Our algorithm constructs a suffix array for part of the suffixes in the human genome sequence and then converts it into a suffix tree very quickly. It outperformed the previous algorithms by Loh et al. and Barsky et al. by up to 2.09 and 3.04 times, respectively.
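    The data structures involved can be sketched in miniature (a toy illustration of suffix arrays and LCP values, not the letter's parallel, disk-based construction): adjacent longest-common-prefix values over the sorted suffixes are exactly the information needed to merge suffixes into shared suffix-tree branches.

```python
# Naive O(n^2 log n) suffix-array construction by sorting suffix start
# positions; genome-scale tools use linear-time or disk-based methods.
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

# LCP of each adjacent pair of sorted suffixes: the shared-prefix lengths
# that determine where suffix-tree branches split.
def lcp_array(s, sa):
    def lcp(a, b):
        n = 0
        while a + n < len(s) and b + n < len(s) and s[a + n] == s[b + n]:
            n += 1
        return n
    return [0] + [lcp(sa[i - 1], sa[i]) for i in range(1, len(sa))]

sa = suffix_array("GATTACA")
print(sa)                         # suffix start indices in sorted order
print(lcp_array("GATTACA", sa))
```

Converting a suffix array plus LCP array into a suffix tree is a linear scan, which is why building the array first and converting it afterwards can be fast.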
    Download PDF (298K)
  • Woong-Kee LOH, Yang-Sae MOON, Young-Ho PARK
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 5 Pages 1349-1352
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    Owing to recent technical advances, GPUs are used for general applications as well as for screen display. Many studies have improved the performance of previous CPU-based algorithms by a few hundred times using GPUs. In this paper, we propose a density-based clustering algorithm called GSCAN, which reduces the number of unnecessary distance computations using a grid structure. In our experiments, GSCAN outperformed CUDA-DClust [2] and DBSCAN [3] by up to 13.9 and 32.6 times, respectively.
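    The grid-pruning idea can be sketched as follows (an illustrative CPU sketch of the principle, not GSCAN itself): bucketing 2-D points into cells of side eps means a radius-eps neighbor query only has to test points in the 3×3 block of surrounding cells instead of the whole data set.

```python
from collections import defaultdict
from math import floor, hypot

# Bucket each point into the grid cell of side `eps` that contains it.
def build_grid(points, eps):
    grid = defaultdict(list)
    for p in points:
        grid[(floor(p[0] / eps), floor(p[1] / eps))].append(p)
    return grid

# Any point within distance eps of p must lie in p's cell or one of the
# 8 adjacent cells, so only those buckets are scanned.
def neighbors(grid, p, eps):
    cx, cy = floor(p[0] / eps), floor(p[1] / eps)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for q in grid.get((cx + dx, cy + dy), []):
                if q is not p and hypot(p[0] - q[0], p[1] - q[1]) <= eps:
                    out.append(q)
    return out

pts = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]
g = build_grid(pts, eps=1.0)
print(neighbors(g, pts[0], 1.0))  # only the nearby point is returned
```

On a GPU the same cell structure additionally gives threads coalesced, bounded work lists, which is where the reported speedups over brute-force distance computation come from.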
    Download PDF (1099K)
  • Pilsung KANG
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 5 Pages 1353-1357
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    In this paper, a one-class Naïve Bayesian classifier (One-NB) for detecting toll frauds in a VoIP service is proposed. Since toll frauds occur irregularly and their patterns are too diverse to be generalized as one class, conventional binary-class classification is not effective for toll fraud detection. In addition, conventional novelty detection algorithms have struggled to optimize their parameters to achieve stable detection performance. To resolve these limitations, the original Naïve Bayesian classifier is modified to handle the novelty detection problem. In addition, a genetic algorithm (GA) is employed to increase efficiency by selecting significant variables. To verify the performance of One-NB, comparative experiments using five well-known novelty detectors and three binary classifiers are conducted on real call data records (CDRs) provided by a Korean VoIP service company. The experimental results show that One-NB detects toll frauds more accurately than the other novelty detectors and binary classifiers when the toll fraud rate is relatively low. In addition, the performance of One-NB is found to be more stable than that of the benchmark methods, since One-NB requires no parameter optimization.
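    The general idea of one-class Naïve Bayes novelty detection can be sketched as follows (a hedged illustration of the principle, not the paper's One-NB): model only the "normal" class with per-feature Gaussians, score new records by log-likelihood, and flag low-likelihood records as potential fraud.

```python
import math

# Fit an independent Gaussian to each feature of the normal-class records.
def fit_normal_class(rows):
    cols = list(zip(*rows))
    params = []
    for col in cols:
        mu = sum(col) / len(col)
        var = sum((x - mu) ** 2 for x in col) / len(col) + 1e-9  # avoid var=0
        params.append((mu, var))
    return params

# Naive-Bayes independence assumption: sum per-feature Gaussian log-pdfs.
def log_likelihood(x, params):
    return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
               for xi, (mu, var) in zip(x, params))

normal = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)]
params = fit_normal_class(normal)
# A record near the normal cluster scores higher than a far-away outlier.
print(log_likelihood((1.0, 2.0), params) > log_likelihood((10.0, -5.0), params))
```

A deployment would then choose a likelihood threshold; the abstract's point is that, unlike many novelty detectors, this family needs little per-dataset parameter tuning.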
    Download PDF (360K)
  • Marthinus Christoffel DU PLESSIS, Masashi SUGIYAMA
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 5 Pages 1358-1362
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    We consider the problem of learning a classifier using only positive and unlabeled samples. In this setting, it is known that a classifier can be successfully learned if the class prior is available. However, in practice, the class prior is unknown and thus must be estimated from data. In this paper, we propose a new method to estimate the class prior by partially matching the class-conditional density of the positive class to the input density. By performing this partial matching in terms of the Pearson divergence, which we estimate directly without density estimation via lower-bound maximization, we can obtain an analytical estimator of the class prior. We further show that an existing class prior estimation method can also be interpreted as performing partial matching under the Pearson divergence, but in an indirect manner. The superiority of our direct class prior estimation method is illustrated on several benchmark datasets.
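    The setup can be written out in our own notation (an assumption about the formalization, not a quotation from the paper): with class prior $\theta = p(y=1)$, the input density mixes the two class-conditional densities, and the prior is estimated by partially matching the scaled positive-class density to the input density under the Pearson divergence.

```latex
% theta denotes the unknown class prior p(y = 1); notation is ours.
\begin{align}
  p(x) &= \theta\, p(x \mid y = 1) + (1 - \theta)\, p(x \mid y = -1), \\
  \hat{\theta} &= \operatorname*{arg\,min}_{\theta}\;
    \mathrm{PE}\bigl(\theta\, p(x \mid y = 1) \,\big\|\, p(x)\bigr), \\
  \mathrm{PE}(p \,\|\, q) &= \tfrac{1}{2} \int
    \Bigl(\tfrac{p(x)}{q(x)} - 1\Bigr)^{2} q(x)\, \mathrm{d}x .
\end{align}
```

The matching is "partial" because only the positive component of the mixture is fitted; estimating the divergence directly via lower-bound maximization is what removes the need for separate density estimation.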
    Download PDF (462K)
  • Sang Hyuck BAE, Jaewon PARK, CheolSe KIM, SeokWoo LEE, Woosup SHIN, Yo ...
    Type: LETTER
    Subject area: Human-computer Interaction
    2014 Volume E97.D Issue 5 Pages 1363-1366
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    In this letter, we evaluate the parasitic capacitance of an LCD touch panel and describe the implementation of a differential input sensing circuit and an algorithm suitable for large LCDs with an integrated touch function. When projected-capacitive touch sensors are integrated into a liquid crystal display, the sensors have a very large parasitic capacitance with the display elements. A differential input sensing circuit can detect the small changes in mutual capacitance caused by the touch of a finger. The circuit is realized using discrete components, and a printed-circuit-board touch panel is used for the evaluation of a large-sized LCD touch panel.
    Download PDF (900K)
  • Yurui XIE, Qingbo WU, Bing LUO, Chao HUANG, Liangzhi TANG
    Type: LETTER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 5 Pages 1367-1370
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    In this letter, we exploit a new framework for detecting non-specific objects by combining top-down and bottom-up cues. Specifically, a novel supervised discriminative dictionary learning method is proposed to learn coupled dictionaries for the object and non-object feature spaces in terms of the top-down cue. Unlike previous dictionary learning methods, the data reconstruction residual terms of the coupled feature spaces, the sparsity penalties on the representations, and an inconsistency regularizer for the learned dictionaries are all incorporated into a unified objective function. We then derive an iterative algorithm that alternately optimizes all the variables efficiently. To incorporate the bottom-up cue, the proposed discriminative dictionary learning is integrated with unsupervised dictionary learning to capture objectness windows in an image. Experimental results show that the proposed dictionary learning framework effectively solves the non-specific object detection problem and outperforms several established methods.
    Download PDF (3575K)
  • Wenming YANG, Guoli MA, Fei ZHOU, Qingmin LIAO
    Type: LETTER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 5 Pages 1371-1373
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    This study proposes a feature-level fusion method that uses finger veins (FVs) and finger dorsal texture (FDT) for personal authentication based on orientation selection (OS). The orientation codes obtained by the filters correspond to different parts of an image (foreground or background) and thus different orientations offer different levels of discrimination performance. We have conducted an orientation component analysis on both FVs and FDT. Based on the analysis, an OS scheme is devised which combines the discriminative orientation features of both modalities. Our experiments demonstrate the effectiveness of the proposed method.
    Download PDF (1146K)
  • Bin YAO, Hua WU, Yun YANG, Yuyan CHAO, Atsushi OHTA, Haruki KAWANAKA, ...
    Type: LETTER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 5 Pages 1374-1378
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    The Euler number of a binary image is an important topological property for pattern recognition, and can be calculated by counting certain bit-quads in the image. This paper proposes an efficient strategy for improving the bit-quad-based Euler number computing algorithm. By use of the information obtained when processing the previous bit quad, the number of times that pixels must be checked in processing a bit quad decreases from 4 to 2. Experiments demonstrate that an algorithm with our strategy significantly outperforms conventional Euler number computing algorithms.
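    The bit-quad counting being accelerated is the classic formulation (Gray's method); the following is an illustrative baseline re-implementation, not the letter's optimized two-pixel-check algorithm: scan every 2×2 window, count patterns with one foreground pixel (Q1), three (Q3), and two diagonal pixels (QD), and for 4-connectivity the Euler number is (Q1 − Q3 + 2·QD) / 4.

```python
# Baseline bit-quad Euler number for a binary image (4-connectivity).
def euler_number(img):
    h, w = len(img), len(img[0])
    # Pad with background so quads on the border are counted too.
    pad = [[0] * (w + 2)] + [[0] + row + [0] for row in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for i in range(h + 1):
        for j in range(w + 1):
            quad = (pad[i][j], pad[i][j + 1], pad[i + 1][j], pad[i + 1][j + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif quad in ((1, 0, 0, 1), (0, 1, 1, 0)):  # diagonal pairs
                qd += 1
    return (q1 - q3 + 2 * qd) // 4

# A solid square: one component, no holes -> Euler number 1.
print(euler_number([[1, 1], [1, 1]]))  # 1
```

This baseline inspects all four pixels of every quad; the letter's contribution is reusing the right-hand pixels of the previous quad so only two new pixels need checking per quad.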
    Download PDF (678K)
  • Fei ZHOU, Wen SUN, Qingmin LIAO
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 5 Pages 1379-1381
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    A new scheme based on multi-order visual comparison is proposed for full-reference image quality assessment. Inspired by the observation that various image derivatives have great but different effects on visual perception, we perform separate comparisons on different orders of image derivatives. To obtain an overall image quality score, we adaptively integrate the results of the different comparisons via a perception-inspired strategy. Experimental results on public databases demonstrate that the proposed method is more competitive than several state-of-the-art methods when benchmarked against subjective assessments given by human observers.
    Download PDF (343K)
  • Hwa-Soo WOO, Jong-Wha CHONG
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 5 Pages 1382-1385
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a contrast enhancement algorithm based on Adaptive Histogram Equalization (AHE) to improve image quality. Most histogram-based contrast enhancement methods suffer from excessive or insufficient contrast enhancement, which results in unnatural output images and the loss of visual information. The proposed method manipulates the slope of the input image's probability density function (PDF) histogram. We also propose a pixel redistribution method that uses convolution to compensate for excess pixels after the slope-modification procedure. Our method adaptively enhances the contrast of the input image and shows good simulation results compared with conventional methods.
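    A standard way to cap the slope of the equalization mapping, shown here as an illustrative sketch rather than the proposed algorithm, is clip-limited histogram equalization: histogram counts above a clip limit are redistributed uniformly, which bounds the slope of the resulting cumulative mapping.

```python
# Clip-limited histogram equalization on a flat list of intensities.
def clipped_equalize(pixels, levels=256, clip=2.0):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    limit = clip * n / levels            # max count per bin (caps slope)
    excess = sum(h - limit for h in hist if h > limit)
    hist = [min(h, limit) + excess / levels for h in hist]  # redistribute
    # The cumulative distribution gives the intensity mapping.
    cdf, total = [], 0.0
    for h in hist:
        total += h
        cdf.append(total)
    return [round((levels - 1) * cdf[p] / total) for p in pixels]

out = clipped_equalize([10, 10, 10, 200], levels=256)
print(min(out) >= 0 and max(out) <= 255)  # True
```

Without the clip, a histogram spike would produce a near-vertical segment in the mapping and hence over-enhancement; the clip limit is the knob that trades contrast gain against naturalness.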
    Download PDF (1816K)
  • Chang-shuai WANG, Jong-wha CHONG
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 5 Pages 1386-1389
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    In this paper, a novel White-RGB (WRGB) color-filter-array-based imaging system for cell phones is presented to reduce noise and reproduce color under low illumination. The core process is based on adaptive diagonal color separation, which recovers color components from a white signal using diagonal reference blocks and location-based color-ratio estimation in the luminance space. Experiments comparing our system with RGB and state-of-the-art WRGB approaches show that it performs well for images of various spatial frequencies and for color restoration in low-light environments.
    Download PDF (1063K)
  • Zhengcong WANG, Peng WANG, Hongguang ZHANG, Hongjun ZHANG, Shibao ZHEN ...
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 5 Pages 1390-1393
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    High Efficiency Video Coding (HEVC) is the latest video coding standard, developed by JCT-VC. In this letter, an encoding algorithm for the early termination of Coding Unit (CU) and Prediction Unit (PU) decisions based on texture direction is proposed for HEVC intra prediction. Experimental results show that the proposed algorithm reduces total encoding time by an average of 40% with negligible rate-distortion loss.
    Download PDF (614K)
  • Zhiwei RUAN, Guijin WANG, Xinggang LIN, Jing-Hao XUE, Yong JIANG
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 5 Pages 1394-1397
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    The transfer of prior knowledge from source domains can improve the performance of learning when the training data in a target domain are insufficient. In this paper, we propose a new strategy for transferring deformable part models (DPMs) for object detection, using offline-trained auxiliary DPMs of similar categories as source models to improve the performance of the target object detector. A DPM represents an object using a root filter and several part filters. We use the filters of the auxiliary DPMs as prior knowledge and adapt them to the target object. With a latent transfer learning method, appropriate local features are extracted for the transfer of part filters. Our experiments demonstrate that this strategy can lead to a detector superior to some state-of-the-art methods.
    Download PDF (871K)
  • Young-Seok CHOI
    Type: LETTER
    Subject area: Biological Engineering
    2014 Volume E97.D Issue 5 Pages 1398-1401
    Published: May 01, 2014
    Released: May 01, 2014
    JOURNALS FREE ACCESS
    This letter presents a new entropy measure for electroencephalograms (EEGs), which reflects the underlying dynamics of EEG over multiple time scales. The motivation behind this study is that neurological signals such as EEG possess distinct dynamics over different spectral modes. To deal with the nonlinear and nonstationary nature of EEG, the recently developed empirical mode decomposition (EMD) is incorporated, allowing an EEG to be decomposed into its inherent spectral components, referred to as intrinsic mode functions (IMFs). By calculating Shannon entropy of IMFs in a time-dependent manner and summing them over adaptive multiple scales, the result is an adaptive subscale entropy measure of EEG. Simulation and experimental results show that the proposed entropy properly reveals the dynamical changes over multiple scales.
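    The summation step can be sketched as follows (the EMD stage is omitted and the IMFs are assumed to be given; this is an illustration of the idea, not the letter's exact estimator): compute a windowed Shannon entropy of each IMF from an amplitude histogram, then sum the entropy traces across IMFs to obtain a multi-scale entropy of the EEG.

```python
import math

# Shannon entropy of one window, from a simple amplitude histogram.
def shannon_entropy(window, bins=8):
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0      # guard against a constant window
    counts = [0] * bins
    for x in window:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# One sliding-window entropy trace per IMF, summed sample-wise over scales.
def subscale_entropy(imfs, win=32):
    traces = [[shannon_entropy(imf[t:t + win]) for t in range(0, len(imf) - win)]
              for imf in imfs]
    return [sum(vals) for vals in zip(*traces)]

# Two synthetic "IMFs" at different oscillation scales stand in for the
# components an EMD would extract from a real EEG.
imf1 = [math.sin(0.3 * t) for t in range(200)]
imf2 = [math.sin(0.05 * t) for t in range(200)]
print(len(subscale_entropy([imf1, imf2])))  # length of the entropy trace
```

Because each IMF captures a distinct spectral mode, summing the per-IMF entropies weights dynamical changes at every scale rather than at one fixed window size.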
    Download PDF (779K)