IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E96.D, Issue 9
Displaying 1-40 of 40 articles from this issue
Special Section on Dependable Computing
  • Hiroshi TAKAHASHI
    2013 Volume E96.D Issue 9 Pages 1905-1906
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Download PDF (84K)
  • Nobuyasu KANEKAWA
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1907-1913
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper investigates the potential to improve fault-detection coverage by means of on-chip redundancy. The international standard on functional safety, IEC 61508 Ed. 2.0, Part 2, Annex E.3, prescribes an upper bound of 0.25 on βIC (the ratio of common cause failures (CCF) to all failures) in order to satisfy the frequency upper bound on dangerous failures of the safety function for SIL (Safety Integrity Level) 3. This paper argues, however, that βIC does not necessarily have to be less than 0.25 for SIL 3, and that its upper bound can be determined depending on the failure rate λ and the CCF detection coverage. In other words, the frequency upper bound on dangerous failures for SIL 3 can also be satisfied with βIC higher than 0.25 if the failure rate λ is lower than 400 FIT. Moreover, the paper shows that on-chip redundancy has the potential to satisfy the SIL 4 requirement: the frequency upper bound on dangerous failures for SIL 4 can be satisfied with feasible ranges of βIC, λ and CCF coverage that can be realized by redundant code (a simplified numerical sketch follows this entry).
    Download PDF (1256K)
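    The trade-off described in the entry above can be made concrete with a small calculation. The sketch below assumes a deliberately simplified model in which the residual dangerous failure rate of an on-chip redundant pair is λ·βIC·(1 − c), with c the CCF detection coverage, and compares it against the SIL 3 limit of 10⁻⁷ dangerous failures per hour; this formula, the function name residual_dangerous_rate, and the example parameter values are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch only: a simplified residual-CCF model, not the paper's exact formulation.
FIT = 1e-9  # 1 FIT = 1e-9 failures per hour

def residual_dangerous_rate(lam_fit, beta_ic, ccf_coverage):
    """Dangerous failure rate [1/h] remaining under on-chip redundancy, assuming
    (as a simplification) that only undetected common cause failures stay dangerous."""
    return lam_fit * FIT * beta_ic * (1.0 - ccf_coverage)

SIL3_PFH = 1e-7  # SIL 3 upper bound on dangerous failures per hour (high-demand/continuous mode)

# lambda = 400 FIT with beta_IC = 0.25 and no CCF coverage sits right at the SIL 3 bound.
print(residual_dangerous_rate(400, 0.25, 0.0))               # ~1e-7, i.e. the SIL 3 limit

# A lower failure rate allows beta_IC above 0.25 while still satisfying the bound.
print(residual_dangerous_rate(200, 0.45, 0.0) <= SIL3_PFH)   # True (9e-8)

# Nonzero CCF detection coverage (e.g. from redundant codes) relaxes the bound further.
print(residual_dangerous_rate(400, 0.40, 0.5) <= SIL3_PFH)   # True (8e-8)
```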
  • Masashi IMAI, Tomohiro YONEDA
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1914-1925
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We propose a fault diagnosis and reconfiguration method based on the Pair and Swap scheme to improve the reliability and the MTTF (Mean Time To Failure) of network-on-chip based multiple processor systems in which each processor core has its private memory. In the proposed scheme, two identical copies of a given task are executed on a pair of processor cores and the results are compared repeatedly in order to detect processor faults. If a fault is detected by a mismatch, the faulty core is identified and isolated using TMR (Triple Modular Redundancy), and the system is reconfigured using the redundant processor cores. Each task is quadruplicated and statically assigned to private memories so that each memory holds only two different tasks. We evaluate the reliability of the proposed quadruplicated task allocation scheme from the viewpoint of MTTF. As a result, the MTTF of the proposed scheme is over 4.3 times longer than that of the duplicated task allocation scheme.
    Download PDF (1609K)
  • Shohei KOTAKI, Masato KITAKAMI
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1926-1932
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    NAND Flash memories are widely used as data storage today. They are not intrinsically error free, since they are affected by several physical disturbances. Technology scaling and the introduction of multi-level cells (MLC) have improved data density, but they have also made errors more significant. Error control codes (ECC) are therefore essential to improve the reliability of NAND Flash memories. The efficiency of a code depends on the error characteristics of the system, and codes should be designed to reflect those characteristics. In MLC Flash memories, errors tend to shift cell values to neighboring levels; such errors are a class of M-ary asymmetric symbol errors. Several codes reflecting this asymmetric property have been proposed, but they are designed to correct only single-level shift errors, because almost all errors in the memories are of this type. However, technology scaling, increasing program/erase (P/E) cycles, and MLCs storing larger numbers of bits can cause multiple-level shifts. This paper proposes single-error control codes that can correct an error of more than one level shift. Because the number of levels to be corrected is selectable, the code can be matched to the noise magnitude. Furthermore, an error-detecting function for errors of larger shift can be added. For a certain parameter choice, the proposed codes are equivalent to conventional integer codes, which correct single-level shifts, so the proposed codes can be regarded as a generalization of integer codes (an illustrative sketch of limited-magnitude shift correction follows this entry). Evaluation results show that the information lengths for respective check-symbol lengths are larger than those of nonbinary Hamming codes and other M-ary asymmetric symbol error correcting codes.
    Download PDF (245K)
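    As background for the entry above, the sketch below illustrates one textbook-style way to correct a single limited-magnitude level-shift error with integer-code-like check sums: a sum modulo 2t+1 recovers the shift value, and a position-weighted sum modulo a prime locates the erroneous cell. It assumes the check symbols themselves are stored without error; the construction, the parameters t and p, and the example word are illustrative and are not the authors' proposed codes.

```python
# Sketch of single limited-magnitude (level-shift) error correction, assuming
# error-free check symbols. Illustrative construction only, not the paper's code.

def encode(levels, t, p):
    """Return two check sums for a word of MLC levels.
    t: maximum correctable shift magnitude; p: prime with p > len(levels) and p > t."""
    s1 = sum(levels) % (2 * t + 1)                               # recovers the shift e in [-t, t]
    s2 = sum((i + 1) * v for i, v in enumerate(levels)) % p      # recovers the error position
    return s1, s2

def correct(received, checks, t, p):
    """Correct at most one level-shift error of magnitude <= t."""
    s1, s2 = checks
    d1 = (sum(received) - s1) % (2 * t + 1)
    e = d1 if d1 <= t else d1 - (2 * t + 1)                      # shift value in [-t, t]
    if e == 0:
        return list(received)
    d2 = (sum((i + 1) * v for i, v in enumerate(received)) - s2) % p
    pos = (d2 * pow(e % p, -1, p)) % p - 1                       # solve (pos+1)*e = d2 (mod p)
    fixed = list(received)
    fixed[pos] -= e
    return fixed

word = [3, 7, 1, 0, 5, 2]            # 3-bit MLC levels (0..7)
t, p = 3, 11                          # correct shifts of up to 3 levels; prime p > len(word)
checks = encode(word, t, p)
corrupted = list(word); corrupted[4] += 2    # a 2-level shift error in cell 4
print(correct(corrupted, checks, t, p) == word)   # True
```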
  • Hiroyuki OKAMURA, Tadashi DOHI
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1933-1940
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper presents an opportunity-based software rejuvenation policy and the problem of optimizing the software rejuvenation trigger time so as to maximize a system performance index. Our model is based on the basic semi-Markov software rejuvenation model by Dohi et al. (2000), under an environment in which the possible times to execute software rejuvenation, called opportunities, are limited. In the paper, we consider two stochastic point processes, a renewal process and a Markovian arrival process, to represent the opportunity process. In particular, we analytically derive the existence condition of the optimal trigger time under the two point processes. In numerical examples, we illustrate the optimal design of the rejuvenation trigger schedule based on empirical data.
    Download PDF (411K)
  • Kumiko TADANO, Jianwen XIANG, Fumio MACHIDA, Yoshiharu MAENO
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1941-1951
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Large-scale disasters may cause simultaneous failures of many components in information systems. In the design for disaster recovery, operational procedures to recover from simultaneous component failures need to be determined so as to satisfy the time-to-recovery objective within a limited budget. For this purpose, it is beneficial to identify the smallest unacceptable combination of component failures (SUCCF), that is, the smallest combination whose recovery cost exceeds the acceptable cost for recovering the system. This reveals the limit of the recovery capability of the designed recovery operation procedure. In this paper, we propose a technique to identify the SUCCF by predicting the cost required to recover from each combination of component failures, with and without two-person cross-checking of the recovery operations. We synthesize analytic models from the description of the recovery operation procedure in the form of a SysML Activity Diagram, and solve the models to predict the time-to-recovery and the cost. An example recovery operation procedure for a commercial database management system is used to demonstrate the proposed technique.
    Download PDF (2147K)
  • Naoya ONIZAWA, Atsushi MATSUMOTO, Takahiro HANYU
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1952-1961
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper introduces open-wire fault-resilient multiple-valued codes for reliable asynchronous point-to-point global communication links. In the proposed encoding, the two communication modules are assigned complementary codewords that alternate between two valid states when no open-wire fault is present. Under an open-wire fault, the codewords at each module cannot reach one of the two valid states and remain in an “invalid” state. Detecting these invalid states makes it possible to stop sending wrong codewords caused by an open-wire fault. The detectability of the open-wire fault based on the proposed encoding is proven for m-of-n codes (a minimal illustration of the m-of-n validity check follows this entry). The proposed code used in the multiple-valued asynchronous global communication link is capable of detecting a single open-wire fault with 3.08 times higher coding efficiency than a conventional multiple-valued code used in a triple-modular redundancy (TMR) link that detects an open-wire fault under the same dynamic range of logical values.
    Download PDF (852K)
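    For the entry above, the essence of open-wire detection with an m-of-n code can be illustrated very simply: a valid word has exactly m active wires, and an open (stuck-inactive) wire changes that count, so the receiver flags the word as invalid. The binary sketch below shows only this weight check; the paper's multiple-valued, complementary-codeword encoding is not reproduced, and the example codeword is arbitrary.

```python
# Minimal illustration of why m-of-n codes expose open-wire faults: a valid word has
# exactly m active wires, and an open wire reduces that count. Binary illustration only;
# the paper's multiple-valued encoding is not reproduced here.

def is_valid(word, m):
    """A received n-wire word is valid iff exactly m wires are active."""
    return sum(word) == m

codeword = [1, 0, 1, 1, 0]           # a 3-of-5 codeword
print(is_valid(codeword, m=3))        # True

open_wire = 2                         # an open wire forces that line to the inactive level
faulty = list(codeword)
faulty[open_wire] = 0
print(is_valid(faulty, m=3))          # False: the weight dropped to 2, so the fault is detected
```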
  • Nobutaka KITO, Naofumi TAKAGI
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1962-1970
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We propose a low-overhead fault-secure parallel prefix adder. Carry bits are duplicated for checking purposes, and only half of the normal carry bits are compared with the corresponding redundant carry bits, so the hardware overhead of the adder is low. For concurrent error detection, the parity of the result is also predicted (a generic illustration of parity prediction follows this entry). Since the adder uses parity-based error detection, it is highly compatible with systems that already employ parity-based error detection. Various fault-secure parallel prefix adders can be implemented, such as the Sklansky, Brent-Kung, Han-Carlson, and Kogge-Stone adders. The area overhead of the proposed adder is about 15% lower than that of a previously proposed adder that compares all the carry bits.
    Download PDF (993K)
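    The parity relation that parity-predicted adders exploit is that the parity of the sum equals the parity of the two operands XORed with the parity of all carries. The sketch below checks this relation on a plain ripple-carry model and shows that a single flipped sum bit breaks it; it is a generic illustration of parity prediction for concurrent error detection, not the proposed prefix adder or its duplicated-carry comparison.

```python
# Generic illustration of parity prediction in an adder (not the paper's prefix-adder design).

def add_with_carries(a, b, width):
    """Ripple-carry addition returning the sum bits and the carry into each bit position."""
    s_bits, carries, c = [], [], 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        carries.append(c)
        s_bits.append(ai ^ bi ^ c)
        c = (ai & bi) | (c & (ai ^ bi))
    return s_bits, carries

def parity(bits):
    p = 0
    for bit in bits:
        p ^= bit
    return p

def bits_of(x, width):
    return [(x >> i) & 1 for i in range(width)]

WIDTH = 8
A, B = 0x5A, 0x3C
s_bits, carries = add_with_carries(A, B, WIDTH)

# Parity relation used for concurrent checking: parity(sum) = parity(A) ^ parity(B) ^ parity(carries).
predicted = parity(bits_of(A, WIDTH)) ^ parity(bits_of(B, WIDTH)) ^ parity(carries)
print(parity(s_bits) == predicted)   # True for a fault-free addition

# A fault that flips one sum bit breaks the relation, so it is detected.
s_bits[3] ^= 1
print(parity(s_bits) == predicted)   # False: the single error is flagged
```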
  • A.K.M. Mahfuzul ISLAM, Hidetoshi ONODERA
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1971-1979
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper proposes the use of on-chip monitor circuits to detect process shift and process spread for post-silicon diagnosis and model-hardware correlation. The amounts of shift and spread allow test engineers to decide on the correct test strategy. Monitor structures suitable for detecting process shift and process spread are discussed. Test chips targeting a nominal process corner as well as the four corners “slow-slow”, “fast-fast”, “slow-fast” and “fast-slow” are fabricated in a 65 nm process. The monitor structures correctly detect the location of each chip in the process space. The outputs of the monitor structures are further analyzed and decomposed into process variations in threshold voltage and gate length for model-hardware correlation. Using the extracted parameter shifts, path delay predictions closely match the silicon values. On-chip monitors capable of detecting process shift and process spread are helpful for performance prediction of digital and analog circuits, adaptive delay testing, and post-silicon statistical analysis.
    Download PDF (921K)
  • Susumu KOBAYASHI, Fumihiro MINAMI
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1980-1985
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    As LSI process technology advances and gate sizes become smaller, the signal delay on interconnect becomes a significant factor in the signal path delay. Also, as interconnect structures become smaller, interconnect process variations have become one of the dominant factors influencing signal delay and thus clock skew. Therefore, controlling the influence of interconnect process variations on clock skew is a crucial issue in advanced process technologies. In this paper, we propose a method for minimizing clock skew fluctuations caused by interconnect process variations. The proposed method identifies a suitable balance of clock buffer size and wire length in order to minimize these fluctuations. Experimental results on test circuits in a 28 nm process technology show that the proposed method reduces the clock skew fluctuations by 30-92% compared with the conventional method.
    Download PDF (1413K)
  • Hiroyuki YOTSUYANAGI, Hiroyuki MAKIMOTO, Takanobu NIMIYA, Masaki HASHI ...
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1986-1993
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper proposes a method for testing delay faults using a boundary scan circuit in which a time-to-digital converter (TDC) is embedded. Incoming transitions from other cores or chips are captured at the boundary scan circuit. The TDC circuit is modified so that an initial value can be set for the delay line through which the transition propagates. The condition for measuring the timing slacks of two or more paths is also investigated, since signals may overlap in the delay line of the TDC in our boundary scan circuit. An experimental IC with the TDC-based boundary scan is fabricated, and the delays of several paths are estimated from measurements with the TDC embedded in the boundary scan cells. Simulation results for a benchmark circuit with the boundary scan circuit also show that the timing slacks of multiple paths can be observed even when the signals overlap in the TDC.
    Download PDF (881K)
  • Hiroshi YAMAZAKI, Motohiro WAKAZONO, Toshinori HOSOKAWA, Masayoshi YOS ...
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 1994-2002
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In recent years, the growing density and complexity of VLSIs have led to an increase in the number of test patterns and fault models. Test patterns used in VLSI testing are required to provide high quality at low cost. Don't care (X) identification techniques and X-filling techniques are methods for satisfying these requirements. However, conventional X-identification techniques are less effective for application-specific purposes such as test compaction, because the X-bits concentrate on particular primary inputs and pseudo primary inputs. In this paper, we propose a don't care identification method for test compaction. Experimental results for the ITC'99 and ISCAS'89 benchmark circuits show that a given test set can be efficiently compacted by the proposed method.
    Download PDF (1480K)
  • Kohei MIYASE, Ryota SAKAI, Xiaoqing WEN, Masao ASO, Hiroshi FURUKAWA, ...
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 2003-2011
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Test power has become a critical issue, especially for low-power devices with deeply optimized functional power profiles. In particular, excessive capture power in at-speed scan testing may cause timing failures that result in test-induced yield loss. This has made capture-safety checking mandatory for test vectors. However, previous capture-safety checking metrics suffer from inadequate accuracy, since they ignore the time relations among the different transitions caused by a test vector in a circuit. This paper presents a novel metric, the Transition-Time-Relation-based (TTR) metric, which takes transition time relations into consideration in capture-safety checking. Detailed analysis of an industrial circuit demonstrates the advantages of the TTR metric. Capture-safety checking with the TTR metric greatly improves the accuracy of test vector sign-off and low-capture-power test generation.
    Download PDF (1686K)
  • Senling WANG, Yasuo SATO, Seiji KAJIHARA, Kohei MIYASE
    Article type: PAPER
    2013 Volume E96.D Issue 9 Pages 2012-2020
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this paper we propose a novel method to reduce the power consumed by test responses during the scan-out operation in logic BIST. The proposed method overwrites some flip-flop (FF) values before starting the scan shift so as to reduce the switching activity of the scan-out operation. In order to limit the fault coverage loss caused by filling new FF values before the captured values are observed, the method employs multi-cycle scan testing with partial observation. To obtain a larger scan-out power reduction with less fault coverage loss, and to prevent an increase in hardware overhead, the FFs to be filled are selected at a predetermined ratio. Three value-filling methods are prepared so as to achieve a larger scan-out power reduction. Experiments on the ITC'99 benchmark circuits show the effectiveness of the methods: with 20% of the FFs selected, nearly 51% reduction in scan-out power and 57% reduction in peak scan-out power are achieved with little fault coverage loss, while the hardware overhead is only 0.05%.
    Download PDF (2274K)
  • Jongbin KO, Seokjun LEE, Yong-hun LIM, Seong-ho JU, Taeshik SHON
    Article type: LETTER
    2013 Volume E96.D Issue 9 Pages 2021-2025
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    With the proliferation of smart grids and the construction of various electric IT systems and networks, a next-generation substation automation system (SAS) based on IEC 61850 has been agreed upon as a core element of smart grids. However, research on security vulnerability analysis and quantification for automated substations is still in a preliminary phase. In particular, existing security vulnerability quantification approaches are not suitable for IEC 61850-based SAS because of its heterogeneous characteristics. In this paper, we propose an IEC 61850-based SAS network modeling and evaluation approach for security vulnerability quantification. The proposed approach uses network-level and device groupings to categorize the characteristics of the SAS. In addition, novel attack scenarios are proposed through a zoning scheme to evaluate the network model. Finally, an MTTC (Mean Time-to-Compromise) scheme is used to verify the proposed network model using a sample attack scenario.
    Download PDF (618K)
  • Tsu-Lin LI, Masaki HASHIZUME, Shyue-Kung LU
    Article type: LETTER
    2013 Volume E96.D Issue 9 Pages 2026-2030
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    NROM is an emerging non-volatile memory technology and a promising replacement for current floating-gate-based non-volatile memories such as flash memory. In order to raise the fabrication yield and enhance reliability, a novel test and repair flow is proposed in this paper. Instead of conventional fault replacement techniques, a novel fault masking technique is exploited that considers the logical effects of physical defects when the customer's code is programmed. In order to maximize the possibility of fault masking, a novel data inversion technique is proposed. The corresponding BIST architectures are also presented. According to experimental results, the repair rate and fabrication yield can be improved significantly, while the incurred hardware overhead is almost negligible.
    Download PDF (588K)
  • Hideki YOSHIKAWA, Masahiro KAMINAGA, Arimitsu SHIKODA, Toshinori SUZUK ...
    Article type: LETTER
    2013 Volume E96.D Issue 9 Pages 2031-2035
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We present a round addition differential fault analysis (DFA) for some lightweight 80-bit block ciphers. It is shown that only one correct ciphertext and two faulty ciphertexts are required to reconstruct the secret keys of 80-bit Piccolo and TWINE, and that the reconstruction is easier than for 128-bit CLEFIA.
    Download PDF (417K)
  • Keehang KWON, Sungwoo HUR, Mi-Young PARK
    Article type: LETTER
    2013 Volume E96.D Issue 9 Pages 2036-2038
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    To deal with failures as simply as possible, we propose a new foundation for core (untyped) C++, based on a new logic called task logic or imperative logic. We then introduce a sequential-disjunctive statement of the form S : R. This statement has the following semantics: execute S and R sequentially, and consider the statement a success if at least one of S and R succeeds. It is useful for dealing with inessential errors without explicitly catching them (an emulation of this semantics is sketched after this entry).
    Download PDF (58K)
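    The sequential-disjunctive semantics described in the entry above can be emulated in any language with exceptions. The sketch below is such an emulation (in Python, purely for illustration; the paper targets core C++): both statements are executed in order, and the construct succeeds if at least one of them succeeds.

```python
# Emulation of the sequential-disjunctive statement "S : R" for illustration only:
# execute S and R sequentially; the whole statement succeeds if at least one succeeds.

def seq_disj(s, r):
    ok = False
    for stmt in (s, r):           # both statements are always executed, in order
        try:
            stmt()
            ok = True             # at least one statement succeeded
        except Exception:
            pass                  # an inessential failure is tolerated, not propagated
    return ok

def write_log():
    raise IOError("log device unavailable")   # an inessential error

def do_work():
    print("work done")

print(seq_disj(write_log, do_work))   # prints "work done", then True
```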
Regular Section
  • Ying-Dar LIN, Kuei-Chung CHANG, Yuan-Cheng LAI, Yu-Sheng LAI
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 9 Pages 2039-2046
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Applications on embedded devices run under tight constraints on computation and energy resources. Thus, it is important that applications running on these resource-constrained devices are aware of the energy constraint and are able to execute efficiently. Existing execution-time and energy profiling tools can help developers identify application bottlenecks. However, such tools need a large space to store detailed profiling data at runtime, which is a hard demand on embedded devices. In this article, a reconfigurable multi-resolution profiling (RMP) approach is proposed to handle this issue on embedded devices. It first instruments all profiling points into the source code of the target application and framework. Developers can then narrow down the causes of a bottleneck step by step by adjusting the profiling scope with the configuration tool, without recompiling the profiled targets. RMP has been implemented as an open source tool on Android systems. Experimental results show that the required log space using RMP for a web browser application is 25 times smaller than that of the Android debug class, and the profiling error rate of execution time is 24 times lower than that of the debug class. In addition, the CPU and memory overheads of RMP are only 5% and 6.53%, respectively, for the browsing scenario.
    Download PDF (1391K)
  • Masayuki SATO, Ryusuke EGAWA, Hiroyuki TAKIZAWA, Hiroaki KOBAYASHI
    Article type: PAPER
    Subject area: Computer System
    2013 Volume E96.D Issue 9 Pages 2047-2054
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Chip multiprocessors (CMPs) improve performance by simultaneously executing multiple threads on integrated multiple cores. However, since these cores commonly share one cache, inter-thread cache conflicts often limit the performance improvement from multi-threading. This paper focuses on two causes of inter-thread cache conflicts. In shared caches of CMPs, cached data fetched by one thread are frequently evicted by another thread. Such an eviction, called an inter-thread kickout (ITKO), is one of the major causes of inter-thread cache conflicts. The other cause is capacity shortage, which occurs when one cache is shared by threads demanding large cache capacities. If the total capacity demanded by the threads exceeds the actual cache capacity, the threads compete for the limited cache capacity, resulting in capacity shortage. To address inter-thread cache conflicts, we must take into account both ITKOs and capacity shortage. Therefore, this paper proposes a capacity-aware thread scheduling method combined with cache partitioning. In the proposed method, inter-thread cache conflicts due to ITKOs and capacity shortage are decreased by cache partitioning and thread scheduling, respectively. The proposed scheduling method estimates the capacity demand of each thread with the estimation method used in the cache partitioning mechanism. Based on this estimation, the thread scheduler decides which threads share each cache so as to avoid capacity shortage (a simplified pairing sketch follows this entry). Evaluation results suggest that the proposed method can improve overall performance by up to 8.1%, and the performance of individual threads by up to 12%. The results also show that both cache partitioning and thread scheduling are indispensable for avoiding ITKOs and capacity shortage simultaneously. Accordingly, the proposed method can significantly reduce inter-thread cache conflicts and hence improve performance.
    Download PDF (1092K)
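    As a rough illustration of the scheduling idea in the entry above, the sketch below greedily pairs threads so that, where possible, the sum of their estimated capacity demands fits within the shared cache. The two-threads-per-cache assumption, the greedy heuristic, and the demand values are simplifications invented for illustration; the paper's scheduler and its demand estimator are not reproduced.

```python
# Illustrative capacity-aware pairing of threads onto shared caches (simplified heuristic,
# not the paper's scheduler). Demands would come from the cache-partitioning estimator.

def pair_threads(demands_mb, cache_mb):
    """Greedily pair the most demanding thread with the largest partner that still fits,
    so that, where possible, each pair's total demand stays within the shared cache."""
    order = sorted(demands_mb, key=demands_mb.get, reverse=True)
    pairs = []
    while len(order) >= 2:
        heavy = order.pop(0)
        # largest partner that still fits the cache; otherwise fall back to the lightest one
        partner = next((t for t in order
                        if demands_mb[heavy] + demands_mb[t] <= cache_mb), order[-1])
        order.remove(partner)
        pairs.append((heavy, partner))
    if order:
        pairs.append((order[0],))
    return pairs

demands = {"t0": 3.0, "t1": 0.5, "t2": 2.5, "t3": 1.0}   # estimated demands in MB (assumed)
print(pair_threads(demands, cache_mb=4.0))
# [('t0', 't3'), ('t2', 't1')] -- each pair fits within the 4 MB shared cache
```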
  • Hiroki SHIRAYANAGI, Hiroshi YAMADA, Kenji KONO
    Article type: PAPER
    Subject area: Software System
    2013 Volume E96.D Issue 9 Pages 2055-2064
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Current network elements consume 10-20% of the total power in data centers. Today's network elements are not energy-proportional and consume a constant amount of energy regardless of the amount of traffic. Thus, turning off unused network switches is the most efficient way of reducing the energy consumption of data center networks. This paper presents Honeyguide, an energy optimizer for data center networks that not only turns off inactive switches but also increases the number of inactive switches for better energy-efficiency. To this end, Honeyguide combines two techniques: 1) virtual machine (VM) and traffic consolidation, and 2) a slight extension to the existing tree-based topologies. Honeyguide has the following advantages. The VM consolidation, which is gracefully combined with traffic consolidation, can handle severe requirements on fault tolerance. It can be introduced into existing data centers without replacing the already-deployed tree-based topologies. Our simulation results demonstrate that Honeyguide can reduce the energy consumption of network elements better than the conventional VM migration schemes, and the savings are up to 7.8% in a fat tree with k=12.
    Download PDF (1163K)
  • Ye WANG, Xiaohu YANG, Cheng CHANG, Alexander J. KAVS
    Article type: PAPER
    Subject area: Software Engineering
    2013 Volume E96.D Issue 9 Pages 2065-2074
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Natural language (NL) requirements are usually human-centric and therefore error-prone and inaccurate. In order to improve the 3Cs of natural language requirements, namely Consistency, Correctness and Completeness, in this paper we propose a systematic pattern matching approach supporting both NL requirements modeling and inconsistency, incorrectness and incompleteness analysis among requirements. We first use business process modeling language to model NL requirements and then develop a formal language — Workflow Patterns-based Process Language (WPPL) — to formalize NL requirements. We leverage workflow patterns to perform two-level 3Cs checking on the formal representation based on a coherent set of checking rules. Our approach is illustrated through a real world financial service example — Global Equity Trading System (GETS).
    Download PDF (4326K)
  • Yongseok OH, Jongmoo CHOI, Donghee LEE, Sam H. NOH
    Article type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2013 Volume E96.D Issue 9 Pages 2075-2086
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    The Log-structured File System (LFS) transforms random writes into large sequential writes to provide superior write performance on storage devices. However, LFS inherently suffers from the overhead incurred by cleaning segments. Specifically, when file system utilization is high and the system is busy, the write performance of LFS degrades significantly due to the high cleaning cost. Also, in newer flash-memory-based SSD storage devices, cleaning reduces SSD lifetime because it incurs additional writes. In this paper, we propose an enhancement to the original LFS to alleviate the performance degradation due to cleaning when the system is busy. The new scheme, which we call Slack Space Recycling (SSR), allows LFS to delay on-demand cleaning during busy hours so that cleaning may be done when the load is much lighter. Specifically, it writes modified data directly to invalid areas (slack space) of used segments instead of cleaning on demand, deferring cleaning until later. SSR also has the added benefit of increasing the lifetime of the now popular SSD storage devices. We implement the new SSR-LFS file system in Linux and perform a large set of experiments. The results of these experiments show that the SSR scheme significantly improves the performance of LFS for a wide range of storage utilization settings and that the lifetime of SSDs is extended considerably.
    Download PDF (1570K)
  • Yutaka KATSUYAMA, Yoshinobu HOTTA, Masako OMACHI, Shinichiro OMACHI
    Article type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 9 Pages 2087-2095
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten the processing time, recognition is usually split into separate pre-classification and precise recognition stages. For high overall recognition performance, the pre-classification stage must have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification because of its use of a hash table and its reliance on simple logical bit operations to select categories, both of which make it highly efficient (a minimal sketch of this style of pre-classification follows this entry). However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take the distribution of the data into account. We propose a novel method based on the AM method that satisfies the performance criteria described above but in a fraction of the time, by modifying the hash table to reduce the range of each category of training characters. Furthermore, we show that our approach outperforms pre-classification by VQ clustering, ANN, LSH and AM in terms of classification accuracy, the number of candidate categories, and total processing time across an evaluation test set comprising 116,528 Japanese character images.
    Download PDF (686K)
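    The entry above builds on hash-table pre-classification in the style of associative matching. A minimal sketch of that style is shown below: each feature axis is quantized into buckets, each bucket holds a bitmask of the categories whose [min, max] range on that axis covers it, and a query's candidate set is the bitwise AND of the masks of the buckets it falls into. The bucket size, the feature samples, and the categories are invented for illustration, and the paper's range-reduction refinement is not reproduced.

```python
# Sketch of hash-table pre-classification with per-axis range buckets and bitmask
# intersection (in the spirit of associative matching; details here are illustrative).

def build_table(category_features, bucket, n_buckets):
    """For each axis and bucket, store a bitmask of the categories whose [min, max]
    range on that axis covers the bucket."""
    n_axes = len(next(iter(category_features.values()))[0])
    table = [[0] * n_buckets for _ in range(n_axes)]
    cat_ids = {c: i for i, c in enumerate(category_features)}
    for cat, samples in category_features.items():
        bit = 1 << cat_ids[cat]
        for axis in range(n_axes):
            vals = [s[axis] for s in samples]
            lo, hi = int(min(vals) // bucket), int(max(vals) // bucket)
            for b in range(lo, hi + 1):
                table[axis][b] |= bit
    return table, cat_ids

def candidates(query, table, cat_ids, bucket):
    """Bitwise AND of the bucket masks along every axis selects the candidate categories."""
    mask = -1
    for axis, value in enumerate(query):
        mask &= table[axis][int(value // bucket)]
    return [c for c, i in cat_ids.items() if mask >> i & 1]

training = {                      # invented 2-D feature samples per character category
    "A": [(1.0, 8.0), (1.5, 7.5)],
    "B": [(4.0, 4.0), (4.5, 3.5)],
    "C": [(1.2, 7.8), (0.8, 8.2)],
}
table, ids = build_table(training, bucket=1.0, n_buckets=16)
print(candidates((1.1, 7.9), table, ids, bucket=1.0))   # ['A', 'C']
```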
  • Bei HE, Guijin WANG, Chenbo SHI, Xuanwu YIN, Bo LIU, Xinggang LIN
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 9 Pages 2096-2106
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Based on sample-pair refinement and local optimization, this paper proposes a high-accuracy and fast matting algorithm. First, in order to gather foreground/background samples effectively, we shoot rays in hybrid (gradient and uniform) directions. This strategy utilizes prior knowledge to adjust the directions for effective searching. Second, we refine the sample-pairs of pixels by taking neighboring pixels into account. Both high-confidence sample-pairs and usable foreground/background components are utilized, and thus more accurate and smoother matting results are achieved. Third, to reduce the computational cost of sample-pair selection in coarse matting, this paper proposes an adaptive sample clustering approach. Most redundant samples are eliminated adaptively, so the computational cost decreases significantly. Finally, we convert fine matting into a de-noising problem, which is optimized by minimizing the observation and state errors iteratively and locally. This leads to lower space and time complexity compared with global optimization. Experiments demonstrate that we outperform other state-of-the-art methods in local matting in both accuracy and efficiency.
    Download PDF (6382K)
  • Danyi LI, Weifeng LI, Qingmin LIAO
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 9 Pages 2107-2114
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we propose a hybrid fuzzy geometric active contour method, which embeds the spatial fuzzy clustering into the evolution of geometric active contour. In every iteration, the evolving curve works as a spatial constraint on the fuzzy clustering, and the clustering result is utilized to construct the fuzzy region force. On one hand, the fuzzy region force provides a powerful capability to avoid the leakages at weak boundaries and enhances the robustness to various noises. On the other hand, the local information obtained from the gradient feature map contributes to locating the object boundaries accurately and improves the performance on the images with heterogeneous foreground or background. Experimental results on synthetic and real images have shown that our model can precisely extract object boundaries and perform better than the existing representative hybrid active contour approaches.
    Download PDF (1350K)
  • Wei-Ho TSAI, Jun-Wei LIN, Der-Chang TSENG
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 9 Pages 2115-2125
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This study extends conventional fingerprint recognition from a supervised to an unsupervised framework. Instead of enrolling fingerprints from known persons to identify unknown fingerprints, our aim is to partition a collection of unknown fingerprints into clusters, so that each cluster consists of fingerprints from the same finger and the number of generated clusters equals the number of distinct fingers involved in the collection. Such an unsupervised framework is helpful in handling the situation where a collection of captured fingerprints is not from the enrolled people. The task of fingerprint clustering is formulated as a problem of minimizing the clustering errors characterized by the Rand index (a plain implementation of the Rand index follows this entry). We estimate the Rand index by computing the similarities between fingerprints and then apply a genetic algorithm to optimize it. Experiments conducted using the FVC2002 database show that the proposed fingerprint clustering method outperforms an intuitive method based on hierarchical agglomerative clustering. The experiments also show that the number of clusters determined by our system is close to the true number of distinct fingers involved in the collection.
    Download PDF (1462K)
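    The Rand index used as the clustering criterion in the entry above counts, over all item pairs, how often two clusterings agree on whether a pair belongs together. A plain pairwise implementation is sketched below; the paper estimates the corresponding quantity from fingerprint similarities and optimizes it with a genetic algorithm, neither of which is reproduced here, and the label vectors are invented examples.

```python
# Plain pairwise Rand index between two clusterings (labels are per-item cluster ids).
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of item pairs on which the two clusterings agree
    (both place the pair in the same cluster, or both place it in different clusters)."""
    assert len(labels_a) == len(labels_b)
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total

truth   = [0, 0, 1, 1, 2, 2]       # fingerprints 0..5 grouped by true finger
cluster = [0, 0, 1, 2, 2, 2]       # a hypothetical clustering result
print(round(rand_index(truth, cluster), 3))   # 0.8
```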
  • Song-Hyon KIM, Kyong-Ha LEE, Inchul SONG, Hyebong CHOI, Yoon-Joon LEE
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 9 Pages 2126-2130
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We address the problem of processing graph pattern matching queries over a massive set of data graphs in this letter. As the number of data graphs is growing rapidly, it is often hard to process such queries with serial algorithms in a timely manner. We propose a distributed graph querying algorithm, which employs feature-based comparison and a filter-and-verify scheme working on the MapReduce framework. Moreover, we devise an efficient scheme that adaptively tunes a proper feature size at runtime by sampling data graphs. With various experiments, we show that the proposed method outperforms conventional algorithms in terms of scalability and efficiency.
    Download PDF (519K)
  • Eui-Young LEE, Hyoung-Kyu SONG
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 9 Pages 2131-2134
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    Conventional hybrid STBC schemes achieve worse BER performance for STBC detection than conventional STBC schemes, since the SM symbols interfere with the STBC symbols. Therefore, this letter proposes an improved scheme for hybrid STBC systems, in which STBC and SM schemes are combined into a hybrid space-time block code system. Our approach effectively obtains both diversity gain and spectral-efficiency gain. The proposed scheme offers improved BER performance since it uses iterative detection. Moreover, it effectively increases the data rate with only a small performance loss.
    Download PDF (462K)
  • Tao LIU, Tianrui LI, Yihong CHEN
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 9 Pages 2135-2138
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this letter, a distributed TDMA-based data gathering scheme for wireless sensor networks, called DTDGS, is proposed in order to avoid transmission collisions, achieve a high level of power conservation, and improve network lifetime. Our study is based on corona-based network division and a distributed TDMA-based scheduling mechanism. Unlike a centralized algorithm, DTDGS does not need a centralized gateway to assign the transmission time slots and compute the route for each node. In DTDGS, each node selects its transmission slots and next-hop forwarding node according to information gathered from neighboring nodes. It aims at avoiding transmission collisions and balancing energy consumption among nodes in the same corona. Compared with previous data gathering schemes, DTDGS is highly scalable and energy efficient. Simulation results show the high energy efficiency of DTDGS.
    Download PDF (853K)
  • Pum Mo RYU, Myung-Gil JANG, Hyun-Ki KIM, So-Young PARK
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 9 Pages 2139-2142
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We propose a novel method for knowledge consolidation based on a knowledge graph as the next step after relation extraction from text. The knowledge consolidation method consists of entity consolidation and relation consolidation. During the entity consolidation process, identical entities are found and merged using both name-similarity and relation-similarity measures. In the relation consolidation process, incorrect relations are removed using cardinality properties, temporal information, and relation weights in the given graph structure. In our experiment, we generated compact and clean knowledge graphs in which the numbers of entities and relations are reduced by 6.1% and 17.4%, respectively, while the relation accuracy increases from 77.0% to 85.5%.
    Download PDF (744K)
  • Sixuan ZHAO, Soo Ngee KOH, Kang Kwong LUKE
    Article type: LETTER
    Subject area: Educational Technology
    2013 Volume E96.D Issue 9 Pages 2143-2146
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This paper proposes prosodic unit based segmentation for prosody evaluation by using pitch accent detection and forced alignment techniques. Support Vector Machine (SVM) is used to evaluate the prosody of non-native English speakers without reference utterances. Experimental results show the superiority of prosodic unit segmentation over word segmentation in terms of classification accuracy and dimension of the feature vectors used by SVM.
    Download PDF (620K)
  • Chun WANG, Zhongyuan LAI, Hongyuan WANG
    Article type: LETTER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 9 Pages 2147-2151
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we propose the Perceptual Shape Decomposition (PSD) to detect fingers for a Kinect-based hand gesture recognition system. The PSD is formulated as a discrete optimization problem by removing all negative minima with minimum cost. Experiments show that our PSD is perceptually relevant and robust against distortion and hand variations, and thus improves the recognition system performance.
    Download PDF (618K)
  • Yongwon JEONG, Sangjun LIM, Young Kuk KIM, Hyung Soon KIM
    Article type: LETTER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 9 Pages 2152-2155
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    We present an acoustic model adaptation method where the transformation matrix for a new speaker is given by the product of bases and a weight matrix. The bases are built from the parallel factor analysis 2 (PARAFAC2) of training speakers' transformation matrices. We perform continuous speech recognition experiments using the WSJ0 corpus.
    Download PDF (140K)
  • Kun-Ching WANG
    Article type: LETTER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 9 Pages 2156-2161
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This letter introduces an innovative VAD based on horizontal spectral entropy with long span of time (HSELT) feature sets to improve mobile ASR performance in low signal-to-noise ratio (SNR) conditions. Since the signal characteristics of nonstationary noise change with time, long-term information about the noisy speech signal is needed to define a more robust decision rule yielding high accuracy. We find that HSELT measures can horizontally enhance the transition between speech and non-speech segments. Based on this finding, we use the HSELT measures to achieve high accuracy in detecting speech signals in various stationary and nonstationary noises (a generic spectral-entropy sketch follows this entry).
    Download PDF (720K)
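    For reference on the entry above, the sketch below computes an ordinary per-frame spectral entropy from the FFT magnitude spectrum of a framed signal; high entropy indicates a flat (noise-like) spectrum and low entropy a peaked (speech- or tone-like) one. The paper's HSELT feature additionally accumulates such entropy horizontally over a long time span, which is not reproduced, and the frame length, hop size, and test signals are arbitrary choices.

```python
# Per-frame spectral entropy of a signal (generic building block; the paper's
# long-span HSELT aggregation is not reproduced). Frame/FFT sizes are arbitrary.
import numpy as np

def spectral_entropy_frames(x, frame_len=256, hop=128):
    entropies = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        prob = power / (power.sum() + 1e-12)            # normalize the spectrum to a distribution
        entropies.append(-np.sum(prob * np.log2(prob + 1e-12)))
    return np.array(entropies)

rng = np.random.default_rng(0)
noise = rng.normal(size=4000)                             # broadband noise: high entropy
tone = np.sin(2 * np.pi * 440 * np.arange(4000) / 8000)   # narrowband tone: low entropy
print(spectral_entropy_frames(noise).mean() > spectral_entropy_frames(tone).mean())  # True
```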
  • Jangwon LEE, Kugjin YUN, Doug Young SUH, Kyuheon KIM
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 9 Pages 2162-2165
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    This letter proposes a new delivery format in order to realize unified transmissions of stereoscopic video contents over a dynamic adaptive streaming scheme. With the proposed delivery format, various forms of stereoscopic video contents regardless of their encoding and composition types can be delivered over the current dynamic adaptive streaming scheme. In addition, the proposed delivery format supports dynamic and efficient switching between 2D and 3D sequences in an interoperable manner for both 2D and 3D digital devices, regardless of their capabilities. This letter describes the designed delivery format and shows dynamic interoperable applications for 2D and 3D mixed contents with the implemented system in order to verify its features and efficiency.
    Download PDF (1216K)
  • Youngsoo PARK, Taewon KIM, Namho HUR
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 9 Pages 2166-2169
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    A method of frame synchronization between color video and depth-map video for depth-based 3D video using edge coherence is proposed. We find a synchronized pair of frames using edge coherence by computing the maximum number of overlapped edge pixels between the color video and the depth-map video in regions of temporal frame difference (a simplified sketch follows this entry). The experimental results show that the proposed method can be used for synchronization of depth-based 3D video and that it is robust against Gaussian noise with σ less than 30 and against H.264/AVC video compression with QP less than 44.
    Download PDF (849K)
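    The core idea of the entry above, counting overlapping edge pixels between a color frame and candidate depth-map frames and picking the temporal offset with the largest overlap, is sketched below. The simple gradient-threshold edge detector, the synthetic moving-square frames, and the search range are stand-ins chosen for illustration; the paper's edge extraction and its restriction to regions of temporal frame difference are not reproduced.

```python
# Sketch of edge-coherence frame synchronization: choose the depth-frame offset whose edge
# map overlaps the color frame's edge map the most. The edge detector and the synthetic
# frames below are simplified stand-ins for the paper's procedure.
import numpy as np

def edges(img, thresh=0.25):
    """Very simple edge map: gradient magnitude above a threshold."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def best_offset(color_frames, depth_frames, search=3):
    """Offset of the depth sequence (relative to the color sequence) that maximizes
    the number of overlapping edge pixels, searched over [-search, search]."""
    scores = {}
    for off in range(-search, search + 1):
        total = 0
        for i, c in enumerate(color_frames):
            j = i + off
            if 0 <= j < len(depth_frames):
                total += np.logical_and(edges(c), edges(depth_frames[j])).sum()
        scores[off] = total
    return max(scores, key=scores.get)

def frame(pos):
    """A 32x32 frame containing a bright square whose left edge is at column `pos`."""
    f = np.zeros((32, 32))
    f[10:20, pos:pos + 10] = 1.0
    return f

# The depth sequence shows the same motion as the color sequence, delayed by two frames.
color = [frame(5 + t) for t in range(8)]
depth = [frame(5 + t - 2) for t in range(8)]
print(best_offset(color, depth))   # 2: depth frame i+2 corresponds to color frame i
```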
  • Min-Young NA, Tae-Young KIM
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 9 Pages 2170-2173
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we present a real-time hand pose recognition method to provide an intuitive user interface through hand poses or gestures without a keyboard and mouse. For this purpose, the areas of the right and left hands are segmented from the depth camera image, and noise compensation is performed. Then, the rotation angle and the centroid point of each hand area are calculated. Subsequently, the joint points and end points of the fingers are detected by expanding circles at regular intervals from the centroid point of the hand. Lastly, the hand pose is recognized by matching the current hand information against the hand model of the previous frame, and the hand model is updated for the next frame. This method enables prediction of hidden fingers through the hand model information of the previous frame, using temporal coherence in consecutive frames. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 95% and the performance exceeded 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
    Download PDF (1944K)
  • Yuan HU, Wei LIU
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 9 Pages 2174-2176
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we present an approach for 3D face recognition based on Multi-level Partition of Unity (MPU) Implicits under pose and expression variations. The MPU Implicits are used to reconstruct the 3D face surface in a hierarchical way. Three landmarks, the nose, left eyehole and right eyehole, can be detected automatically by analyzing curvature features at lower levels of the reconstructed face. The 3D faces are thus initially registered to a common coordinate system based on the three landmarks. A variant of the Iterative Closest Point (ICP) algorithm is proposed for matching the point surface of a given probe face to the implicit face surfaces in the gallery. To evaluate the performance of our approach for 3D face recognition, we perform an experiment on the GavabDB face database. The results show that our method, based on MPU Implicits and adaptive ICP, has great capability for 3D face recognition under pose and expression variations.
    Download PDF (555K)
  • Lihua GUO
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 9 Pages 2177-2181
    Published: September 01, 2013
    Released on J-STAGE: September 01, 2013
    JOURNAL FREE ACCESS
    In image classification applications, a test sample with multiple hand-crafted descriptions can be sparsely represented by a few training subjects. Our paper is motivated by the success of multi-task joint sparse representation (MTJSR) and considers that the different feature modalities not only obey the constraint of joint sparsity across different tasks, but also obey the constraint of local manifold structure across different features. We introduce the constraint of local manifold structure into the MTJSR framework and propose the locality-constrained multi-task joint sparse representation method (LC-MTJSR). During the optimization of the formulated objective, the stochastic gradient descent method is used to guarantee a fast convergence rate, which is essential for large-scale image categorization. Experiments on several challenging object classification datasets show that the proposed algorithm is better than MTJSR and is competitive with state-of-the-art multiple kernel learning methods.
    Download PDF (415K)