IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E93.D, Issue 7
Special Section on Machine Vision and its Applications
  • Ken-ichi MAEDA
    2010 Volume E93.D Issue 7 Pages 1669
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Download PDF (58K)
  • Tomoyuki SHIBATA, Toshikazu WADA
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1670-1681
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper presents a novel algorithm for the Nearest Neighbor (NN) classifier. NN classification is a well-known pattern classification method with the following properties: it performs maximum-margin classification and achieves less than twice the ideal Bayesian error; it requires no knowledge of pattern distributions, kernel functions, or base classifiers; and it naturally extends to multiclass classification problems. Its drawbacks are A) inefficient memory use and B) slow classification. This paper addresses problems A and B. In most cases, an NN search algorithm, such as a k-d tree, is employed as the pattern search engine of the NN classifier. However, NN classification does not always require an NN search. Based on this idea, we propose a novel algorithm named the k-d decision tree (KDDT). Since KDDT uses Voronoi-condensed prototypes, it consumes less memory than naive NN classifiers. Through a comparative experiment, we confirmed that KDDT is much faster (9 to 369 times) than an NN search-based classifier. Furthermore, to extend the applicability of KDDT to high-dimensional NN classification, we modified it to use Gabriel editing or RNG editing instead of Voronoi condensing. Experiments on simulated and real data confirm that the modified KDDT algorithms are superior to the original.
    Download PDF (1668K)
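    The NN-search baseline that KDDT is compared against can be illustrated with a k-d tree. Below is a minimal Python sketch of 1-NN classification over stored prototypes using SciPy's cKDTree; the KDDT structure itself is not reproduced, and the toy two-class data is illustrative only:

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      # Toy two-class prototype set in 2-D: class 0 around (0,0), class 1 around (3,3).
      X_train = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 100, axis=0)
      y_train = np.repeat([0, 1], 100)

      tree = cKDTree(X_train)               # build the search structure once
      X_test = rng.normal(size=(5, 2)) + 3.0

      # 1-NN classification: each query takes the label of its nearest prototype.
      _, idx = tree.query(X_test, k=1)
      print(y_train[idx])                   # predicted labels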
  • Norimichi UKITA, Akira MAKINO, Masatsugu KIDODE
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1682-1689
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In this research, we focus on how to track a target region that lies next to similar regions (e.g., a forearm next to an upper arm) in zoom-in images. Many previous tracking methods express the target region (i.e., a part of a human body) with a single model such as an ellipse, a rectangle, or a deformable closed region. With a single model, however, it is difficult to track the target region in zoom-in images without confusing it with neighboring similar regions (e.g., a forearm and an upper arm, or a small region in a torso and its neighbors), because they may have the same texture patterns and no detectable border between them. In our method, a group of feature points in a target region is extracted and tracked as the model of the target. Small differences between neighboring regions can be verified by focusing only on the feature points. In addition, (1) the stability of tracking is improved using particle filtering, and (2) robustness to occlusions is achieved by removing unreliable points using random sampling. Experimental results demonstrate the effectiveness of our method even when occlusions occur.
    Download PDF (589K)
  • Gholamreza AKBARIZADEH, Gholam Ali REZAI-RAD, Shahriar BARADARAN SHOKO ...
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1690-1699
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    A new segmentation method for Synthetic Aperture Radar (SAR) images using skewness wavelet energy is presented. Skewness is the third-order cumulant, which measures local texture along a region-based active contour. Nonlinear intensity inhomogeneity often occurs in SAR images due to speckle noise. In this paper we propose a region-based active contour model that is able to use intensity information in local regions and to cope with the speckle noise and nonlinear intensity inhomogeneity of SAR images. We use the energy distribution of wavelet coefficients to analyze the SAR image texture in each sub-band. A fitting energy, called the skewness wavelet energy, is defined in terms of a contour and a functional, so that regions and their interfaces are modeled by level set functions. A functional on these level sets is formulated in terms of the third-order cumulant, from which an energy minimization is derived. Minimizing this functional yields the optimal segmentation based on the texture definitions. Results of the implemented algorithm on test images from Radarsat SAR images of agricultural and urban regions show the desirable performance of the proposed method.
    Download PDF (4558K)
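    The texture statistic underlying the method above can be computed per wavelet sub-band. A sketch assuming the PyWavelets package; the wavelet ('db2'), the decomposition depth, and the random stand-in patch are arbitrary choices, not those of the paper:

      import numpy as np
      import pywt
      from scipy.stats import skew

      patch = np.random.rand(128, 128)          # stand-in for a SAR image patch
      coeffs = pywt.wavedec2(patch, 'db2', level=2)
      # coeffs[0] is the coarsest approximation; the remaining tuples are
      # (cH, cV, cD) detail bands, coarsest first.
      for k, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
          for name, band in zip('HVD', (cH, cV, cD)):
              # Third-order statistic (skewness) of each sub-band's coefficients.
              print(k, name, skew(band.ravel()))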
  • Kentaro YOKOI
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1700-1707
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add robustness against background movements, including foreground intrusions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in environments with illumination changes and background movements using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.
    Download PDF (1297K)
  • Zhuo YANG, Sei-ichiro KAMATA
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1708-1715
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    The Polar Fourier Descriptor (PFD) and Spherical Fourier Descriptor (SFD) are rotation-invariant feature descriptors for two-dimensional (2D) and three-dimensional (3D) image retrieval and pattern recognition tasks. They have been demonstrated to be superior to other methods at describing rotation-invariant features of 2D and 3D images. To increase computation speed, however, a fast computation method is needed, especially for machine vision applications such as real-time systems, limited computing environments, and large image databases. This paper presents fast computation methods for PFD and SFD, deduced from mathematical properties of trigonometric functions and associated Legendre polynomials. The proposed fast PFD and SFD are 8 and 16 times faster, respectively, than direct calculation, which significantly accelerates the computation process. Furthermore, the proposed methods are compact in terms of the memory required to store the PFD and SFD bases in lookup tables. Experimental results on both synthetic and real data illustrate the efficiency of the proposed methods.
    Download PDF (1084K)
  • Shuijiong WU, Peilin LIU, Yiqing HUANG, Qin LIU, Takeshi IKENAGA
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1716-1726
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    An H.264/AVC encoder employs rate control to adaptively adjust the quantization parameter (QP) so that coded video can be transmitted over a constant bit-rate (CBR) channel. Here, bit allocation is crucial, since it directly determines actual bit generation and coding quality. Meanwhile, the rate-distortion-optimization (RDO) based mode-decision technique also strongly affects performance because of the close relationship among mode, bits, and quality. This paper presents a multi-stage rate control scheme for R-D optimized H.264/AVC encoders under CBR video transmission. To enhance the precision of complexity estimation and bit allocation, a frequency-domain parameter named the mean absolute transformed difference (MATD) is adopted to represent frame and macroblock (MB) residual complexity. The MATD ratio is then utilized to enhance the accuracy of frame-layer bit prediction. Next, by considering the bit usage of the whole sequence, a measurement combining forward and backward bit analysis is proposed to adjust the Lagrange multiplier λMODE at the frame layer, optimizing the mode decision for all MBs within the current frame. In the next stage, bits are allocated at the MB layer by the proposed remaining-complexity analysis, and the computed QP is further adjusted according to predicted MB texture bits. Simulation results show a PSNR improvement of up to 1.13dB with our algorithm, and the stress on output buffer control is also largely relieved compared with the recommended rate control in the H.264/AVC reference software JM13.2.
    Download PDF (1850K)
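    The MATD measure is a SATD-style frequency-domain residual statistic. Below is a sketch of one plausible block-wise computation with an 8x8 Hadamard transform; the paper's exact definition and normalization may differ:

      import numpy as np
      from scipy.linalg import hadamard

      def matd(cur, ref, n=8):
          """Mean absolute transformed difference between two frames,
          computed over n x n blocks with a Hadamard transform."""
          H = hadamard(n)
          total, count = 0.0, 0
          for i in range(0, cur.shape[0] - n + 1, n):
              for j in range(0, cur.shape[1] - n + 1, n):
                  d = cur[i:i+n, j:j+n].astype(float) - ref[i:i+n, j:j+n]
                  total += np.abs(H @ d @ H.T).sum()   # transform-domain residual
                  count += n * n
          return total / count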
  • Hideki NAKAYAMA, Tatsuya HARADA, Yasuo KUNIYOSHI
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1727-1736
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Generic image recognition techniques are widely studied for automatic image indexing. However, many of these methods are computationally too heavy for practically large setups. Thus, to realize scalability, it is important to properly balance the trade-off between performance and computational cost. In recent years, methods based on the bag-of-keypoints approach have been successful and widely used. However, the preprocessing cost of building visual words becomes immense for large-scale datasets. On the other hand, methods based on global image features have been used for a long time. Because global image features can be extracted rapidly, they are relatively easy to use with large datasets. However, the performance of global-feature methods is usually poor compared to bag-of-keypoints methods. This paper proposes a simple but powerful scheme for boosting the performance of global image features by densely sampling low-level statistical moments of local features. We also use a scalable learning and classification method that is substantially lighter than an SVM. Our method achieves performance comparable to state-of-the-art methods despite its remarkable simplicity.
    Download PDF (549K)
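    The global-descriptor idea above (densely sampled low-level moments) can be sketched as per-cell first and second moments of intensity and gradient magnitude on a coarse grid; the actual features and moments used in the paper may differ:

      import numpy as np

      def grid_moment_feature(img, grid=4):
          """Concatenate mean and variance (first/second moments) of intensity
          and gradient magnitude over a grid x grid partition of the image."""
          gy, gx = np.gradient(img.astype(float))
          mag = np.hypot(gx, gy)
          h, w = img.shape
          feats = []
          for ch in (img.astype(float), mag):
              for i in range(grid):
                  for j in range(grid):
                      cell = ch[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
                      feats += [cell.mean(), cell.var()]
          return np.asarray(feats)   # length 2 channels * 2 moments * grid**2 cells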
  • Shaopeng TANG, Satoshi GOTO
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1737-1744
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In this paper, we propose a novel feature named the histogram of template (HOT) for human detection in still images. For every pixel of an image, various templates are defined, each of which contains the pixel itself and two of its neighboring pixels. If the texture and gradient values of the three pixels satisfy a pre-defined formula, the central pixel is regarded as meeting the corresponding template for this formula. Histograms of pixels meeting the various templates are calculated for a set of formulas and combined into the feature used for detection. Compared to other features, the proposed feature takes texture as well as gradient information into consideration. Moreover, it reflects the relationship between three pixels, instead of focusing on only one. Human detection experiments performed on the INRIA dataset show that the proposed HOT feature is more discriminative than the histogram of oriented gradients (HOG) feature under the same training method.
    Download PDF (895K)
  • Il-Woong JEONG, Jin CHOI, Kyusung CHO, Yong-Ho SEO, Hyun Seung YANG
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1745-1753
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Detecting emergency situations is very important for a surveillance system for people such as the elderly who live alone. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and recognizing their actions from image sequences acquired by a single surveillance camera. To recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. A Motion History Image (MHI) for a tracked person is then constructed from the silhouette information of the region blobs, and actions are modeled. An emergency situation is finally detected by feeding this information to a neural network. When an emergency is detected, the mobile robot helps to diagnose the status of the person in the situation. To send the mobile robot to the proper position, we implement a navigation algorithm based on the distance between the person and the robot. We validate our system by reporting the emergency detection rate and by demonstrating an emergency response using the mobile robot.
    Download PDF (572K)
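    The Motion History Image used above is a standard construct: pixels of the current silhouette hold a maximal value that decays frame by frame. A minimal update rule (the decay constant tau is illustrative):

      import numpy as np

      def update_mhi(mhi, silhouette, tau=30):
          """Set pixels of the current silhouette to tau; decay all others
          by 1 per frame so older motion fades out."""
          return np.where(silhouette > 0, tau, np.maximum(mhi - 1, 0))

      # Usage: mhi = np.zeros(frame_shape); then per frame: mhi = update_mhi(mhi, sil)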
  • Koichiro ENOMOTO, Masashi TODA, Yasuhiro KUWAHARA
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1754-1760
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    The quantity and state of fishery resources must be known so that they can be sustained, and the fish culture industry likewise plans resource investigations. The results of such investigations are used to estimate the catch size, the timing of catches, and future stocks. We have developed a method for extracting scallop areas from gravel seabed images to assess fishery resources, and an automatic system that measures their quantities, sizes, and states. Japanese scallop farms are located on gravel and sand seabeds. Seabed images are used for fishery investigations, which otherwise rely on visual estimation, and help us avoid acoustic surveys. However, no automatic technology exists to measure the quantities, sizes, and states of the resources, so the current investigation technique is manual measurement by experts. Automating the technique poses several problems: the photographic environment is highly noisy, including large differences in lighting, and the images contain gravel, sand, clay, and debris. In the gravel field, scallop features such as colors, striped patterns, and fan-like shapes are visible. This paper describes our image extraction method, presents the results, and evaluates its effectiveness.
    Download PDF (1274K)
  • Dung-Nghi TRUONG CONG, Louahdi KHOUDOUR, Catherine ACHARD, Lounis DOUA ...
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1761-1772
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper presents an automatic system for detecting and re-identifying people moving between different sites with non-overlapping views. We first propose an automatic process for silhouette extraction based on the combination of an adaptive background subtraction algorithm and a motion detection module. Such a combination takes advantage of both approaches and is able to tackle the problems posed by particular environments. The silhouette extraction results are then clustered based on their spatial belonging and colorimetric characteristics in order to preserve only the key regions that effectively represent the appearance of a person. The next important step is to characterize the extracted silhouettes by appearance-based signatures. Our proposed descriptor, which includes both color and spatial features of objects, leads to satisfying results compared to other descriptors in the literature. Since the passage of a person needs to be characterized by multiple frames, a large quantity of data has to be processed. Thus, a graph-based algorithm is used to compare passages of people in front of cameras and to make the final re-identification decision. The global system is tested on two real and difficult data sets recorded in very different environments. The experimental results show that our proposed system leads to very satisfactory results.
    Download PDF (2030K)
  • Fan-Chieh CHENG, Shanq-Jang RUAN
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1773-1779
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Image contrast enhancement has become increasingly important because of the need for all vision-based systems to better reveal the visual information contained within an image. This motivates the design of a powerful and accurate automatic contrast enhancement method for digital images. Histogram equalization is the most commonly used contrast enhancement method. However, conventional histogram equalization usually results in excessive contrast enhancement, which gives the processed image an unnatural look and visual artifacts. In this paper, we propose a novel histogram equalization method using automatic histogram separation along with a piecewise transformation function. The contrast enhancement results of the proposed method are not only analyzed through qualitative visual inspection and quantitative accuracy, but are also compared with the results of other state-of-the-art methods.
    Download PDF (593K)
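    The separation-plus-piecewise-transform idea can be illustrated with a two-segment equalization in the style of BBHE; here the split point defaults to the image mean, whereas the paper computes the separation automatically:

      import numpy as np

      def bi_histogram_equalize(img, sep=None):
          """Split the gray range at `sep` and equalize each part within its
          own sub-range, which limits over-enhancement relative to plain HE."""
          img = img.astype(np.uint8)
          sep = int(img.mean()) if sep is None else sep
          out = np.empty_like(img)
          for lo, hi, mask in ((0, sep, img <= sep), (sep + 1, 255, img > sep)):
              vals = img[mask]
              if vals.size == 0:
                  continue
              hist = np.bincount(vals, minlength=256)[lo:hi + 1]
              cdf = np.cumsum(hist) / vals.size
              lut = (lo + cdf * (hi - lo)).astype(np.uint8)   # piecewise transform
              out[mask] = lut[vals - lo]
          return out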
  • Romain GALLEN, Nicolas HAUTIÈRE, Eric DUMONT
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1780-1787
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In this article, we propose a new way to estimate fog extinction at night with a camera. We also propose a method for classifying fog according to its forward scattering. We show that characterizing fog by the atmospheric extinction parameter alone is not sufficient, specifically from the perspective of adaptive lighting for road safety. The method has been validated on synthetic images generated with a semi-Monte-Carlo ray tracing software package dedicated to fog simulation, as well as in fog chamber experiments. We present the results and discuss the method, its potential applications, and its limits.
    Download PDF (794K)
  • Tomohiko OHTSUKA, Daisuke WATANABE
    Article type: PAPER
    2010 Volume E93.D Issue 7 Pages 1788-1797
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    The singular points of fingerprints, viz. core and delta, are important reference points for the classification of fingerprints. Several conventional approaches, such as the Poincaré index method, have been proposed; however, these approaches are not reliable on poor-quality fingerprints. This paper proposes a new core and delta detection method employing singular candidate analysis and an extended relational graph. Singular candidate analysis uses both the local and global features of ridge direction patterns and realizes high tolerance to local image noise; it extracts locations with a high probability of containing a singular point. Experimental results using the fingerprint image databases FVC2000 and FVC2002, which include several poor-quality images, show that the success rate of the proposed approach is 10% higher than that of the Poincaré index method for singularity detection, although the average computation time is 15%-30% greater.
    Download PDF (1956K)
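    The Poincaré index baseline referenced above sums orientation changes around a closed neighborhood; roughly +pi indicates a core and -pi a delta. A sketch over a block-wise orientation field (angles defined modulo pi):

      import numpy as np

      def poincare_index(theta, i, j):
          """Sum wrapped orientation differences around the 8-neighborhood
          of block (i, j); ~ +pi at a core, ~ -pi at a delta, ~ 0 elsewhere."""
          ring = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
          angles = [theta[i+di, j+dj] for di, dj in ring]
          total = 0.0
          for a, b in zip(angles, angles[1:] + angles[:1]):
              d = b - a
              while d > np.pi / 2:   d -= np.pi   # orientations live modulo pi
              while d <= -np.pi / 2: d += np.pi
              total += d
          return total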
Regular Section
  • Hiroyuki GOTO
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 7 Pages 1798-1806
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This research addresses a high-speed computation method for the Kleene star of the weighted adjacency matrix in a max-plus algebraic system. We focus on systems whose precedence constraints are represented by a directed acyclic graph and implement the method on a Cell Broadband Engine™ (CBE) processor. Since the resulting matrix gives the longest travel times between nodes, it is often utilized in scheduling problem solvers for a class of discrete event systems. This research, in particular, attempts to achieve a speedup through two approaches, parallelization and SIMDization (Single Instruction, Multiple Data), both of which a CBE processor supports. The former refers to parallel computation using multiple cores, while the latter is a method whereby multiple elements are computed by a single instruction. Using an implementation on a Sony PlayStation 3™ equipped with a CBE processor, we found that SIMDization is effective regardless of the system's size and the number of processor cores used. We also found that the scalability of using multiple cores is remarkable, especially for systems with a large number of nodes. In a numerical experiment with 2000 nodes, we achieved a 20-fold speedup compared with an implementation without these techniques.
    Download PDF (1019K)
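    The Kleene star in question replaces sum/product with max/plus. A plain NumPy sketch (without the parallelization and SIMDization that are the paper's contribution): repeated max-plus squaring of I (+) A converges to A* for a DAG and yields longest-path weights:

      import numpy as np

      def maxplus_prod(A, B):
          """Max-plus matrix product: C[i, j] = max_k (A[i, k] + B[k, j])."""
          return (A[:, :, None] + B[None, :, :]).max(axis=1)

      def kleene_star(A):
          """A* = I (+) A (+) A^2 (+) ..., with (+) = elementwise max.
          For an acyclic weight matrix (missing edges = -inf) this converges
          and gives the longest travel time between every pair of nodes."""
          n = A.shape[0]
          S = np.full((n, n), -np.inf)
          np.fill_diagonal(S, 0.0)          # max-plus identity matrix
          S = np.maximum(S, A)              # I (+) A
          k = 1
          while k < n:                      # repeated squaring: O(log n) products
              S = maxplus_prod(S, S)
              k *= 2
          return S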
  • Chih-Sheng CHEN, Shen-Yi LIN, Min-Hsuan FAN, Chua-Huang HUANG
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 7 Pages 1807-1815
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We develop a novel construction method for n-dimensional Hilbert space-filling curves. The construction method includes four steps: block allocation, Gray permutation, coordinate transformation, and recursive construction. We use tensor product theory to formulate the method. An n-dimensional Hilbert space-filling curve of 2^r elements on each dimension is specified as a permutation which rearranges 2^(rn) data elements stored in the row major order as in C language or the column major order as in FORTRAN language to the order of traversing an n-dimensional Hilbert space-filling curve. The tensor product formulation of n-dimensional Hilbert space-filling curves uses stride permutation, reverse permutation, and Gray permutation. We present both recursive and iterative tensor product formulas of n-dimensional Hilbert space-filling curves. The tensor product formulas are directly translated into computer programs which can be used in various applications. The process of program generation is explained in the paper.
    Download PDF (214K)
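    The Gray permutation step can be made concrete: the binary-reflected Gray code reorders indices so that consecutive codes differ in exactly one bit, which is what keeps consecutive cells of the curve spatially adjacent. A two-line illustration:

      def gray(i):
          # Binary-reflected Gray code of index i.
          return i ^ (i >> 1)

      # 2-bit Gray sequence: consecutive codes differ in exactly one bit.
      print([format(gray(i), '02b') for i in range(4)])   # ['00', '01', '11', '10']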
  • Toshiki SAITOH, Katsuhisa YAMANAKA, Masashi KIYOMI, Ryuhei UEHARA
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 7 Pages 1816-1823
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We investigate connected proper interval graphs without vertex labels. We first give the number of connected proper interval graphs on n vertices. Using this result, we present a simple algorithm that generates a connected proper interval graph uniformly at random, up to isomorphism. Finally, an enumeration algorithm for connected proper interval graphs is proposed. The algorithm is based on reverse search, and it outputs each connected proper interval graph in O(1) time.
    Download PDF (284K)
  • Yung-Kuei LU, Ming-Der SHIEH
    Article type: PAPER
    Subject area: Computer System
    2010 Volume E93.D Issue 7 Pages 1824-1831
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper presents a high-speed, low-complexity VLSI architecture based on the modified Euclidean (ME) algorithm for Reed-Solomon decoders. The low-complexity feature of the proposed architecture is obtained by reformulating the error locator and error evaluator polynomials to remove redundant information in the ME algorithm proposed by Truong. This increases the hardware utilization of the processing elements used to solve the key equation and reduces hardware by 30.4%. The proposed architecture retains the high-speed feature of Truong's ME algorithm with a reduced latency, achieved by changing the initial settings of the design. Analytical results show that the proposed architecture has the smallest critical path delay, latency, and area-time complexity in comparison with similar studies. An example RS(255, 239) decoder design, implemented using the TSMC 0.18µm process, can reach a throughput rate of 3Gbps at an operating frequency of 375MHz with a total gate count of 27,271.
    Download PDF (776K)
  • Jeong-Hoon LEE, Kyu-Young WHANG, Hyo-Sang LIM, Byung SUK LEE, Jun-Seok ...
    Article type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 7 Pages 1832-1847
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In this paper, we study the problem of processing continuous range queries in a hierarchical wireless sensor network. Recently, as the size of sensor networks has increased with the growth of ubiquitous computing environments and wireless networks, building wireless sensor networks in a hierarchical configuration has been put forth as a practical approach. In contrast to the traditional approach of building networks in a “flat” structure using sensor devices of the same capability, the hierarchical approach deploys higher-capability devices in a higher tier, i.e., a tier closer to the server. While query processing in flat sensor networks has been widely studied, query processing in hierarchical sensor networks has received little attention. In wireless sensor networks, the main costs that should be considered are the energy for sending data and the storage for storing queries, and there is a trade-off between these two costs. Based on this, we first propose a progressive processing method that effectively processes a large number of continuous range queries in hierarchical sensor networks. The proposed method uses the query merging technique proposed by Xiang et al. as its basis and considers the trade-off between the two costs. More specifically, it works toward reducing the storage cost at lower-tier nodes by merging more queries, and toward reducing the energy cost at higher-tier nodes by merging fewer queries (thereby reducing “false alarms”). We then present how to build a hierarchical sensor network that is optimal with respect to the weighted sum of the two costs. This allows for a cost-based systematic control of the trade-off based on the relative importance between storage and energy in a given network environment and application. Experimental results show that the proposed method achieves a near-optimal control between storage and energy and reduces the cost by 1.002 - 3.210 times compared with the cost achieved using the flat (i.e., non-hierarchical) setup as in the work by Xiang et al.
    Download PDF (2171K)
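    The storage/energy trade-off above hinges on how aggressively range queries are merged. A one-dimensional sketch: merging more ranges stores fewer entries but widens them (more false alarms); the `gap` parameter is an illustrative knob, not taken from the paper:

      def merge_ranges(queries, gap=0.0):
          """Merge 1-D ranges that overlap or lie within `gap` of each other."""
          merged = []
          for lo, hi in sorted(queries):
              if merged and lo <= merged[-1][1] + gap:
                  merged[-1][1] = max(merged[-1][1], hi)   # extend previous range
              else:
                  merged.append([lo, hi])
          return [tuple(r) for r in merged]

      print(merge_ranges([(10, 20), (15, 30), (40, 50)]))  # [(10, 30), (40, 50)]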
  • Meng GE, Kwok-Yan LAM, Jianbin LI, Siu-Leung CHUNG
    Article type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1848-1856
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Wireless ad hoc networks are one of the most suitable platforms for providing communication services to mobile applications in public areas where no fixed communication infrastructure exists. However, due to the open nature of wireless links and the lack of a security infrastructure in an ad hoc network environment, applications operating on ad hoc network platforms are subject to non-trivial security challenges. Asymmetric key management, which is widely adopted as an effective basis for security services in an open network environment, typically plays a crucial role in meeting the security requirements of such applications. In this paper, we propose a secure asymmetric key management scheme, the Ubiquitous and Secure Certificate Service (USCS), which is based on a variant of the Distributed Certificate Authority (DCA): the Fully Distributed Certificate Authority (FDCA). Similar to FDCA, USCS relies on 1-hop neighbors which hold shares of the DCA's private signature key and can collaborate to issue certificates, thereby providing the asymmetric key management service. Both USCS and FDCA aim to achieve higher availability than the basic DCA scheme; however, USCS is more secure than FDCA in that the former achieves high availability by distributing existing shares to new members, rather than generating new shares as FDCA does. In order to realize the high-availability potential of USCS, a share selection algorithm is also proposed. Experimental results demonstrate that USCS is a more secure realization of the DCA scheme: it achieves stronger security than FDCA while attaining similarly high availability. Experiments also show that USCS incurs only moderate communication overheads.
    Download PDF (434K)
  • Hiroshi IWATA, Satoshi OHTAKE, Hideo FUJIWARA
    Article type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1857-1865
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Information on false paths in a circuit is useful for design and testing. The use of this information may contribute not only to reducing circuit area and the time required for logic synthesis, test generation, and test application, but also to alleviating over-testing. Since identification of false paths at the gate level is hard, several methods using high-level design information have been proposed. These methods are effective only if the correspondence between paths at the register transfer level (RTL) and at the gate level can be established. Until now, restricting logic synthesis has been the only way to establish this correspondence; however, this is not practical for industrial designs. In this paper, we propose a method for mapping RTL false paths to their corresponding gate-level paths without such a specific logic synthesis; it guarantees that the corresponding gate-level paths are false. Experimental results show that our path mapping method can establish the correspondence between RTL false paths and many gate-level false paths.
    Download PDF (349K)
  • Shi-Cho CHA
    Article type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1866-1877
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This work presents novel technical and legal approaches that address privacy concerns for personal data in RFID systems. In recent years, to minimize the conflict between convenience and the privacy risk of RFID systems, organizations have been requested to disclose their policies regarding RFID activities, obtain customer consent, and adopt appropriate mechanisms to enforce these policies. However, current research on RFID typically focuses on enforcement mechanisms that protect personal data stored in RFID tags and prevent organizations from tracking user activity through information emitted by specific RFID tags. A missing piece is how organizations can obtain customers' consent efficiently and flexibly. This study recommends that organizations obtain licenses automatically or semi-automatically before collecting personal data via RFID technologies, rather than dealing with written consent. Such digitized and standardized licenses can be checked automatically to ensure that the collection and use of personal data is based on user consent. Because individuals can easily control who holds licenses and what those licenses contain, the proposed framework provides an efficient and flexible way to overcome the deficiencies of current privacy protection technologies for RFID systems.
    Download PDF (802K)
  • Osama OUDA, Norimichi TSUMURA, Toshiya NAKAGUCHI
    Article type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1878-1888
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Despite their usability advantages over traditional authentication systems, biometrics-based authentication systems suffer from inherent privacy violation and non-revocability issues. To address these issues, the concept of cancelable biometrics was introduced as a means of generating multiple, revocable, and noninvertible identities from true biometric templates. Apart from BioHashing, a two-factor cancelable biometrics technique based on mixing a set of tokenized user-specific random numbers with biometric features, cancelable biometrics techniques usually cannot preserve the recognition accuracy achieved with unprotected biometric systems. However, because the employed token can be lost, shared, or stolen, BioHashing suffers from the same issues associated with token-based authentication systems. In this paper, a reliable tokenless cancelable biometrics scheme for protecting IrisCodes, referred to as BioEncoding, is presented. Unlike BioHashing, BioEncoding can be used as a one-factor authentication scheme that relies solely on IrisCodes. A unique noninvertible compact bit-string, referred to as a BioCode, is randomly derived from a true IrisCode. Rather than the true IrisCode, the derived BioCode can be used efficiently to verify the user's identity without degrading the recognition accuracy obtained with the original IrisCodes. Additionally, BioEncoding satisfies all the requirements of the cancelable biometrics construct. The performance of BioEncoding is compared with that of BioHashing in the stolen-token scenario, and the experimental results show the superiority of the proposed method over BioHashing-based techniques.
    Download PDF (1934K)
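    A hedged sketch of the one-factor idea: derive a compact bit-string from a binary IrisCode by passing disjoint bit blocks through a seeded random Boolean table, so that reissuing a credential only requires a new seed. The block size and mapping here are illustrative, not the published BioEncoding construction:

      import numpy as np

      def bioencode(iris_code, block=8, seed=7):
          """Map each `block`-bit chunk of a binary code through a random
          Boolean function; the output is compact and hard to invert."""
          rng = np.random.default_rng(seed)
          table = rng.integers(0, 2, size=2**block)     # random Boolean function
          chunks = iris_code.reshape(-1, block)
          idx = chunks @ (1 << np.arange(block))        # chunk bits -> table index
          return table[idx].astype(np.uint8)            # one output bit per chunk

      code = np.random.randint(0, 2, 2048)              # stand-in IrisCode
      print(bioencode(code).shape)                      # (256,)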
  • Jaehoon KIM, Seog PARK
    Article type: PAPER
    Subject area: Dependable Computing
    2010 Volume E93.D Issue 7 Pages 1889-1899
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Recently, a graph labeling technique based on prime numbers has been suggested for reducing the costly transitive closure computations in RDF query languages. Prime number graph labeling provides fast query processing through a simple divisibility test on labels. However, it has an inherent problem that originates from the nature of prime numbers: since each prime number must be used exclusively, labels can become extremely large. In this paper, we therefore introduce a novel optimization technique to effectively reduce this label overflow problem. The idea is based on graph decomposition: when label overflow occurs, the full graph is divided into several sub-graphs, and the nodes in each sub-graph are labeled separately. Through experiments, we also analyze the effectiveness of the graph decomposition optimization, which is evaluated by the number of divisions.
    Download PDF (1147K)
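    The prime labeling scheme and its overflow problem can be seen in a few lines: give each node a fresh prime times the product of its parents' labels, so the ancestor test is a single divisibility check, and watch the labels' bit-lengths grow. The node names and toy graph are illustrative:

      PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

      def label_dag(topo_order, parents):
          """Assign label(v) = prime(v) * product of parent labels,
          processing nodes in topological order."""
          label = {}
          for k, v in enumerate(topo_order):
              label[v] = PRIMES[k]
              for p in parents.get(v, []):
                  label[v] *= label[p]
          return label

      def is_ancestor(label, a, b):
          # a reaches b iff label(a) divides label(b).
          return label[b] % label[a] == 0

      lab = label_dag(['r', 'x', 'y'], {'x': ['r'], 'y': ['x']})
      print(is_ancestor(lab, 'r', 'y'), lab['y'].bit_length())  # True, and labels keep growing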
  • Chen-Sung CHANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 7 Pages 1900-1908
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper applies the Evolutionary Programming (EP) algorithm and a risk assessment technique to obtain an optimal solution to the Unit Maintenance Scheduling Decision (UMSD) problem subject to economic cost and power security constraints. The proposed approach employs a risk assessment model to evaluate the security of the power supply system and uses the EP algorithm to establish the optimal unit maintenance schedule. The effectiveness of the proposed methodology is verified through testing on the IEEE Reliability Test System (RTS). The test results confirm that the proposed approach can ensure system security and outperforms existing deterministic and stochastic optimization methods both in the quality of the solution and in the computational effort required. Therefore, the proposed methodology represents a particularly effective technique for the UMSD problem.
    Download PDF (1046K)
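    A bare-bones Evolutionary Programming loop, for orientation only: Gaussian mutation of every parent followed by truncation selection over parents plus offspring. The UMSD formulation layers cost and security constraints onto such a loop; the quadratic test fitness below is purely illustrative:

      import numpy as np

      def evolutionary_programming(fitness, dim, pop=30, gens=200, sigma=0.3):
          """Minimize `fitness`: mutate each parent, then keep the best `pop`
          of parents + children (a (mu + mu) EP scheme)."""
          rng = np.random.default_rng(1)
          P = rng.random((pop, dim))
          for _ in range(gens):
              children = P + sigma * rng.normal(size=P.shape)   # Gaussian mutation
              both = np.vstack([P, children])
              scores = np.array([fitness(x) for x in both])
              P = both[np.argsort(scores)[:pop]]                # truncation selection
          return P[0]

      best = evolutionary_programming(lambda x: ((x - 0.5) ** 2).sum(), dim=8)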
  • Keigo NAKAMURA, Tomoki TODA, Hiroshi SARUWATARI, Kiyohiro SHIKANO
    Article type: PAPER
    Subject area: Rehabilitation Engineering and Assistive Technology
    2010 Volume E93.D Issue 7 Pages 1909-1917
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We have previously proposed a speaking-aid system for laryngectomees using a statistical voice conversion technique. In the proposed system, artificial speech articulated with extremely small sound source signals is detected with a Non-Audible Murmur (NAM) microphone, and the detected artificial speech is then converted into a more natural voice in a probabilistic manner. Although this system basically allows laryngectomees to speak while keeping the external source signals silent, it remains an open question how much these new sound source signals affect the converted speech quality. In this paper, we investigate the impact of various sound source signals on voice conversion accuracy. Various small sound source signals are designed by changing the spectral envelope and the waveform power independently, and we conduct objective and subjective evaluations. The results demonstrate that voice conversion tolerates 1) various sound source signals with different spectral envelopes and 2) a wide range of sound source power, unless the power of the speaking parts is almost equal to that of the silent parts. Moreover, we investigate the effectiveness of enhancing auditory feedback while speaking with the extremely small sound source signals.
    Download PDF (537K)
  • Wei TANG, Dongju LI, Tsuyoshi ISSHIKI, Hiroaki KUNIEDA
    Article type: PAPER
    Subject area: Pattern Recognition
    2010 Volume E93.D Issue 7 Pages 1918-1926
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Orientation field (OF) estimation is a fundamental process in fingerprint authentication systems. In this paper, a novel binary-pattern-based low-cost OF estimation algorithm is proposed. The new method consists of two modules. The first is block-level orientation estimation and averaging in vector space based on pixel-level orientation statistics. The second is orientation quantization and smoothing, in which the continuous orientation is quantized into fixed orientations with sufficient resolution (interval between fixed orientations), and an effective smoothing scheme on the quantized orientation space is applied. The proposed algorithm can stably process poor-quality fingerprint images and is validated by tests conducted with an adaptive OF matching scheme. The algorithm is also implemented in a fingerprint System on Chip (SoC) to confirm that it satisfies the strict requirements of embedded systems.
    Download PDF (664K)
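    For reference, below is the standard gradient-based block orientation estimate that low-cost methods like the one above approximate; the block size is illustrative:

      import numpy as np

      def block_orientation(img, bs=16):
          """Least-squares block orientation: theta = 0.5 * atan2(2*Gxy, Gxx - Gyy),
          computed from squared/cross gradient sums per bs x bs block."""
          gy, gx = np.gradient(img.astype(float))
          h, w = img.shape
          theta = np.zeros((h // bs, w // bs))
          for i in range(h // bs):
              for j in range(w // bs):
                  sx = gx[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
                  sy = gy[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
                  gxx, gyy, gxy = (sx*sx).sum(), (sy*sy).sum(), (sx*sy).sum()
                  theta[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
          return theta   # gradient orientation; ridge direction is theta + pi/2 (mod pi)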
  • Seong-Jun HAHM, Yuichi OHKAWA, Masashi ITO, Motoyuki SUZUKI, Akinori I ...
    Article type: PAPER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 7 Pages 1927-1935
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We propose an improved reference speaker weighting (RSW) and speaker cluster weighting (SCW) approach that uses an aspect model. The idea is that the adapted model is a linear combination of a few latent reference models obtained from a set of reference speakers. The aspect model has specific latent-space characteristics that differ from the orthogonal basis vectors of eigenvoice; it is a “mixture-of-mixtures” model. We first calculate a small number of latent reference models as mixtures of the distributions of the reference speakers' models, and the latent reference models are then mixed to obtain the adapted distribution. The mixture weights are calculated with the expectation-maximization (EM) algorithm and used to interpolate the mean parameters of the distributions. Both training and adaptation are performed by likelihood maximization with respect to the training and adaptation data, respectively. We conduct a continuous speech recognition experiment using a Korean database (KAIST-TRADE). The results are compared to those of conventional MAP, MLLR, RSW, eigenvoice, and SCW. An absolute word accuracy improvement of 2.06 points was achieved with the proposed method, even though only 0.3 s of adaptation data were used.
    Download PDF (515K)
  • ChangCheng WU, ChunYu ZHAO, DaYue CHEN
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 7 Pages 1936-1943
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    A novel filter is introduced in this paper to improve the ability of the radiometric-based method to suppress impulse noise. First, a new method is introduced to design an impulsive weight that measures how impulsive a pixel is. Then, the impulsive weight is combined with the radiometric weight to obtain the evaluated value of each pixel in the corrupted image. The impulsive weight is mainly designed to suppress impulse noise, while the radiometric weight is mainly designed to protect noise-free pixels. Extensive experiments demonstrate that the proposed algorithm performs much better than other filters in both quantitative and qualitative terms.
    Download PDF (3722K)
  • Giseok CHOE, Jongho NANG
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 7 Pages 1944-1956
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. For the collaboration to be effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on a tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images into a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, so that the tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency based algorithm. It also proposes a mechanism that provides remote participants with several virtual cameras on the tiled-display system by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and that the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real time.
    Download PDF (10294K)
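    The capturing-time-based selection strategy can be sketched simply: from each tile's buffer of timestamped frames, pick the frame closest to a common target instant (the earliest "newest frame" across buffers), so tiles captured together are merged together. The buffer layout here is an assumption, not the paper's data structure:

      def pick_synchronous(buffers):
          """buffers: one list per tile of (timestamp, frame) pairs, oldest first.
          Returns one frame per tile, aligned to a common capture instant."""
          target = min(buf[-1][0] for buf in buffers)   # newest instant every tile has reached
          return [min(buf, key=lambda tf: abs(tf[0] - target))[1] for buf in buffers]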
  • Rodrigo SANTAMARÍA, Roberto THERÓN
    Article type: PAPER
    Subject area: Computer Graphics
    2010 Volume E93.D Issue 7 Pages 1957-1964
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Hypergraphs drawn in the subset standard are useful for representing group relationships using topographic characteristics such as intersection, exclusion, and enclosure. However, they become cluttered when dealing with a moderately high number of nodes (more than 20) and large hyperedges (connecting more than 10 nodes, with three or more overlapping nodes). At this complexity level, a study of the visual encoding of hypergraphs is required to reduce clutter and increase the understanding of larger sets. Here we present a graph model and a visual design that help in the visualization of group relationships represented by hypergraphs. This is done by using superimposed visualization layers at different abstraction levels, aided by interaction and navigation through the display.
    Download PDF (3857K)
  • Yuyu LIU, Yoichi SATO
    Article type: PAPER
    Subject area: Multimedia Pattern Processing
    2010 Volume E93.D Issue 7 Pages 1965-1975
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing the quadratic mutual information between our audiovisual features, based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. Setting heuristic thresholds in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
    Download PDF (839K)
  • Chooi-Ling GOH, Taro WATANABE, Hirofumi YAMAMOTO, Eiichiro SUMITA
    Article type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 7 Pages 1976-1983
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We present a method to constrain a statistical generative word alignment model with the output from a discriminative model. The discriminative model is trained using a small set of hand-aligned data that ensures higher alignment precision, while the generative model improves alignment recall. By combining these two models, the alignment output becomes more suitable for developing a translation model for a phrase-based statistical machine translation (SMT) system. Our experimental results show that the joint alignment model improves translation performance, with average improvements in BLEU and METEOR scores of around 1.0-3.9 points.
    Download PDF (647K)
  • Hae Young LEE, Seung-Min PARK, Tae Ho CHO
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 7 Pages 1984-1986
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper presents an approach to implementing simulation models for SAM fuzzy controllers without the use of external components. The approach represents a fuzzy controller as a composition of simple simulation models which involve only basic operations.
    Download PDF (366K)
  • Yuanyuan ZHANG, Shijun LIN, Li SU, Depeng JIN, Lieguang ZENG
    Article type: LETTER
    Subject area: Computer System
    2010 Volume E93.D Issue 7 Pages 1987-1990
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Since the length of wires between different layers, even between the top and bottom layers, is acceptably small in a 3D mesh-based NoC (three-dimensional mesh-based Network on Chip), a structure in which an IP (Intellectual Property) core in one layer is directly connected to a proper router in another layer may efficiently decrease the average latency of messages and increase the maximum throughput. With this idea, we introduce a dual-port access structure in which each IP core, except those in the bottom layer, is connected to two routers in two adjacent layers; in particular, an IP core in the bottom layer can be directly connected to the proper router in the top layer. Furthermore, we derive a closed-form expression for the average number of hops of messages and give a quantitative analysis of the performance when the dual-port access structure is used. The analytical results reveal that the average number of hops is reduced and the system performance is improved, including decreased average latency and increased maximum throughput. Finally, simulation results confirm our theoretical analysis and show the advantage of the proposed dual-port access structure, with a relatively small increase in area overhead.
    Download PDF (152K)
  • Gile Narcisse FANZOU TCHUISSANG, Ning WANG, Nathalie Cindy KUICHEU, Fr ...
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 7 Pages 1991-1994
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This paper discusses the issues involved in the design of a complete information retrieval system for DataSpace based on user-relevance probabilistic schemes. First, an Information Hidden Model (IHM) is constructed taking into account the users' perception of similarity between documents. The system accumulates feedback from users and employs it to construct user-oriented clusters. IHM allows integrating uncertainty over multiple, interdependent classifications and collectively determines the most likely global assignment. Second, three different learning strategies are proposed, namely query-related UHH, UHB, and UHS (User Hidden Habit, User Hidden Background, and User Hidden keyword Semantics), to closely represent the user's mind. Finally, the probability ranking principle shows that optimum retrieval quality can be achieved under certain assumptions, and an optimization algorithm to improve the effectiveness of the probabilistic process is developed. We first predict the data sources where the query results could be found. Therefore, compared with existing approaches, our retrieval precision is better and does not depend on the size and heterogeneity of the DataSpace.
    Download PDF (195K)
  • Eun-Jun YOON, Kee-Young YOO
    Article type: LETTER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1995-1996
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In 2009, Jeong et al. proposed a new searchable encryption scheme with keyword recoverability which is secure even if the adversary has useful partial information about the keyword. They also proposed an extension of the scheme for multiple keywords. However, this paper demonstrates that Jeong et al.'s schemes are vulnerable to off-line keyword guessing attacks, in which an adversary (insider or outsider) can retrieve information about certain keywords from any captured query message of the scheme.
    Download PDF (63K)
  • Jun-Cheol PARK
    Article type: LETTER
    Subject area: Information Network
    2010 Volume E93.D Issue 7 Pages 1997-2000
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    User privacy preservation is critical for preventing many sophisticated attacks that are based on the user's server access patterns and ID-related information. We propose a password-based user authentication scheme that provides strong privacy protection using one-time credentials. It eliminates the possibility of tracing a user's authentication history and hides the user's ID and password even from servers. In addition, it is resistant to user impersonation even if both a server's verification database and a user's smart card storage are disclosed. We also provide a revocation scheme with which a user can promptly invalidate the user's credentials on a server when the user's smart card is compromised. The schemes use only lightweight operations such as hash computations and bitwise XORs.
    Download PDF (71K)
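    The one-time-credential idea built from hashes alone can be illustrated by a Lamport-style hash chain; this is a generic construction, not the paper's full protocol, which additionally hides IDs and supports revocation:

      import hashlib, os

      def h(x: bytes) -> bytes:
          return hashlib.sha256(x).digest()

      # Registration: the client keeps `seed`; the server stores the chain head h^n(seed).
      seed, n = os.urandom(32), 1000
      chain = [seed]
      for _ in range(n):
          chain.append(h(chain[-1]))

      server_state = chain[-1]                 # stored on the server
      credential = chain[-2]                   # revealed at the next login
      assert h(credential) == server_state     # server verifies with one hash
      server_state = credential                # credential is now spent: one-time use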
  • Young-Shin HAN, SoYoung KIM, TaeKyu KIM, Jason J. JUNG
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 7 Pages 2001-2004
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    We exploit a structural knowledge representation scheme called the System Entity Structure (SES) methodology to represent and manage wafer failure patterns, which can significantly affect FABs in the semiconductor industry. It is important for engineers to simulate various system verification processes using predefined system entities (e.g., decomposition, taxonomy, and coupling relationships of a system) contained in the SES. For better computational performance, given a certain failure pattern, a Pruned SES (PES) can be extracted by selecting only the relevant system entities from the SES. The SES-based simulation system therefore allows engineers to efficiently evaluate and monitor semiconductor data by i) analyzing failures to find the corresponding causes and ii) managing historical data related to such failures.
    Download PDF (350K)
  • Makoto SAKAI, Norihide KITAOKA, Kazuya TAKEDA
    Article type: LETTER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 7 Pages 2005-2008
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.
    Download PDF (133K)
  • Guoan YANG, Huub VAN DE WETERING, Ming HOU, Chihiro IKUTA, Yuehu LIU
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 7 Pages 2009-2011
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    This letter proposes a novel design approach for optimal contourlet filter banks based on the parametric 9/7 filter family. The Laplacian pyramid decomposition is replaced by optimal 9/7 filter banks with rational coefficients, and the directional filter banks use a pkva 12 filter in the contourlets. Moreover, based on this optimal 9/7 filter, we present an image denoising approach using a contourlet-domain hidden Markov tree model. Experimental results show that, in denoising images with texture detail, our approach is only 0.20dB below the method of Po and Do, with visual quality as good as theirs. Compared with the method of Po and Do, our approach has lower computational complexity and is more suitable for VLSI hardware implementation.
    Download PDF (217K)
  • Do QUAN, Yo-Sung HO
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 7 Pages 2012-2015
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    In this letter, we present a simple but efficient intra prediction mode decision method for H.264/AVC. Based on our investigation, the DC mode appears to be the superior prediction mode among the various candidates. We propose an intra-mode decision algorithm in which the DC mode is chosen as the candidate for the best prediction mode. Experimental results show that, on average, the proposed algorithm saves 81.905% of the entire encoding time compared to the H.264 reference software, with a negligible peak signal-to-noise ratio (PSNR) loss and a slight increase in bitrate.
    Download PDF (497K)
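    DC intra prediction itself is tiny, which is why favoring it saves so much mode-search time: every pixel of the block is predicted by the mean of the reconstructed neighbors above and to the left. An H.264-style 4x4 sketch (rounding and neighbor-availability details vary in the standard):

      import numpy as np

      def dc_predict(top, left):
          """Predict a 4x4 block as the rounded mean of its top and left
          reconstructed neighbor pixels."""
          dc = int(round((int(top.sum()) + int(left.sum())) / (len(top) + len(left))))
          return np.full((4, 4), dc, dtype=np.uint8)

      top  = np.array([100, 102, 101,  99], dtype=np.uint8)
      left = np.array([ 98, 100, 103, 101], dtype=np.uint8)
      print(dc_predict(top, left))   # constant block near 100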
  • Xiaokan WANG, Xia MAO, Catalin-Daniel CALEANU
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 7 Pages 2016-2019
    Published: July 01, 2010
    Released on J-STAGE: July 01, 2010
    JOURNAL FREE ACCESS
    To improve the nonlinear alignment performance of Active Appearance Models (AAM), we apply a variant of the nonlinear manifold learning algorithm Locally Linear Embedding to model the shape-texture manifold. Experiments show that our method maintains a lower alignment residual for small-scale movements than traditional AAM based on Principal Component Analysis (PCA), and successfully aligns large-scale motions where PCA-based AAM fails.
    Download PDF (1300K)
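    The manifold-modeling step above can be reproduced with scikit-learn's Locally Linear Embedding; the stacked shape-texture vectors and dimensions below are stand-ins, and the AAM fitting itself is not shown:

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding

      X = np.random.rand(300, 80)   # stand-in for stacked shape-texture vectors
      lle = LocallyLinearEmbedding(n_neighbors=10, n_components=5)
      Z = lle.fit_transform(X)      # nonlinear low-dimensional manifold coordinates
      print(Z.shape)                # (300, 5)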