IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E95.D , Issue 7
Special Section on Machine Vision and its Applications
  • Yasuyo KITA
    2012 Volume E95.D Issue 7 Pages 1721
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Download PDF (77K)
  • Michal KAWULOK, Jolanta KAWULOK, Bogdan SMOLKA
    Type: PAPER
    Subject area: Image Synthesis
    2012 Volume E95.D Issue 7 Pages 1722-1730
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Image colorization is a semi-automatic process of adding color to monochrome images and videos. With existing methods, the required human assistance can be limited to annotating the image with color scribbles or to selecting a reference image from which the colors are transferred to a source image or video sequence. In the work reported here, we explore how to exploit textural information to improve this process. For every scribbled image we determine a discriminative textural feature domain. The whole image is then projected onto this feature space, which makes it possible to estimate the textural similarity between any two pixels. For single-image colorization based on a set of color scribbles, our contribution lies in using the proposed feature space rather than the luminance channel. In the case of color transfer used for colorizing video sequences, the feature space is generated from a reference image, and textural similarity is used to match pixels between the reference and source images. Extensive experimental validation confirmed the importance of using textural information and demonstrated that our method significantly improves colorization results.
    Download PDF (4035K)
  • Takashi SHIBATA, Akihiko IKETANI, Shuji SENDA
    Type: PAPER
    Subject area: Image Synthesis
    2012 Volume E95.D Issue 7 Pages 1731-1739
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a novel inpainting method based on structure estimation. The method first estimates an initial image that captures the rough structure and colors in the missing region. This image is generated by probabilistically estimating the gradient within the missing region from the edge segments intersecting its boundary, and then flooding the colors on the boundary into the missing region. The color flooding is formulated as an energy minimization problem and is efficiently optimized by the conjugate gradient method. Finally, the inpainted image is synthesized by replacing the missing region with local patches similar to both the adjacent patches and the initial image. The initial image not only serves as a guide that ensures the underlying structure is preserved, but also allows the patch selection process to be carried out in a greedy manner, which leads to a substantial speedup. Experimental results show the proposed method preserves the underlying structure in the missing region while running more than five times faster than the state-of-the-art inpainting method. Subjective evaluation of image quality also shows the proposed method outperforms previous methods.
    Download PDF (2623K)
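The color-flooding step above, minimizing a quadratic energy with the conjugate gradient method, can be sketched as harmonic interpolation of boundary colors into the missing region. This is a minimal illustration under assumptions, not the authors' implementation: the paper's energy also incorporates the probabilistically estimated gradient field, which is omitted here, and the function names are made up for the sketch (grayscale only; a color image would be processed per channel).

```python
import numpy as np

def flood_colors(image, mask, iters=200):
    """Fill masked (missing) pixels by harmonic interpolation of boundary
    colors, i.e. minimize the sum of squared differences to 4-neighbors,
    solved with a plain conjugate gradient loop. Sketch only: the paper's
    energy also uses an estimated gradient field, omitted here."""
    h, w = image.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)

    def apply_A(x):
        # Graph Laplacian restricted to the missing region (matrix-free).
        out = np.zeros(n)
        for k, (y, xc) in enumerate(zip(ys, xs)):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, xc + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[k] += x[k]
                    if mask[ny, nx]:
                        out[k] -= x[idx[ny, nx]]
        return out

    # Right-hand side: contributions of known boundary colors.
    b = np.zeros(n)
    for k, (y, xc) in enumerate(zip(ys, xs)):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, xc + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                b[k] += image[ny, nx]

    # Standard conjugate gradient iteration on A x = b.
    x = np.zeros(n)
    r = b - apply_A(x)
    p = r.copy()
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = (r @ r) / (p @ Ap + 1e-12)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-8:
            break
        p = r_new + ((r_new @ r_new) / (r @ r + 1e-12)) * p
        r = r_new
    out = image.copy()
    out[ys, xs] = x
    return out
```

With a constant boundary, harmonic interpolation reproduces the constant, which makes the sketch easy to sanity-check.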
  • Frank PERBET, Björn STENGER, Atsuto MAKI
    Type: PAPER
    Subject area: Segmentation
    2012 Volume E95.D Issue 7 Pages 1740-1748
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a novel algorithm that generates homogeneous superpixels from Markov random walks. We adopt Markov clustering (MCL), a generic graph clustering method based on stochastic flow circulation. In particular, we introduce a graph pruning strategy called compact pruning in order to capture intrinsic local image structure. The resulting superpixels are homogeneous, i.e., uniform in size and compact in shape. The original MCL algorithm does not scale well to image graphs because circulating the flow requires squaring the Markov matrix. The proposed pruning scheme has the advantages of faster computation, a smaller memory footprint, and straightforward parallel implementation. Through comparisons with other recent techniques, we show that the proposed algorithm achieves state-of-the-art performance.
    Download PDF (15984K)
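The stochastic flow circulation underlying MCL can be sketched in a few lines: expansion spreads flow by a matrix power, inflation sharpens it by an elementwise power, and tiny entries are pruned. This is only the generic MCL loop under illustrative parameter values; the paper's compact pruning strategy for image graphs is not reproduced here.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, prune=1e-5, iters=50):
    """Markov clustering (MCL) sketch: alternate flow expansion (matrix
    power) and inflation (elementwise power + column renormalization),
    pruning tiny entries. Clusters are read off the converged matrix:
    the nonzero columns of each nonzero row form one cluster."""
    M = adjacency + np.eye(len(adjacency))          # add self-loops
    M = M / np.maximum(M.sum(axis=0), 1e-12)        # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)    # expansion
        M = M ** inflation                          # inflation
        M[M < prune] = 0.0                          # pruning
        M = M / np.maximum(M.sum(axis=0), 1e-12)    # renormalize
    clusters = {frozenset(np.nonzero(row)[0]) for row in M if row.any()}
    return [sorted(c) for c in clusters]
```

On a graph of two disconnected triangles, the flow separates into one attractor region per component, recovering the two components as clusters.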
  • Jian ZHANG, Sei-ichiro KAMATA
    Type: PAPER
    Subject area: Segmentation
    2012 Volume E95.D Issue 7 Pages 1749-1757
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    With the wide use of multispectral images, a fast and efficient multidimensional clustering method becomes not only meaningful but also necessary. In general, to speed up the analysis of multidimensional images, a multidimensional feature vector should be transformed into a lower-dimensional space. The Hilbert curve is a continuous one-to-one mapping from N-dimensional space to one-dimensional space that preserves neighborhoods as much as possible. However, because the Hilbert curve is generated by a recursive division process, ‘boundary effects’ occur: data that are close in N-dimensional space may not be close in the one-dimensional Hilbert order. In this paper, a new efficient approach based on space-filling curves is proposed for classifying multispectral satellite images. To remove the boundary effects of the Hilbert curve, multiple Hilbert curves, Z curves, and the Pseudo-Hilbert curve are used jointly. The proposed method extracts category clusters from one-dimensional data without computing any distance in N-dimensional space. Furthermore, multispectral images can be analyzed hierarchically, from coarse to fine data distribution, in accordance with the application. Experimental results on LANDSAT data demonstrate that the proposed method manages multispectral images efficiently and can be applied easily.
    Download PDF (1936K)
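The one-dimensional ordering at the heart of such methods relies on the Hilbert index-to-coordinate mapping, which for a 2-D grid can be computed with the classic bit-manipulation recurrence below. This is the textbook 2-D construction, shown to illustrate the neighborhood-preserving property; it is not the paper's Pseudo-Hilbert curve for arbitrary grid sizes and N dimensions.

```python
def hilbert_d2xy(order, d):
    """Map a 1-D Hilbert index d to (x, y) on a 2^order x 2^order grid
    using the standard bit-manipulation construction. Consecutive
    indices always map to 4-neighboring cells, which is exactly the
    locality the clustering method exploits."""
    x = y = 0
    s = 1
    t = d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate the quadrant so sub-curves connect end to end.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking the full index range visits every cell exactly once, and each step moves to a Manhattan-adjacent cell, which is the neighborhood-preservation property the abstract describes.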
  • Sebastien CALLIER, Hideo SAITO
    Type: PAPER
    Subject area: Segmentation
    2012 Volume E95.D Issue 7 Pages 1758-1765
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Raster maps are widely available in everyday life and can contain a huge amount of information of many kinds, conveyed by labels, pictograms, or color codes, for example. However, extracting roads from such maps is not easy, because these features overlap. In this paper, we focus on an automated method that extracts roads by detecting linear features to find seed points with a high probability of belonging to roads. These linear features are lines of pixels of homogeneous color in each direction around each pixel. The seeds are then expanded before deciding whether to keep or discard the extracted element. Because this method is not mainly based on color segmentation, it is also suitable for handwritten maps, for example. The experimental results demonstrate that in most cases our method gives results similar to those of usual methods without needing any prior data or user input, although it does need some knowledge of the target maps; it also works with handwritten maps drawn following some basic rules, whereas usual methods fail.
    Download PDF (2696K)
  • Shoichi SHIMIZU, Hironobu FUJIYOSHI
    Type: PAPER
    Subject area: Matching
    2012 Volume E95.D Issue 7 Pages 1766-1774
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper proposes a high-precision, high-speed keypoint matching method using two-stage randomized trees (RTs). Keypoint classification with conventional RTs achieves high-precision, real-time keypoint matching. However, the wide variety of view transformations of the templates expressed by the RTs makes it difficult to achieve high-precision classification for all transformations with a single set of RTs. To solve this problem, the proposed method classifies the template view transformation in the first stage and then, in the second stage, classifies the keypoints using the RTs that correspond to the view transformation identified in the first stage. Testing demonstrated that the proposed method is 88.5% more precise than SIFT and 63.5% more precise than conventional RTs for images in which the viewpoint of the object is rotated by 70 degrees. We have also shown that the proposed method supports real-time keypoint matching at 12 fps.
    Download PDF (2139K)
  • Yanlei GU, Mehrdad PANAHPOUR TEHRANI, Tomohiro YENDO, Toshiaki FUJII, ...
    Type: PAPER
    Subject area: Recognition
    2012 Volume E95.D Issue 7 Pages 1775-1790
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, traffic sign recognition is developed on a newly proposed dual-focal active camera system, in which a telephoto camera assists a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the field of view of the wide-angle camera; this image provides enough information for recognition when the traffic sign appears at low resolution in the wide-angle image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle and telephoto cameras. In addition, to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the patterns of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution cascade detector is trained and used to locate traffic signs at low resolution in the wide-angle image. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution telephoto image. Finally, a set of experiments on traffic sign recognition based on the proposed system is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
    Download PDF (3877K)
  • Dao-Huu HUNG, Gee-Sern HSU, Sheng-Luen CHUNG, Hideo SAITO
    Type: PAPER
    Subject area: Recognition
    2012 Volume E95.D Issue 7 Pages 1791-1803
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, a fast and automated method of counting pedestrians in crowded areas is proposed, with three contributions. First, we propose Local Empirical Templates (LET), which outline the foregrounds typically made by single pedestrians in a scene. LET are extracted by clustering the foregrounds of single pedestrians with similar silhouette features; this is done automatically for unknown scenes. Second, comparing the size of a group foreground, made by a group of pedestrians, to that of the appropriate LET captured in the same image patch produces a density ratio. Because of this local scale normalization, the density ratio has a bound closely related to the number of pedestrians who induce the group foreground. Finally, to extract the bounds of the density ratios for groups with different numbers of pedestrians, we propose a simulation based on 3D human models in which camera viewpoints and pedestrian proximity are easily manipulated. We collect hundreds of typical occluded-people patterns with distinct degrees of human proximity under a variety of camera viewpoints, and build distributions of density ratios with respect to the number of pedestrians from the computed density ratios of these patterns. The simulation is performed in an offline learning phase to extract the bounds from the distributions, which are then used to count pedestrians in online settings. We find that the bounds appear to be invariant to camera viewpoint and human proximity. The performance of the proposed method is evaluated on our collected videos and on the PETS 2009 datasets. For our collected videos, with a resolution of 320x240, our method runs in real time with good accuracy at a frame rate of around 30 fps, and consumes a small amount of computing resources. On the PETS 2009 datasets, the proposed method achieves results competitive with other methods tested on the same datasets [1], [2].
    Download PDF (5144K)
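At its core, the density-ratio counting idea above reduces to a ratio-plus-lookup computation. The sketch below assumes blob areas as the size measure and uses made-up bound values for illustration; in the paper, the bounds are learned offline from simulated occlusion patterns.

```python
def count_pedestrians(group_area, let_area, bounds):
    """Density-ratio counting sketch: divide the group-foreground area
    by the area of the local single-person template (LET) captured at
    the same image patch, then look the ratio up in precomputed
    per-count bounds. `bounds` maps a pedestrian count to a (low, high)
    ratio interval learned offline."""
    ratio = group_area / let_area
    for count, (lo, hi) in sorted(bounds.items()):
        if lo <= ratio <= hi:
            return count
    return None  # ratio outside all learned intervals

# Illustrative bounds (assumed values, not taken from the paper).
BOUNDS = {1: (0.8, 1.2), 2: (1.3, 2.1), 3: (2.2, 3.0)}
```

Because both areas come from the same image patch, the ratio is locally scale-normalized, which is why fixed per-count intervals can work across viewpoints.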
  • Akira ISHII, Hiroaki YAMASHIRO
    Type: PAPER
    Subject area: 3D Reconstruction
    2012 Volume E95.D Issue 7 Pages 1804-1810
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    A differential pair of convergent and divergent lenses with adjustable lens spacing (a “differential lens”) was devised as a varifocal lens and was successfully integrated into an object-space telecentric lens to build a focus mechanism with constant magnification. This integration was done by placing the front principal point of the varifocal lens at the rear focal point of the telecentric lens within a practical positioning tolerance. Although the constant-magnification focus mechanism is a parallel projection system, a system for perfect perspective projection imaging without shifting the projection center during focusing could be built simply by placing this focus mechanism between an image-taking lens with image-space telecentricity and an image sensor. The focus resolution obtained experimentally was 0.92 µm (σ) for the parallel projection system with a depth range of 1.0 mm, and 0.25 mm (σ) for the perspective projection system with a range from 120 to 350 mm within a desktop space. A marginal image resolution of 100 lp/mm was obtained with optical distortion of less than 0.2% in the parallel projection system. The differential lens could work up to 55 Hz for a sinusoidal change in lens spacing with a peak-to-valley amplitude of 425 µm when a tiny plano-concave divergent lens was translated by a piezoelectric positioner. Therefore, images that were entirely in focus were generated at a frame rate of 30 Hz for an object moving in depth at a speed of around 150 mm/s within the desktop space. Thus, three-dimensional (3-D) imaging that provides 3-D resolution based on fast focusing was accomplished in both microscopic and macroscopic spaces.
    Download PDF (2018K)
  • Kazuki MATSUDA, Norimichi UKITA
    Type: PAPER
    Subject area: 3D Reconstruction
    2012 Volume E95.D Issue 7 Pages 1811-1818
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper proposes a method for reconstructing a smooth and accurate 3D surface. Recent machine vision techniques can reconstruct accurate 3D points and normals of an object, and the reconstructed point cloud is used to generate the object's 3D surface by surface reconstruction. The more accurate the point cloud, the more correct the surface becomes. To improve the surface, we propose a way to integrate the advantages of existing point reconstruction techniques. Specifically, robust and dense reconstruction with Shape-from-Silhouettes (SfS) is integrated with accurate stereo reconstruction. Unlike the gradual shape shrinking of space carving, our method obtains 3D points by SfS and stereo independently and accepts only the correctly reconstructed points. Experimental results show the improvement achieved by our method.
    Download PDF (4243K)
  • Qingyong LI, Yaping HUANG, Zhengping LIANG, Siwei LUO
    Type: LETTER
    Subject area: Image Processing
    2012 Volume E95.D Issue 7 Pages 1819-1822
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding, and the valley-emphasis method, for rail defect detection.
    Download PDF (346K)
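Like the baselines it is compared against, MWOC is an exhaustive search over candidate gray levels that maximizes a criterion. The skeleton below shows that search with Otsu's between-class variance, one of the baselines named above, plugged in as the criterion; the paper's actual criterion (object correlation times a defect-weight term) is not reproduced here, so treat the criterion as a stand-in.

```python
import numpy as np

def select_threshold(gray, criterion):
    """Generic threshold selection of the kind MWOC performs: score
    every candidate threshold on an 8-bit image with a criterion and
    keep the argmax. `criterion` is pluggable; MWOC's product of object
    correlation and defect weight would go here."""
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        score = criterion(gray, t)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def otsu_criterion(gray, t):
    """Between-class variance (Otsu's method), one of the baselines
    the letter compares against."""
    fg, bg = gray[gray >= t], gray[gray < t]
    if fg.size == 0 or bg.size == 0:
        return -np.inf  # degenerate split: reject
    w1, w2 = fg.size / gray.size, bg.size / gray.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2
```

On a clearly bimodal image the selected threshold separates the two modes, which is the behavior any criterion plugged into this loop should reproduce.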
  • Seok-Min CHAE, In-Ho SONG, Sung-Hak LEE, Kyu-Ik SOHNG
    Type: LETTER
    Subject area: Signal Processing
    2012 Volume E95.D Issue 7 Pages 1823-1826
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this study, we show that motion blur is caused by the exposure time of the video camera as well as by the characteristics of the LCD system. We also propose an evaluation method for motion picture quality based on the frequency responses of the video camera and of LCD systems of the hold and scanning-backlight types.
    Download PDF (413K)
  • Junying XIA, Xiaoquan XU, Qi ZHANG, Jiulong XIONG
    Type: LETTER
    Subject area: 3D Pose
    2012 Volume E95.D Issue 7 Pages 1827-1829
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Existing pose estimation algorithms suffer from either low performance or heavy computation cost. In this letter, we present an approach that improves the attractive algorithm called Orthogonal Iteration. A new form of the fundamental equations is derived that reduces the computation cost significantly, and a paraperspective camera model is used instead of a weak-perspective camera model during initialization, which improves stability. Experimental results validate the accuracy and stability of the proposed algorithm and show that its computational cost compares favorably with that of the O(n) non-iterative algorithm.
    Download PDF (256K)
Regular Section
  • Makoto OHKI
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2012 Volume E95.D Issue 7 Pages 1830-1838
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose effective mutation operators for a Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem in which many requirements must be considered. In real hospitals, the schedule changes frequently, and changes to the shift schedule yield various problems, for example a drop in the nursing level. We describe a technique for reoptimizing the nurse schedule in response to a change. The conventional CGA has good local search ability thanks to its crossover operator, but often stagnates in unfavorable situations because its global search ability is poor. When the optimization stagnates for a long generation cycle, the searching point, in this case the population, is likely caught in a wide local-minimum area. To escape such a local-minimum area, a small change in the population is required. Based on this consideration, we propose a mutation operator activated depending on the optimization speed. When the optimization stagnates, in other words when the optimization speed decreases, the mutation yields small changes in the population, allowing it to escape from a local-minimum area. However, this mutation operator requires two well-defined parameters, meaning that users have to choose their values carefully. To solve this problem, we propose a periodic mutation operator defined by only one parameter. This simplified mutation operator is effective over a wide range of the parameter's values.
    Download PDF (1407K)
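The stagnation-triggered mutation described above can be sketched as follows. Both the stagnation test (no improvement of the best fitness over a window of generations, standing in for "the optimization speed decreases") and the gene perturbation are assumptions made for illustration; the paper's periodic operator is defined by a single period parameter, which the window length here loosely mirrors.

```python
import random

def maybe_mutate(population, history, period, rate=0.05):
    """Stagnation-triggered mutation sketch. `history` holds the best
    fitness per generation (lower is better). If the best fitness has
    not improved within the last `period` generations, randomly perturb
    a small fraction of genes; otherwise leave the population alone."""
    improving = (
        len(history) < period
        or min(history[-period:]) < min(history[:-period] or [float("inf")])
    )
    if improving:
        return population              # still making progress: no mutation
    mutated = []
    for individual in population:
        genes = list(individual)
        for i in range(len(genes)):
            if random.random() < rate:
                genes[i] = random.choice((0, 1, 2))  # e.g. shift codes
        mutated.append(genes)
    return mutated
```

The small per-gene rate keeps the change in the population small, matching the abstract's point that only a small perturbation is needed to escape a wide local-minimum area.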
  • Yuyu YUAN, Chuanyi LIU, Jie CHENG, Xiaoliang WANG
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2012 Volume E95.D Issue 7 Pages 1839-1846
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Execution performance is critical for large-scale and data-intensive workflows. This paper proposes DISWOP, a novel scheduling algorithm for data-intensive workflow optimization; it consists of three main steps: workflow process generation, task & resource mapping, and task clustering. To evaluate the effectiveness and efficiency of DISWOP, a comparative evaluation of different workflows was conducted on a prototype workflow platform. The results show that DISWOP can speed up execution by about 1.6-2.3 times, depending on the task scale.
    Download PDF (1041K)
  • Yusaku KANETA, Shingo YOSHIZAWA, Shin-ichi MINATO, Hiroki ARIMURA, Yos ...
    Type: PAPER
    Subject area: Computer System
    2012 Volume E95.D Issue 7 Pages 1847-1857
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a novel architecture for large-scale regular expression matching, called the dynamically reconfigurable bit-parallel NFA architecture (Dynamic BP-NFA), which allows dynamic loading of regular expressions on the fly as well as efficient pattern matching for fast data streams. This is the first dynamically reconfigurable hardware with guaranteed performance for the class of extended patterns, a subclass of regular expressions consisting of unions of characters and their repeats. This class allows operators such as character classes, gaps, optional characters, and bounded and unbounded repeats of character classes. The key to our architecture is the bit-parallel pattern matching approach, in which the information of an input non-deterministic finite automaton (NFA) is first compactly encoded in bit-masks stored in a collection of registers and block RAMs. The NFA is then efficiently simulated by fixed circuitry using bitwise Boolean and arithmetic operations, consuming one input character per clock regardless of the actual contents of the input text. Experimental results showed that our hardware for both string and extended patterns is comparable in performance to previous dynamically reconfigurable hardware.
    Download PDF (719K)
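The bit-parallel NFA simulation at the core of such architectures is easiest to see in the plain string-pattern case, known in software as the Shift-And algorithm: the pattern is encoded as per-character bit-masks, and all NFA states advance with one shift, OR, and AND per input character, mirroring the one-character-per-clock circuitry. This software sketch covers strings only; the extended-pattern operators (character classes, gaps, repeats) are omitted.

```python
def shift_and_search(pattern, text):
    """Shift-And bit-parallel matching: bit i of `state` is set iff the
    NFA state accepting pattern[:i+1] is active. One shift/OR/AND per
    input character updates all states at once. Returns the start
    positions of every occurrence of `pattern` in `text`."""
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)   # bit-mask per character
    accept = 1 << (len(pattern) - 1)            # final-state bit
    state, hits = 0, []
    for pos, c in enumerate(text):
        # Advance every active state and restart at state 0.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - len(pattern) + 1)
    return hits
```

Because the whole state vector fits in one machine word for short patterns, the per-character cost is constant, which is what gives the hardware its guaranteed one-character-per-clock throughput.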
  • Bo LIU, Peng CAO, Min ZHU, Jun YANG, Leibo LIU, Shaojun WEI, Longxing ...
    Type: PAPER
    Subject area: Computer System
    2012 Volume E95.D Issue 7 Pages 1858-1871
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a novel architecture design that optimizes the reconfiguration process of a coarse-grained reconfigurable architecture (CGRA) called Reconfigurable Multimedia System II (REMUS-II). In REMUS-II, the tasks in multimedia applications are divided into two parts: computing-intensive tasks and control-intensive tasks. REMUS-II contains two Reconfigurable Processor Units (RPUs) for accelerating computing-intensive tasks and a Micro-Processor Unit (µPU) for accelerating control-intensive tasks. As a large-scale CGRA, REMUS-II provides satisfying solutions in terms of both efficiency and flexibility, which makes it well suited for video processing, where high flexibility is required and many computation tasks are involved. To meet the demanding dynamic reconfiguration performance of multimedia applications, the reconfiguration architecture of REMUS-II must be well designed. To optimize it, we propose a hierarchical configuration storage structure and a 3-stage reconfiguration processing structure. Furthermore, several optimization methods for configuration reuse are introduced to further improve reconfiguration performance, covering two aspects: a multi-target reconfiguration method and configuration caching strategies. Experimental results showed that the proposed reconfiguration architecture improves the performance of the reconfiguration process by a factor of 4. Based on RTL simulation, REMUS-II can decode 1080p@32fps H.264 HiP@Level4 and 1080p@40fps High-level MPEG-2 streams at a clock frequency of 200 MHz. The proposed REMUS-II system has been implemented in a TSMC 65 nm process; the die size is 23.7 mm2 and the estimated on-chip dynamic power is 620 mW.
    Download PDF (1457K)
  • Marat ZHANIKEEV, Yoshiaki TANAKA
    Type: PAPER
    Subject area: Software System
    2012 Volume E95.D Issue 7 Pages 1872-1881
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In NGN standards, the End Host, also referred to as Terminal Equipment (TE), holds an important place in end-to-end path performance. However, most researchers neglect TE performance when considering the performance of end-to-end paths. To the best of the authors' knowledge, no previous study has proposed a model of TE performance. This paper proposes a method for measuring the performance of TE and for extracting a model from the measurement data. The measurement was made possible by a special NPU (Network Processing Unit) implemented as a programmable NIC. Along with the probing itself, a framework for removing the skew between the NPU and OS clocks is developed in this paper. The multidimensional analysis covers the probing method, packet size, and background traffic volume, and studies their effects on TE performance. A method for extracting a generic TE model is proposed. The outcome of this research can be used for modelling TE in simulations and for modelling end-to-end performance when considering QoS in NGN.
    Download PDF (731K)
  • Hsin-Hung LIN, Toshiaki AOKI, Takuya KATAYAMA
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 7 Pages 1882-1893
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we introduce an approach to service adaptation for behaviorally mismatched services using pushdown model checking. The approach uses pushdown systems to model adaptors, so that non-regular behavior in service interactions can be captured. The use of pushdown model checking also integrates adaptation and verification, which guarantees that an adaptor generated by our approach not only resolves behavior mismatches but also satisfies the usual verification properties if specified. Unlike conventional approaches, we do not rely on specifications of adaptor contracts; we take only the information in the behavior interfaces of the services and perform fully automated adaptor generation. Three requirements, concerning behavior mismatches, unbounded messages, and branchings, are retrieved from the behavior interfaces and used to build LTL properties for pushdown model checking. Properties for unbounded messages, i.e., messages sent and received arbitrarily many times, are especially addressed, since they characterize non-regular behavior in service composition. This paper also reports experimental results from a prototype tool and provides directions for building BPEL adaptors from the behavior interface of a generated adaptor. The results show that our approach solves behavior mismatches and successfully captures non-regular behavior in service composition at the scale of real service applications.
    Download PDF (1620K)
  • Amril SYALIM, Takashi NISHIDE, Kouichi SAKURAI
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 7 Pages 1894-1907
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Recently, there has been much concern about the provenance of distributed processes, that is, the documentation of the origin of an object and of the processes that produced it in a distributed system. Provenance has many applications, in the form of medical records, documentation of processes in computer systems, recording the origin of data in the cloud, and documentation of human-executed processes. The provenance of distributed processes can be modeled by a directed acyclic graph (DAG) in which each node represents an entity and each edge represents the origin and causal relationship between entities. Without sufficient security mechanisms, the provenance graph suffers from integrity and confidentiality problems, for example changes or deletions of correct nodes, additions of fake nodes and edges, and unauthorized access to sensitive nodes and edges. In this paper, we propose an integrity mechanism for the provenance graph using digital signatures involving three parties: the process executors, who are responsible for creating the nodes; a provenance owner, who records the nodes in the provenance store; and a trusted party that we call the Trusted Counter Server (TCS), which records the number of nodes stored by the provenance owner. We show that the mechanism can detect integrity problems in the provenance graph, namely unauthorized and malicious “authorized” updates, even if all the parties except the TCS collude to update the provenance. In this scheme, the TCS needs only minimal storage (linear in the number of provenance owners). To protect confidentiality and allow efficient access-control administration, we propose a method of encrypting the provenance graph that allows access by paths and compartments in the graph. We argue that encryption is important as a mechanism to protect provenance data stored in an untrusted environment. We analyze the security of the integrity mechanism and perform experiments to measure the performance of both mechanisms.
    Download PDF (660K)
  • Weihong CAI, Richeng HUANG, Xiaoli HOU, Gang WEI, Shui XIAO, Yindong C ...
    Type: PAPER
    Subject area: Information Network
    2012 Volume E95.D Issue 7 Pages 1908-1917
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    The role-based access control (RBAC) model is widely recognized as an efficient access control model and has become a hot research topic in information security. However, in large-scale enterprise application environments, the traditional RBAC model based on the role hierarchy has the following deficiencies. First, it cannot effectively reflect role relationships in complicated cases, which does not accord with practical applications. Second, a senior role unconditionally inherits all permissions of its junior roles; thus a user under the supervisor role may accumulate all permissions, which easily causes the abuse of permissions and violates the least-privilege principle, one of the main security principles. To deal with these problems, after analyzing permission types and role relationships, we propose the concept of the atom role and build an atom-role-based access control model, called ATRBAC, by dividing the permission set of each regular role according to inheritance path relationships. Application-specific analysis shows that this model can meet access control requirements well.
    Download PDF (979K)
  • Xinpeng ZHANG, Yasuhito ASANO, Masatoshi YOSHIKAWA
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 7 Pages 1918-1931
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Mining and explaining relationships between concepts are challenging tasks in the field of knowledge search. We propose a new approach to these tasks using disjoint paths formed by links in Wikipedia. Disjoint paths are easy to understand and do not contain redundant information. To realize this approach, we propose a naive method and a generalized-flow-based method, together with a technique for mining more disjoint paths using the generalized-flow-based method. We also apply the approach to the classification of relationships. Our experiments reveal that the generalized-flow-based method can mine many disjoint paths important for understanding a relationship, and that the classification is effective for explaining relationships.
    Download PDF (1361K)
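As a simplified illustration of edge-disjoint paths between two concepts in a link graph, repeated BFS can greedily extract paths and remove their edges. This greedy sketch is not the paper's generalized-flow method (which can find more and better paths); it only illustrates the notion of disjointness.

```python
from collections import deque

def disjoint_paths(edges, s, t):
    """Greedily collect edge-disjoint s->t paths by repeated BFS,
    removing each found path's edges before searching again."""
    remaining = set(edges)
    paths = []
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (a, b) in remaining:
                if a == u and b not in parent:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            return paths  # no further edge-disjoint path exists
        # Reconstruct the path and retire its edges.
        path, v = [], t
        while v is not None:
            path.append(v)
            v = parent[v]
        path.reverse()
        remaining -= set(zip(path, path[1:]))
        paths.append(path)

# Toy link graph: three edge-disjoint routes from concept A to concept T.
paths = disjoint_paths([("A", "B"), ("B", "T"),
                        ("A", "C"), ("C", "T"),
                        ("A", "T")], "A", "T")
assert len(paths) == 3
```

Because no edge is reused, each extracted path contributes non-redundant evidence for the A-T relationship, which is the intuition the abstract appeals to.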
  • Nattapong TONGTEP, Thanaruk THEERAMUNKONG
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 7 Pages 1932-1946
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai-specific characteristics, including no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word order. Unlike most previous works, which focused on NE relations of specific actions, such as work_for, live_in, located_in, and kill, this paper proposes a more general type of NE relation, called the predicate-oriented relation (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e. punctuation marks (such as token spaces), entity types, and the number of entities, and then apply five commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves F-measures of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relation (action-location, location-action, action-person and person-action) in crime-related news documents, using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class imbalance on the performance of relation extraction are also explored.
    Download PDF (3849K)
  • Akihiro INOKUCHI, Hiroaki IKUTA, Takashi WASHIO
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 7 Pages 1947-1958
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    The mining of frequent subgraphs from labeled graph data has been studied extensively. Furthermore, much attention has recently been paid to frequent pattern mining from graph sequences. A method, called GTRACE, has been proposed to mine frequent patterns from graph sequences under the assumption that changes in graphs are gradual. Although GTRACE mines the frequent patterns efficiently, it still needs substantial computation time to mine the patterns from graph sequences containing large graphs and long sequences. In this paper, we propose a new version of GTRACE that permits efficient mining of frequent patterns based on the principle of a reverse search. The underlying concept of the reverse search is a general scheme for designing efficient algorithms for hard enumeration problems. Our performance study shows that the proposed method is efficient and scalable for mining both long and large graph sequence patterns and is several orders of magnitude faster than the original GTRACE.
    Download PDF (480K)
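The reverse-search principle the abstract invokes can be illustrated on a toy enumeration problem (subsets rather than graph-sequence patterns): define a parent function that induces a tree rooted at a trivial object, then enumerate by DFS over the inverse of that function. This is a generic sketch of the principle, not the paper's algorithm.

```python
def parent(subset):
    """Parent function: drop the largest element. This induces a tree
    over all subsets of {0..n-1}, rooted at the empty set."""
    return subset - {max(subset)}

def children(subset, n):
    """Invert the parent function: a child adds one element strictly
    larger than everything already in the subset."""
    lo = max(subset) + 1 if subset else 0
    for x in range(lo, n):
        yield subset | {x}

def reverse_search(n, subset=frozenset()):
    """DFS over the parent-induced tree visits every subset exactly once,
    with no need to store previously seen objects."""
    yield subset
    for c in children(subset, n):
        yield from reverse_search(n, c)

found = list(reverse_search(3))
assert len(found) == 8  # all 2^3 subsets, each exactly once
assert parent(frozenset({0, 2})) == frozenset({0})
```

The appeal for hard enumeration problems is that memory stays proportional to the search depth: each object has a unique parent, so duplicates never arise and no visited-set is kept.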
  • Hironori TAKEUCHI, Taiga NAKAMURA, Takahira YAMAGUCHI
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 7 Pages 1959-1968
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on requirements documents such as use cases and consider how to create models from use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them in a predicate-argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without extending existing general lexical resources such as case frame dictionaries, and we design a less language-dependent model construction architecture. Using the constructed model, we consider a system for quality analysis of the use cases and for automated test case generation that maintains traceability between document sets. With the proposed prototype system, we evaluated the reuse of existing use cases and automatically generated test case steps from real-world use cases in the development of a system based on a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. We could also generate good test cases for about 90% of the real use cases through manual improvement of the descriptions based on feedback from the quality analysis system.
    Download PDF (896K)
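Once each use-case step carries semantic labels such as actor and action, generating test-case steps can be as simple as filling templates. The sketch below is a hypothetical illustration of that final step (field names and templates are invented, not the paper's).

```python
# Hypothetical labeled use-case steps, as a semantic analysis might emit.
steps = [
    {"actor": "User",   "action": "enters",   "object": "the order ID"},
    {"actor": "System", "action": "displays", "object": "the order details"},
]

def to_test_steps(labeled_steps):
    """Turn labeled use-case steps into numbered test-case steps:
    system actions become verification steps, user actions become inputs."""
    out = []
    for i, s in enumerate(labeled_steps, 1):
        if s["actor"] == "System":
            out.append(f"{i}. Verify that the system {s['action']} {s['object']}.")
        else:
            out.append(f"{i}. {s['actor']} {s['action']} {s['object']}.")
    return out

for line in to_test_steps(steps):
    print(line)
```

Keeping the generated steps tied to the labeled source steps is what preserves traceability: a change to a use-case step identifies exactly which test steps must be regenerated.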
  • Jegoon RYU, Sei-ichiro KAMATA, Alireza AHRARY
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 7 Pages 1969-1978
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a novel gait recognition framework, the Spherical Space Model with Human Point Clouds (SSM-HPC), to recognize the front view of human gait. A new gait representation, the Marching in Place (MIP) gait, is also introduced, which preserves the spatiotemporal characteristics of an individual's gait. In comparison with previous studies on gait recognition, which usually use human silhouette images from image sequences, this research applies three-dimensional (3D) point cloud data of the human body obtained from a stereo camera. The proposed framework exhibits gait recognition rates superior to those of other gait recognition methods.
    Download PDF (3072K)
  • Bobo ZENG, Guijin WANG, Xinggang LIN, Chunxiao LIU
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 7 Pages 1979-1988
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This work presents a real-time human detection system for VGA (Video Graphics Array, 640×480) video, which is well suited to visual surveillance applications. To achieve high running speed and accuracy, we first design multiple fast scalar feature types on the gradient channels and experimentally identify that the NOGCF (Normalized Oriented Gradient Channel Feature) performs best with Gentle AdaBoost in cascaded classifiers. A confidence measure for cascaded classifiers is developed and utilized in the subsequent tracking stage. Second, we apply speedup techniques: a detector pyramid for multi-scale detection and channel compression for integral channel calculation. Third, by integrating the detector's discrete detections with the continuous detection confidence map, we employ a two-layer tracking-by-detection algorithm for further speedup and accuracy improvement. Experiments show that the system is significantly faster than other methods, running at 20fps on VGA video, with better accuracy as well.
    Download PDF (1672K)
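The integral channel calculation mentioned above rests on summed-area tables, which let any rectangular channel sum be read in constant time. A minimal, dependency-free sketch of that standard building block (the paper additionally compresses the channels):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum over the inclusive rectangle [y0..y1] x [x0..x1]
    in O(1) using at most four table lookups."""
    s = ii[y1][x1]
    if y0:
        s -= ii[y0 - 1][x1]
    if x0:
        s -= ii[y1][x0 - 1]
    if y0 and x0:
        s += ii[y0 - 1][x0 - 1]
    return s

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
assert box_sum(ii, 0, 0, 2, 2) == 45          # whole image
assert box_sum(ii, 1, 1, 2, 2) == 5 + 6 + 8 + 9
```

Because every scalar feature over a gradient channel reduces to a few such box sums, feature evaluation cost becomes independent of window size, which is what makes cascaded detection at VGA frame rates feasible.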
  • Zhenfeng SHI, Xiamu NIU, Liyang YU
    Type: PAPER
    Subject area: Computer Graphics
    2012 Volume E95.D Issue 7 Pages 1989-2001
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    Visual degradation is usually introduced during 3D mesh simplification. The main issue in mesh simplification is to maximize the simplification ratio while minimizing the visual degradation. Therefore, effective and objective evaluation of the visual degradation is essential in order to select the simplification ratio. Some objective geometric and subjective perceptual metrics have been proposed. However, few objective metrics have taken human visual characteristics into consideration. To evaluate the visual degradation introduced by mesh simplification for a 3D triangular object, we integrate the structural degradation with mesh saliency and propose a new objective and multi-scale evaluation metric named Global Perceptual Structural Degradation (GPSD). The proper selection of the simplification ratio under a given distance-to-viewpoint is also discussed in this paper. The accuracy and validity of the proposed metric have been demonstrated through subjective experiments. The experimental results confirm that the GPSD metric shows better 3D model-based multi-scale perceptual evaluation capability.
    Download PDF (5769K)
  • Yancang CHEN, Lunguo XIE
    Type: LETTER
    Subject area: Computer System
    2012 Volume E95.D Issue 7 Pages 2002-2005
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a single-cycle shared-output-buffered router for Networks-on-Chip. In the output ports, each input port always has an output virtual channel (VC), which can be exchanged by a VC swapper. The critical path is only 24 logic gates, and the router reduces area overhead by 9.4% compared with the classical router.
    Download PDF (309K)
  • Guangchun LUO, Ying MA, Ke QIN
    Type: LETTER
    Subject area: Software Engineering
    2012 Volume E95.D Issue 7 Pages 2006-2008
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    An asymmetric classifier based on kernel partial least squares is proposed for software defect prediction. This method improves the prediction performance on imbalanced data sets. The experimental results validate its effectiveness.
    Download PDF (67K)
  • Yeo-Chan YOON, Chang-Ki LEE, Hyun-Ki KIM, Myung-Gil JANG, Pum Mo RYU, ...
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 7 Pages 2009-2012
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we present a supervised learning method to seek out answers to the most frequently asked descriptive questions: reason, method, and definition questions. Most previous question answering systems focus on factoid, list, or definitional questions. However, descriptive questions such as reason questions and method questions are also frequently asked by users. We propose a system for these types of questions. The system conducts an answer search as follows. First, we analyze the user's question and extract search keywords and the expected answer type. Second, information retrieval results are obtained from an existing search engine such as Yahoo or Google. Finally, we rank the results to find snippets containing answers to the questions based on a ranking SVM algorithm. We also propose features to identify snippets containing answers to descriptive questions. The features are adaptable and thus are not dependent on answer type. Experimental results show that the proposed method and features are clearly effective for the task.
    Download PDF (591K)
  • Sungyong YOON, Hee-Suk PANG, Koeng-Mo SUNG
    Type: LETTER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 7 Pages 2013-2016
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves a significant lossless bit reduction of 9.93% to 12.14% for spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.
    Download PDF (258K)
  • Tung-chin LEE, Young-cheol PARK, Dae-hee YOUN
    Type: LETTER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 7 Pages 2017-2020
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a switchable linear prediction (LP)/warped linear prediction (WLP) hybrid scheme for the transform coded excitation (TCX) coder, which is adopted as a core codec in AMR-WB+ and USAC. The proposed algorithm selects either an LP or a WLP filter on a per-frame basis. To provide smooth transitions between LP and WLP frames, a window switching scheme is developed using sine and rectangular windows. In addition, a Gaussian Mixture Model (GMM)-based classification module is used to determine the prediction mode. A subjective listening test confirmed that the proposed LP/WLP switching scheme offers improved sound quality.
    Download PDF (1182K)
  • Yanli WAN, Zhenjiang MIAO, Zhen TANG, Lili WAN, Zhe WANG
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 7 Pages 2021-2024
    Published: July 01, 2012
    Released: July 01, 2012
    JOURNALS FREE ACCESS
    This letter proposes an efficient local descriptor for wide-baseline dense matching. It improves the existing Daisy descriptor by combining the intensity-based Haar wavelet response with a new color-based ratio model. The color ratio model is invariant to changes of viewing direction, object geometry, and the direction, intensity and spectral power distribution of the illumination. The experiments show that our descriptor has high discriminative power and robustness.
    Download PDF (764K)
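The letter's exact ratio model is not spelled out in this abstract; as background, one classic illumination-invariant color ratio used in color-constancy work (not necessarily the authors' exact model) is the cross-ratio of two channels at two neighboring pixels, in which any per-pixel illumination scale factor cancels.

```python
def cross_ratio(p1, p2):
    """Cross-ratio (R1*G2)/(R2*G1) of two RGB pixels: a per-pixel
    scale on all channels of either pixel cancels out."""
    r1, g1, _ = p1
    r2, g2, _ = p2
    return (r1 * g2) / (r2 * g1)

# Two neighboring pixels of the same surface...
p1, p2 = (60, 30, 10), (90, 45, 15)
# ...seen under illumination dimmed by a different factor at each pixel.
dim1 = tuple(0.5 * c for c in p1)
dim2 = tuple(0.8 * c for c in p2)

# The ratio is unchanged by the illumination change.
assert abs(cross_ratio(p1, p2) - cross_ratio(dim1, dim2)) < 1e-9
```

Invariants of this kind are what allow a dense-matching descriptor to stay stable across the lighting and viewpoint changes that a wide baseline introduces.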