IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E96.D, Issue 8
Showing 1-38 of the 38 articles in this issue
Special Section on Reconfigurable Systems
  • Hideharu AMANO
    2013 Volume E96.D Issue 8 p. 1581
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
  • Shouyi YIN, Dajiang LIU, Leibo LIU, Shaojun WEI
    Article type: PAPER
    Subject area: Design Methodology
    2013 Volume E96.D Issue 8 p. 1582-1591
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    A coarse-grained reconfigurable architecture (CGRA) is typically a hybrid architecture composed of a reconfigurable processing unit (RPU) and a host microprocessor. Computation-intensive kernels (e.g., loop nests) are often mapped onto RPUs to speed up program execution, so optimizing the mapping of loop nests is very important for improving CGRA performance. Processing element (PE) utilization rate, communication volume and reconfiguration cost are three crucial factors for the performance of RPUs. Loop transformations greatly affect these three factors and are therefore of much significance when mapping loops onto RPUs. In this paper, a joint loop transformation approach for RPUs is proposed, in which the PE utilization rate, communication cost and reconfiguration cost are considered jointly. Our approach can be integrated into compilers for CGRAs to improve operating performance. Experimental results show that, compared with the communication-minimal approach, our scheme improves execution time by 5.8% and 13.6% on motion estimation (ME) and partial differential equation (PDE) solver kernels, respectively. The run-time complexity is also acceptable for practical cases.
  • Tanvir AHMED, Jun YAO, Yuko HARA-AZUMI, Shigeru YAMASHITA, Yasuhiko NA ...
    Article type: PAPER
    Subject area: Design Methodology
    2013 Volume E96.D Issue 8 p. 1592-1601
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Nowadays, fault tolerance plays an increasingly important role in covering the rising soft/hard error rates in electronic devices that accompany advances in process technology. Research shows that wear-out faults have a gradual onset, starting with a timing fault and eventually leading to a permanent fault. Error detection is thus a required function to maintain execution correctness. Currently, however, many highly dependable methods for covering permanent faults are over-designed, using very frequent checking because they lack awareness of the fault probability in the circuits used for pending executions. In this research, to address this over-checking problem, we introduce a metric for permanent defects, the operation defective probability (ODP), to quantitatively guide the placement of check operations at critical positions only. Using this selective checking approach, we can achieve near-100% dependability with about 53% fewer check operations compared to the ideal reliable method, which performs exhaustive checks to guarantee zero error propagation. By this means, we reduce power consumption by 21.7% by avoiding the non-critical checking of the over-designed approach.
  • Qian ZHAO, Kazuki INOUE, Motoki AMAGASAKI, Masahiro IIDA, Morihiro KUG ...
    Article type: PAPER
    Subject area: Design Methodology
    2013 Volume E96.D Issue 8 p. 1602-1612
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    The most widely used open-source field programmable gate array (FPGA) placement and routing tool is the Versatile Packing, Placement and Routing (VPR) software developed at the University of Toronto, Canada. VPR calculates area and timing using the target FPGA architecture and physical information. However, it cannot be used efficiently in FPGA IP design for two reasons. First, VPR cannot directly support most newly developed FPGA architectures, and modifying the C-coded VPR so that it can evaluate a number of new architectures is time consuming. Second, the accuracy of the VPR performance results is inadequate for evaluating a complete FPGA IP in a design that targets LSI production. We propose an FPGA design framework focused on improving FPGA IP design efficiency. A novel FPGA routing tool, EasyRouter, is developed in this framework using the C# language. Because it uses an object-oriented programming method, it has less source code and is easier to manage than VPR, which shortens development time. By using simple HDL code templates, EasyRouter can automatically generate the entire HDL code for a chip and the configuration bitstream. With these files, the FPGA IP can be evaluated with commercial VLSI CAD systems with high accuracy and reliability.
  • Kazuteru NAMBA, Nobuhide TAKASHINA, Hideo ITO
    Article type: PAPER
    Subject area: Test and Verification
    2013 Volume E96.D Issue 8 p. 1613-1623
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Small delay defects can cause serious issues such as very short lifetimes in recent VLSI devices. Delay measurement is useful for detecting small delay defects in manufacturing testing. This paper presents a design for delay measurement to detect small delay defects on global routing resources, such as double, hex and long lines, in a Xilinx Virtex-4 based FPGA. This paper also shows a measurement method using the proposed design. The proposed method is based on an existing one for SoCs that uses a delay value measurement circuit (DVMC). It modifies the construction of configurable logic blocks (CLBs) and utilizes a newly added on-chip DVMC. The number of configurations required by the proposed measurement is 60, which is comparable to that required by stuck-at fault testing for global routing resources in FPGAs. The area overhead is low for general FPGAs, in which the area of routing resources is much larger than that of other elements such as CLBs. The area of each modified CLB is 7% larger than that of an original CLB, and the area of the on-chip DVMC is 22% of that of an original CLB. For recent FPGAs, we estimate that the area overhead is approximately 2% or less of the FPGA.
  • Toshihiro KAMEDA, Hiroaki KONOURA, Dawood ALNAJJAR, Yukio MITSUYAMA, M ...
    Article type: PAPER
    Subject area: Test and Verification
    2013 Volume E96.D Issue 8 p. 1624-1631
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper proposes a procedure for avoiding delay faults in the field by assessing slack during standby time. The proposed procedure performs path delay testing and checks whether the slack is larger than a threshold value using selectable delays embedded in basic elements (BEs). If the slack is smaller than the threshold, a pair of BEs to be replaced, which maximizes the path slack, is identified. Experimental results with two application circuits mapped on a coarse-grained architecture show that, for aging-induced delay degradation, a small threshold slack, less than 1 ps in a test case, is enough to ensure delay fault prediction.
  • Yoshiya KOMATSU, Masanori HARIYAMA, Michitaka KAMEYAMA
    Article type: PAPER
    Subject area: Architecture
    2013 Volume E96.D Issue 8 p. 1632-1644
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper presents a novel architecture of an asynchronous FPGA for handshake-component-based design. Handshake-component-based design is suitable for large-scale, complex asynchronous circuits because of its understandability. This paper proposes an area-efficient FPGA architecture suitable for handshake-component-based asynchronous circuits. Moreover, Four-Phase Dual-Rail encoding is employed to construct circuits robust to delay variation, because the data paths in an FPGA are programmable. An FPGA based on the proposed architecture is implemented in a 65 nm process. Its evaluation results show that the proposed FPGA can implement handshake components efficiently.
  • Son-Truong NGUYEN, Masaaki KONDO, Tomoya HIRAO, Koji INOUE
    Article type: PAPER
    Subject area: Architecture
    2013 Volume E96.D Issue 8 p. 1645-1653
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Nowadays, the trend of developing microprocessors with hundreds of cores brings promising prospects for embedded systems. Realizing a high-performance and low-power many-core processor is becoming a primary technical challenge. Generally, three major issues must be resolved: 1) realizing efficient massively parallel processing, 2) reducing dynamic power consumption, and 3) improving software productivity. To deal with these issues, we propose using many low-performance but small and very low-power cores to obtain very high performance, and we develop a reference many-core architecture and a program development environment. This paper introduces a many-core architecture named SMYLEref and its prototype system built with off-the-shelf FPGA evaluation boards. Initial evaluation results of several SPLASH2 benchmark programs conducted on our 128-core platform are also presented and discussed.
  • Gugang GAO, Peng CAO, Jun YANG, Longxing SHI
    Article type: PAPER
    Subject area: Application
    2013 Volume E96.D Issue 8 p. 1654-1666
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    One of the largest challenges for coarse-grained reconfigurable arrays (CGRAs) is how to map applications efficiently. The key issues for mapping are (1) how to reduce the memory bandwidth, (2) how to exploit parallelism in algorithms, and (3) how to achieve load balancing and take full advantage of the hardware potential. In this paper, we propose a novel parallelism scheme, called ‘Hybrid partitioning’, for mapping an H.264 high definition (HD) decoder onto REMUS-II, a CGRA system-on-chip (SoC). Combining the good features of data partitioning and task partitioning, our methodology mainly consists of three levels from top to bottom: (1) a hybrid task pipeline based on the slice and macroblock (MB) levels; (2) MB row-level data parallelism; (3) a sub-MB level parallelism method. Further, at the sub-MB level, we propose several mapping strategies, such as hybrid variable block size motion compensation (Hybrid VBSMC) for MC, 2D-wave for intra 4×4 prediction, and a parallel processing order for deblocking. With these mapping strategies, we improve the algorithm's performance on REMUS-II. For example, for a luma 16×16 MB, Hybrid VBSMC achieves 4 times greater performance than VBSMC and 2.2 times greater performance than the fixed 4×4 partition approach. Finally, we achieve 1080p@33fps H.264 high-profile (HiP)@level 4.1 decoding when REMUS-II runs at 200MHz. Compared with typical hardware platforms, we achieve better performance, area, and flexibility. For example, our performance is approximately a 175% improvement over that of the commercial CGRA processor XPP-III while using only 70% of its area.
  • Hiroki NAKAHARA, Tsutomu SASAO, Munehiro MATSUURA
    Article type: PAPER
    Subject area: Application
    2013 Volume E96.D Issue 8 p. 1667-1675
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper presents a virus scanning engine using two-stage matching. In the first stage, a binary CAM emulator quickly detects a part of the virus pattern, while in the second stage, the MPU detects the full length of the virus pattern. The binary CAM emulator is realized by an index generation unit (IGU) based on row-shift decomposition. The proposed system uses two off-chip SRAMs and a small FPGA; thus, the cost and power consumption are lower than those of a TCAM-based system. The system loaded 1,290,617 ClamAV virus patterns. In terms of area and throughput, this system outperforms existing two-stage matching systems using FPGAs.
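    The sketch below is only a generic software illustration of the two-stage idea described in this abstract (a fast first-stage hit test on a fixed-length pattern fragment, followed by full-pattern verification); it does not reproduce the paper's IGU, row-shift decomposition, or hardware design, and the pattern set and prefix length are hypothetical.

```python
# Generic two-stage matcher: stage 1 indexes a fixed-length prefix of every
# pattern for a quick hit test; stage 2 verifies the full pattern only on hits.
PREFIX_LEN = 8  # illustrative choice, not the paper's parameter

def build_index(patterns):
    """Map each pattern's first PREFIX_LEN bytes to the full patterns sharing it."""
    index = {}
    for p in patterns:
        index.setdefault(p[:PREFIX_LEN], []).append(p)
    return index

def scan(data, index):
    """Return (offset, pattern) for every full-pattern match in data."""
    hits = []
    for i in range(len(data) - PREFIX_LEN + 1):
        candidates = index.get(data[i:i + PREFIX_LEN])   # stage 1: prefix hit?
        if not candidates:
            continue
        for p in candidates:                             # stage 2: full verification
            if data.startswith(p, i):
                hits.append((i, p))
    return hits

if __name__ == "__main__":
    patterns = [b"EICAR-STANDARD-ANTIVIRUS-TEST", b"malicious_payload_example"]
    stream = b"...header...EICAR-STANDARD-ANTIVIRUS-TEST...trailer..."
    print(scan(stream, build_index(patterns)))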
  • Keisuke DOHI, Kazuhiro NEGI, Yuichiro SHIBATA, Kiyoshi OGURI
    Article type: PAPER
    Subject area: Application
    2013 Volume E96.D Issue 8 p. 1676-1684
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    We present an external-memory-free, deeply pipelined FPGA implementation of human detection that includes HOG feature extraction and AdaBoost classification. To construct our design on a compact FPGA, we introduce some simplifications of the algorithm and make aggressive use of stream-oriented architectures. We present comparison results between our simplified fixed-point scheme and the original floating-point scheme in terms of quality of results, and the results suggest that the negative impact of the simplified scheme on the hardware implementation is limited. We empirically show that our system is able to detect humans in 640×480 VGA images at up to 112 FPS on a Xilinx Virtex-5 XC5VLX50 FPGA.
Regular Section
  • Nhat-Phuong TRAN, Myungho LEE, Sugwon HONG, Seung-Jae LEE
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 8 p. 1685-1695
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Data encryption and decryption are common operations in network-based application programs that must offer security. In order to keep pace with the high data input rate of network-based applications such as multimedia data streaming, real-time processing of data encryption/decryption is crucial. In this paper, we propose a new parallelization approach to improve the throughput performance of the de-facto standard data encryption and decryption algorithm, AES-CTR (counter mode of AES). The new approach extends the size of the block encrypted at one time across unit block boundaries, thus effectively encrypting multiple unit blocks at the same time. This reduces the associated parallelization overheads, such as the number of procedure calls, scheduling, and synchronization, compared with previous approaches, and therefore leads to significant throughput improvements on a computing platform with a general-purpose multi-core processor and a Graphics Processing Unit (GPU).
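    A minimal sketch (not the authors' implementation) of why AES-CTR parallelizes across block boundaries, assuming the Python `cryptography` package: each keystream block depends only on the counter value, so disjoint multi-block chunks can be encrypted independently and concatenated. Thread workers here merely stand in for the paper's multi-core/GPU workers, and the chunk size is an arbitrary illustrative choice.

```python
import os
from concurrent.futures import ThreadPoolExecutor
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_encrypt_chunk(key, nonce, start_block, chunk):
    """Encrypt one chunk whose first 16-byte block has counter offset start_block."""
    # CTR keystream block i is AES_k(counter + i); seed the counter for this chunk.
    initial = ((int.from_bytes(nonce, "big") + start_block) % (1 << 128)).to_bytes(16, "big")
    enc = Cipher(algorithms.AES(key), modes.CTR(initial)).encryptor()
    return enc.update(chunk)

def parallel_ctr_encrypt(key, nonce, data, chunk_blocks=4096, workers=4):
    """Split data into multi-block chunks and encrypt the chunks concurrently."""
    chunk_bytes = chunk_blocks * 16
    chunks = [data[i:i + chunk_bytes] for i in range(0, len(data), chunk_bytes)]
    starts = [i * chunk_blocks for i in range(len(chunks))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(ctr_encrypt_chunk,
                              [key] * len(chunks), [nonce] * len(chunks), starts, chunks))
    return b"".join(parts)

if __name__ == "__main__":
    key, nonce = os.urandom(32), os.urandom(16)
    plaintext = os.urandom(1 << 20)                     # 1 MiB of test data
    ciphertext = parallel_ctr_encrypt(key, nonce, plaintext)
    # Sanity check against a single sequential CTR pass over the whole buffer.
    reference = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(plaintext)
    assert ciphertext == reference
```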
  • Shinji KIKUCHI, Satoshi TSUCHIYA, Kunihiko HIRAISHI
    Article type: PAPER
    Subject area: Software System
    2013 Volume E96.D Issue 8 p. 1696-1706
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Managing the configurations of complex systems consisting of various components requires the combined efforts of multiple domain experts. These experts have extensive knowledge about different components in the system they need to manage but little understanding of the issues outside their individual areas of expertise. As a result, the configuration constraints, changes, and procedures specified by those involved in the management of a complex system are often interrelated without this being noticed, and their integration into a coherent configuration procedure represents a major challenge. The configuration-procedure synthesis method introduced in this paper addresses this challenge using a combination of formal specification and model finding techniques. With this method, system-management knowledge provided by domain experts is expressed as first-order logic formulas in the Alloy specification language and combined with system-configuration information into a single specification. We then employ the Alloy Analyzer to find a system model that satisfies all the formulas in this specification. The model obtained corresponds to a procedure for system configuration that satisfies all expert-specified constraints. In order to reduce the resources needed for procedure synthesis, we shorten the procedures to be synthesized by defining and using intermediate goal states to divide operation procedures into shorter steps. Finally, we evaluate our method through a case study on a procedure to consolidate virtual machines.
  • Minh-Quoc NGHIEM, Giovanni YOKO KRISTIANTO, Akiko AIZAWA
    Article type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2013 Volume E96.D Issue 8 p. 1707-1715
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper explores the problem of semantic enrichment of mathematical expressions. We formulate this task as the translation of mathematical expressions from presentation markup to content markup. We use MathML, an application of XML, to describe both the structure and the content of mathematical notations. We apply a method based on statistical machine translation to extract translation rules automatically. This approach contrasts with previous research, which tends to rely on manually encoded rules. We also introduce segmentation rules used to segment mathematical expressions. Combining segmentation rules and translation rules strengthens the translation system and achieves significant improvements over a prior rule-based system.
  • Jeongseok SEO, Sungdeok CHA, Bin ZHU, Doohwan BAE
    Article type: PAPER
    Subject area: Information Network
    2013 Volume E96.D Issue 8 p. 1716-1726
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Anomaly-based worm detection is a complement to existing signature-based worm detectors. It detects unknown worms and fills the gap between when a worm is propagated and when a signature is generated and downloaded to a signature-based worm detector. A major obstacle to its deployment on personal computers (PCs) is its high false-positive rate, since a typical PC user lacks the skill to handle the exceptions flagged by a detector without much knowledge of computers. In this paper, we exploit the characteristic of personal computers that the user interacts with many running programs, together with features combining various network characteristics. The model of a program's network behaviors is conditioned on the human interactions with the program. Our scheme automates the detection of unknown worms with dramatically reduced false-positive alarms while not compromising low false negatives, as demonstrated by our experimental results from an implementation on Windows-based PCs detecting real-world worms.
  • WonHee LEE, Samuel Sangkon LEE, Dong-Un AN
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 8 p. 1727-1733
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Clustering methods are divided into hierarchical clustering, partitioning clustering, and others. K-Means is a partitioning clustering method. We improve the performance of K-Means by selecting the initial cluster centers through a calculation rather than at random. This method maximizes the distance among the initial cluster centers, so the centers are distributed evenly and the results are more accurate than with randomly selected initial centers. The initial-center calculation is time-consuming, but it can reduce the total clustering time by minimizing allocation and recalculation. Compared with the standard algorithm, the F-Measure is 5.1% more accurate.
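    A hedged sketch of the idea in this abstract: seed K-Means with centers chosen by a calculation that maximizes their mutual distances (a greedy farthest-point rule) instead of random selection. The seeding rule and the plain Lloyd iterations below are common variants and may differ in detail from the authors' method.

```python
import numpy as np

def farthest_point_centers(X, k, rng=None):
    """Greedily pick k rows of X that are maximally spread out (max-min distance)."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]                 # arbitrary first center
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None, :] - np.asarray(centers)[None, :, :], axis=2)
        centers.append(X[np.argmax(d.min(axis=1))])     # farthest point from chosen centers
    return np.asarray(centers)

def kmeans(X, k, iters=100, rng=None):
    """Standard Lloyd iterations seeded with distance-maximizing centers."""
    centers = farthest_point_centers(X, k, rng)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break                                       # assignments stable: converged
        centers = new
    return labels, centers
```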
  • Joong Hyuk CHANG, Nam Hun PARK
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 8 p. 1734-1744
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    The mining problem over data streams has recently been attracting considerable attention thanks to the usefulness of data mining in various application fields of information science, and sequence data streams are common in daily life. Therefore, a study on mining sequential patterns over sequence data streams can give valuable results for wide use in various application fields. This paper proposes a new framework for mining novel interesting sequential patterns over a sequence data stream and a mining method based on the framework. Assuming that a sequence with small time-intervals between its data elements is more valuable than others with large time-intervals, the novel interesting sequential pattern is defined and found by analyzing the time-intervals of the data elements in a sequence as well as their order. The proposed framework is capable of obtaining more interesting sequential patterns over sequence data streams whose data elements are highly correlated in terms of generation time.
  • Chen ZHANG, ShiXiong XIA, Bing LIU, Lei ZHANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 8 p. 1745-1753
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Maximum margin clustering (MMC) is a recently proposed clustering method that extends the large-margin computation of the support vector machine (SVM) to unsupervised learning. Traditionally, MMC is formulated as a nonconvex integer programming problem, which makes it difficult to solve. Several methods rely on reformulating and relaxing the nonconvex optimization problem as a semidefinite program (SDP) or a second-order cone program (SOCP), which are computationally expensive and have difficulty handling large-scale data sets. In linear cases, by making use of the constrained concave-convex procedure (CCCP) and the cutting plane algorithm, several MMC methods take linear time to converge to a local optimum, but in nonlinear cases the time complexity is still high. Since the extreme learning machine (ELM) has achieved generalization performance similar to that of traditional SVM and LS-SVM at much faster learning speed, we propose an extreme maximum margin clustering (EMMC) algorithm based on ELM. It performs well in nonlinear cases. Moreover, because of its random feature mappings, the kernel parameters of EMMC need not be tuned. Experimental results on several real-world data sets show that EMMC performs better than traditional MMC methods, especially in handling large-scale data sets.
  • Sila CHUNWIJITRA, Arjulie JOHN BERENA, Hitoshi OKADA, Haruki UENO
    Article type: PAPER
    Subject area: Educational Technology
    2013 Volume E96.D Issue 8 p. 1754-1765
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    In this paper, we propose a new online authoring tool for e-Learning systems to meet the social demand for internationalized higher education. The tool includes two functions - an authoring function for creating video-based content by the instructor, and a viewing function for self-learning by students. In the authoring function, an instructor places key markings on the raw video stream to produce virtual video clips related to each slide. With key markings, parts of the raw video stream can be easily skipped. The virtual video clips form an aggregated video stream that is synchronized with the slide presentation to create learning content. The synchronized content can be previewed immediately on the client computer prior to saving it on the server. The aggregated video becomes the baseline for the viewing function. With the aggregated video stream methodology, content editing requires only changing the key markings, without editing the raw video file. Furthermore, video and pointer synchronization is proposed to enhance the students' learning efficiency. In the viewing function, video quality control and an adaptive video buffering method are implemented to support usage in various network environments. The total system is optimized to support cross-platform use and cloud computing, removing the limitations of various usage environments. The proposed method provides simple authoring processes with a clear user interface design for instructors, and helps students utilize learning content effectively and efficiently. In the user acceptance evaluation, most respondents agreed on the usefulness, ease of use, and user satisfaction of the proposed system. The overall results show that the proposed authoring and viewing tools have high user acceptance as tools for e-Learning.
  • Hiroto SAIGO, Hisashi KASHIMA, Koji TSUDA
    Article type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 8 p. 1766-1773
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Apriori-based mining algorithms enumerate frequent patterns efficiently, but the resulting large number of patterns makes it difficult to directly apply subsequent learning tasks. Recently, efficient iterative methods have been proposed for mining discriminative patterns for classification and regression. These methods iteratively execute a discriminative pattern mining algorithm and update example weights to emphasize examples that received large errors in the previous iteration. In this paper, we study a family of loss functions that induces sparsity on the example weights. Most of the resulting example weights become zero, so we can eliminate those examples from discriminative pattern mining, leading to a significant decrease in search space and time. In computational experiments, we compare and evaluate various loss functions in terms of the amount of sparsity induced and the resulting speed-up.
  • Hilman PARDEDE, Koji IWANO, Koichi SHINODA
    Article type: PAPER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 8 p. 1774-1782
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Spectral subtraction (SS) is an additive noise removal method derived within an extensive statistical framework. In spectral subtraction, it is assumed that the speech and noise spectra follow Gaussian distributions and are independent of each other; hence, noisy speech also follows a Gaussian distribution. The spectral subtraction formula is obtained by maximizing the likelihood of the noisy speech distribution with respect to its variance. However, it is well known that noisy speech observed in real situations often follows a heavy-tailed distribution rather than a Gaussian distribution. In this paper, we introduce the q-Gaussian distribution from non-extensive statistics to represent the distribution of noisy speech and derive a new spectral subtraction method based on it. We found that the q-Gaussian distribution fits the noisy speech distribution better than the Gaussian distribution does. Our speech recognition experiments using the Aurora-2 database showed that the proposed method, q-spectral subtraction (q-SS), outperformed the conventional SS method.
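    For context, the sketch below implements only the conventional power spectral subtraction baseline that this paper extends; the paper's q-SS rule is not reproduced. It assumes the noise is stationary and that the first few frames of the input are noise-only; frame length, hop, and spectral floor are illustrative values.

```python
import numpy as np

def spectral_subtraction(noisy, noise_frames=10, frame=256, hop=128, floor=0.01):
    """noisy: 1-D signal array; the first `noise_frames` frames are assumed noise-only."""
    win = np.hanning(frame)
    starts = range(0, len(noisy) - frame, hop)
    frames = np.array([noisy[i:i + frame] * win for i in starts])
    spectra = np.fft.rfft(frames, axis=1)
    power = np.abs(spectra) ** 2
    noise_power = power[:noise_frames].mean(axis=0)            # noise power estimate
    # Subtract the noise estimate and apply a spectral floor to avoid negative power.
    clean_power = np.maximum(power - noise_power, floor * power)
    # Re-synthesize with the noisy phase (standard practice in SS), then overlap-add.
    clean = np.fft.irfft(np.sqrt(clean_power) * np.exp(1j * np.angle(spectra)), axis=1)
    out = np.zeros(len(noisy))
    for k, i in enumerate(starts):
        out[i:i + frame] += clean[k] * win
    return out
```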
  • Hongbo ZHANG, Shaozi LI, Songzhi SU, Shu-Yuan CHEN
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 8 p. 1783-1792
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Many successful methods for recognizing human actions are based on spatio-temporal interest points (STIPs). Given a test video sequence, in a matching-based method using a voting mechanism, each test STIP casts a vote for each action class based on its mutual information with respect to that class, which is measured in terms of class likelihood probability. Therefore, two issues must be addressed to improve the accuracy of action recognition. First, effective STIPs in the training set must be selected as references for accurately estimating probability. Second, discriminative STIPs in the test set must be selected for voting. This work uses ε-nearest neighbors as effective STIPs for estimating the class probability and uses a variance filter for selecting discriminative STIPs. Experimental results verify that the proposed method is more accurate than existing action recognition methods.
  • Dubok PARK, David K. HAN, Changwon JEON, Hanseok KO
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 8 p. 1793-1799
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Images captured under foggy conditions often exhibit poor contrast and color. This is primarily due to the airlight, which degrades image quality exponentially with the fog depth between the scene and the camera. In this paper, we restore fog-degraded images by first estimating depth using a physical model characterizing the RGB channels in a single monocular image. The fog effects are then removed by subtracting the estimated irradiance, which is empirically related to the obtained scene depth information, from the total irradiance received by the sensor. Effective restoration of the color and contrast of images taken under foggy conditions is demonstrated. In the experiments, we validate the effectiveness of our method in comparison with a conventional method.
  • Pengyi HAO, Sei-ichiro KAMATA
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 8 p. 1800-1810
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    The task of retrieving videos containing a desired person from a dataset using only the content of faces, without any help from textual information, has many interesting applications such as video surveillance, social networks, and video mining. However, traditional face matching against a huge number of detected faces leads to an unacceptable response time and may also reduce accuracy due to the large variations in facial expressions, poses, lighting, etc. Therefore, in this paper we propose a novel method to generate discriminative “signatures” for efficiently retrieving the videos containing the same person as a query. In this research, the signature is defined as a compact, discriminative, reduced-dimensionality representation generated from a set of high-dimensional feature vectors of an individual. The desired videos are retrieved based on the similarities between the signature of the query and those of individuals in the database. In particular, we make the following contributions. First, we give an algorithm for two-directional linear discriminant analysis with maximum correntropy criterion (2DLDA-MCC) as an extension of our recently proposed maximum correntropy criterion based linear discriminant analysis (LDA-MCC); both algorithms are robust to outliers and noise. Second, we present an approach for transforming a set of exemplars into a fixed-length signature using LDA-MCC and 2DLDA-MCC, resulting in two kinds of signatures called the 1D signature and the 2D signature. Finally, a novel video retrieval scheme based on the signatures is given, which has low storage requirements and achieves fast search. Evaluations on a large dataset of videos show reliable measurement of similarities when the proposed signatures are used to represent the identities generated from videos. Experimental results also demonstrate that the proposed video retrieval scheme has the potential to substantially reduce the response time and slightly increase the mean average precision of retrieval.
  • Thanh Duc NGO, Hung Thanh VU, Duy-Dinh LE, Shin'ichi SATOH
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 8 p. 1811-1825
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Face retrieval in news video has been identified as a challenging task due to the huge variations in the visual appearance of the human face. Although several approaches have been proposed to deal with this problem, their extremely high computational cost limits their scalability to large-scale video datasets that may contain millions of faces of hundreds of characters. In this paper, we introduce approaches for face retrieval that are scalable to such datasets while maintaining competitive performance with state-of-the-art approaches. To utilize the variability of face appearances in video, we use a set of face images, called a face-track, to represent the appearance of a character in a video shot. Our first proposal is an approach for extracting face-tracks. We use a point tracker to explore the connections between detected faces belonging to the same character and then group them into one face-track. We present techniques to make the approach robust against common problems caused by flash lights, partial occlusions, and scattered appearances of characters in news videos. In the second proposal, we introduce an efficient approach to matching face-tracks for retrieval. Instead of using all the faces in the face-tracks to compute their similarity, our approach obtains a representative face for each face-track. The representative face is computed from faces that are sampled from the original face-track. As a result, we significantly reduce the computational cost of face-track matching while taking into account the variability of faces in face-tracks to achieve high matching accuracy. Experiments are conducted on two face-track datasets extracted from real-world news videos, at scales that have never been considered in the literature. One dataset contains 1,497 face-tracks of 41 characters extracted from 370 hours of TRECVID videos. The other dataset provides 5,567 face-tracks of 111 characters observed in a television news program (NHK News 7) over 11 years. We make both datasets publicly accessible to the research community. The experimental results show that our proposed approaches achieve a remarkable balance between accuracy and efficiency.
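    The sketch below only illustrates the representative-face idea stated in the abstract (sample a few faces per track, reduce them to one representative vector, and match tracks by the similarity of representatives). The `extract_descriptor` function is a hypothetical placeholder for any face feature extractor; the paper's actual sampling and representative computation may differ.

```python
import numpy as np

def extract_descriptor(face_image):
    # Placeholder: in practice this would be a real face descriptor extractor.
    return np.asarray(face_image, dtype=float).ravel()

def representative(face_track, n_samples=5, rng=None):
    """Average the descriptors of a few sampled faces into one unit-length vector."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(face_track), size=min(n_samples, len(face_track)), replace=False)
    descs = np.stack([extract_descriptor(face_track[i]) for i in idx])
    rep = descs.mean(axis=0)
    return rep / (np.linalg.norm(rep) + 1e-12)

def rank_tracks(query_track, database_tracks):
    """Rank database face-tracks by cosine similarity of their representatives."""
    q = representative(query_track)
    sims = [float(representative(t) @ q) for t in database_tracks]
    return np.argsort(sims)[::-1], sims
```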
  • Deqian FU, Seong Tae JHANG
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 8 p. 1826-1835
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    To alleviate drift, a problem that reduces the abilities of almost all online visual trackers, a robust visual tracker (called the CCMM tracker) is proposed with a coupled classifier based on multiple representative appearance models. The coupled classifier consists of root and head classifiers based on local sparse representation. The two classifiers collaborate to fulfil the tracking task within a Bayesian tracking framework and to update their templates with a novel mechanism that tries to guarantee that updates proceed in the “right” direction. Consequently, the tracker is more robust against interference. Meanwhile, the multiple representative appearance models maintain features of the different submanifolds of the target appearance that the target exhibited previously. The multiple models cooperatively support the coupled classifier in recognizing the target in challenging cases (such as persistent disturbance, vast changes of appearance, and recovery from occlusion) with an effective strategy. The proposed tracker, by explicit inference, can reduce drift and handle frequent and drastic appearance variation of the target against cluttered backgrounds, as demonstrated by extensive experiments.
  • Hua Fei YIN, Chang Wen ZHENG
    Article type: PAPER
    Subject area: Computer Graphics
    2013 Volume E96.D Issue 8 p. 1836-1844
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    A procedural terrain generation method is presented in this paper. It uses a user-drawn sketch map, a raster image in which lines and polygons painted in different colors represent sketches of different terrain features, as input to control the placement of terrain features. Simple parameters that can be easily understood and adjusted by users control the generation process. To generate terrains fully automatically, a mechanism that automatically generates sketches is also put forward. The method is implemented on a PC, and experiments show that terrains are generated efficiently. This method provides users a controllable way to generate terrains.
  • Yanling LI, Qingwei ZHAO, Yonghong YAN
    Article type: PAPER
    Subject area: Natural Language Processing
    2013 Volume E96.D Issue 8 p. 1845-1852
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    In spoken language understanding (SLU), the semantic concepts in an utterance are obtained by fuzzy matching methods to cope with problems such as word variations induced by automatic speech recognition (ASR) or key information fields omitted by users. A two-stage method is proposed: first, we adopt conditional random fields (CRF) to build probabilistic models that segment and label entity names in an input sentence. Second, fuzzy matching based on a similarity function is conducted between the named entities labeled by the CRF model and the reference characters of a dictionary. The experiments compare the performance in terms of accuracy and processing speed. Among the four similarity measures examined, Dice similarity and cosine similarity based on TF scores achieve the best accuracy, with F1-measures equal to or greater than 93%. In particular, the latter improves by 8.8% and 9% over q-gram and improved edit distance, respectively, two conventional methods for string fuzzy matching.
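    A minimal sketch of the string-similarity stage only: the Dice coefficient over character bigrams, one of the measures compared in this paper. The tokenization, threshold, and the TF-weighted cosine variant are not taken from the paper; the example strings are hypothetical.

```python
def bigrams(s):
    """Character bigrams of a string."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def dice_similarity(a, b):
    """Dice coefficient over character bigrams (1.0 means identical bigram multisets)."""
    ga, gb = bigrams(a), bigrams(b)
    if not ga or not gb:
        return 1.0 if a == b else 0.0
    overlap = sum(min(ga.count(g), gb.count(g)) for g in set(ga))
    return 2.0 * overlap / (len(ga) + len(gb))

def fuzzy_match(entity, dictionary, threshold=0.8):
    """Rank dictionary entries whose similarity to a labeled entity passes the threshold."""
    scored = [(dice_similarity(entity, ref), ref) for ref in dictionary]
    return sorted((pair for pair in scored if pair[0] >= threshold), reverse=True)

# Example: a slightly garbled ASR output still matches its dictionary entry.
print(fuzzy_match("internation airport", ["international airport", "train station"]))
```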
  • Degen HUANG, Shanshan WANG, Fuji REN
    Article type: PAPER
    Subject area: Natural Language Processing
    2013 Volume E96.D Issue 8 p. 1853-1861
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Comparable corpora are valuable resources for many NLP applications, and extensive research has been done on information mining based on comparable corpora in recent years. Since there are not enough large-scale publicly available comparable corpora at present, this paper presents a bi-directional CLIR-based method for creating comparable corpora from two independent news collections in different languages. The original Chinese and English document collections are crawled from XinHuaNet and formatted in a consistent manner. For each document from the two collections, the best query keywords are extracted to represent the essential content of the document, and the keywords are then translated into the language of the other collection. The translated queries are run against the collection in that language to pick up candidate documents, and candidates are aligned based on their publication dates and similarity scores. Results show that our approach significantly outperforms previous approaches to the construction of Chinese-English comparable corpora.
  • Toru HIRAOKA, Kiichi URAHAMA
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 8 p. 1862-1866
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    We propose a non-photorealistic rendering method for generating moire-picture-like color images from color photographs. The proposed method is performed in two steps. First, images with a staircasing effect are generated by a bilateral filter. Second, moire patterns are generated with an improved bilateral filter called an anti-bilateral filter. The characteristic of the anti-bilateral filter is to emphasize gradual boundaries.
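    The sketch below covers only the first step named in this abstract: a standard grayscale bilateral filter, whose repeated application produces the staircasing (cartoon-like) effect. The anti-bilateral filter that generates the moire patterns is the authors' contribution and is not reproduced; the parameter values are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """img: 2-D array with intensities in [0, 1]; returns the filtered image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Precompute the spatial (domain) Gaussian weights once.
    ax = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights fall off with intensity difference from the center pixel.
            range_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```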
  • Jung Hun PARK, Soohee HAN, Bokyu KWON
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 8 p. 1867-1870
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper concerns the problem of on-line model parameter estimation for multiple time-delay systems. In order to estimate unknown model parameters from measured state variables, we propose two schemes using Lyapunov's direct method, called the parallel and series-parallel model estimators. It is shown through a numerical example that the proposed parallel and series-parallel model estimators can be effective when sufficiently rich inputs are applied.
  • Hyunha NAM, Hirotaka HACHIYA, Masashi SUGIYAMA
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 8 p. 1871-1874
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Multi-label classification allows a sample to belong to multiple classes simultaneously, which is often the case in real-world applications such as text categorization and image annotation. In multi-label scenarios, taking into account correlations among multiple labels can boost the classification accuracy. However, this makes classifier training more challenging because handling multiple labels induces a high-dimensional optimization problem. In this paper, we propose a scalable multi-label method based on the least-squares probabilistic classifier. Through experiments, we show the usefulness of our proposed method.
  • Yong YU, Jianbing NI, Ying SUN
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 8 p. 1875-1877
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    Reprogramming for wireless sensor networks is essential for uploading new code or altering the functionality of existing code. To overcome the weakness of the centralized approach of traditional solutions, He et al. proposed the notion of distributed reprogramming, in which multiple authorized network users are able to reprogram sensor nodes without involving the base station. They also gave a novel distributed reprogramming protocol called SDRP using identity-based signatures, and provided a comprehensive security analysis of their protocol. In this letter, unfortunately, we demonstrate that SDRP is insecure, as the protocol fails to satisfy the authenticity and integrity of code images, the most important security requirement of a secure reprogramming protocol.
  • Tsuyoshi SAWAGASHIRA, Tatsuro HAYASHI, Takeshi HARA, Akitoshi KATSUMAT ...
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 8 p. 1878-1881
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    The purpose of this study is to develop an automated scheme for detecting carotid artery calcification (CAC) on dental panoramic radiographs (DPRs). CAC is one of the indices for predicting the risk of arteriosclerosis. First, regions of interest (ROIs) that include the carotid arteries are determined on the basis of inflection points of the mandibular contour. Initial CAC candidates are then detected using a grayscale top-hat filter and simple grayscale thresholding. Finally, a rule-based approach and a support vector machine are applied, using features such as area, location, and circularity, to reduce the number of false-positive (FP) findings. A hundred DPRs were used to evaluate the proposed scheme. The sensitivity for the detection of CACs was 90% with 4.3 FPs per image (80% with 1.9 FPs). Experiments show that our computer-aided detection scheme may be useful for detecting CACs.
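    A hedged sketch of the candidate-detection step only, a grayscale white top-hat filter followed by simple thresholding, as named in the abstract. The structuring-element size and threshold are illustrative, not the authors' values, and the ROI extraction and SVM-based false-positive reduction are omitted.

```python
import numpy as np
from scipy import ndimage

def detect_cac_candidates(roi, selem_size=9, threshold=30.0):
    """Label small bright regions in a grayscale ROI as calcification candidates."""
    roi = np.asarray(roi, dtype=float)
    # White top-hat: image minus its grayscale opening; keeps small bright structures.
    tophat = ndimage.white_tophat(roi, size=selem_size)
    mask = tophat > threshold                 # simple grayscale thresholding
    labels, n_candidates = ndimage.label(mask)  # connected components = candidates
    return labels, n_candidates
```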
  • Joji WATANABE, Tadaaki HOSAKA, Takayuki HAMAMOTO
    Article type: LETTER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 8 p. 1882-1885
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    For source camera identification, we propose a method to reconstruct the sensor pattern noise map from a size-reduced query image by minimizing an objective function derived from the observation model. Our method can be applied to multiple queries, and can thus be further improved. Experiments demonstrate the superiority of the proposed method over conventional interpolation-based magnification algorithms.
  • Jialiang PENG, Qiong LI, Ahmed A. ABD EL-LATIF, Ning WANG, Xiamu NIU
    Article type: LETTER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 8 p. 1886-1889
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    In this paper, a new finger vein recognition method based on Gabor wavelets and Local Binary Patterns (GLBP) is proposed. In the new scheme, the Gabor wavelet magnitude and the Local Binary Pattern operator are combined, so the new feature vector has excellent stability. We introduce Block-based Linear Discriminant Analysis (BLDA) to reduce the dimensionality of the GLBP feature vector and enhance its discriminability at the same time. Experimental results show that the proposed approach performs excellently compared to other competitive approaches in the current literature.
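    A minimal sketch of the LBP half of the GLBP feature: a basic 8-neighbour local binary pattern computed on a 2-D array (e.g. a Gabor magnitude image) and summarized as a histogram. The Gabor filter bank, block division, and BLDA step from the paper are not reproduced.

```python
import numpy as np

def local_binary_pattern(img):
    """8-neighbour LBP code map of a 2-D array (border pixels are cropped)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                                    # center pixels
    # Eight neighbours, ordered clockwise starting at the top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (neighbour >= c).astype(np.int32) << bit   # set bit if neighbour >= center
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, usable as a block-level feature vector."""
    hist, _ = np.histogram(local_binary_pattern(img), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```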
  • Moojae LEE, Jung-Ju CHOI, Youngcheul WEE
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 8 p. 1890-1893
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    This paper presents a modified orthogonal fractal super-resolution (OFSR) method to improve the visual quality of an image along sharp edges. Although the OFSR method constructs a high-quality high-resolution image from a low-resolution counterpart, ringing artifacts are observed along sharp edges, which make the visual quality relatively low with respect to the numerical quality. These artifacts are mainly caused by unnecessarily exaggerated pixel contrast along sharp edges within a range block. We restrict each contracted pixel value in a range block to a value between the minimum and maximum of its domain block's pixel values. We also extend the domain block of the contraction function and find a better domain block using the range block mean. At the final step of the iteration, we adjust each pixel in the range block so that the range block mean and the corresponding pixel value of the low-resolution image are equal. According to our experimental results, the proposed method improves the visual quality along sharp edges and shows higher numerical quality than the OFSR method.
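    A hedged sketch of the two pixel-level adjustments described in this abstract: clipping contracted range-block pixels to the intensity span of their domain block, and shifting a range block so its mean matches the corresponding low-resolution pixel value. The contraction function itself and the domain-block search are not reproduced.

```python
import numpy as np

def clamp_range_block(contracted_block, domain_block):
    """Clip contracted pixel values to [min, max] of the domain block, which
    suppresses the exaggerated contrast that causes ringing along sharp edges."""
    lo, hi = domain_block.min(), domain_block.max()
    return np.clip(contracted_block, lo, hi)

def adjust_block_mean(range_block, target_mean):
    """Final-iteration adjustment: shift the block so its mean equals the
    corresponding low-resolution pixel value."""
    return range_block + (target_mean - range_block.mean())
```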
  • Yaming WANG, Jiansheng CHEN, Guangda SU
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 8 p. 1894-1897
    Published: 2013/08/01
    Released: 2013/08/01
    JOURNAL FREE ACCESS
    In this paper, we design a new color space, YUskinVskin, derived from the YUV color space based on how skin color varies with color temperature. Compared with previous work, this color space proves to be the optimal color space for hand segmentation with linear thresholds. We also propose a novel fingertip detection method based on the concomitance between finger and fingernail. The two techniques together improve the performance of hand contour and fingertip extraction in hand gesture recognition.