IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E96.D , Issue 2
26 articles in this issue
Special Section on The Internet Architectures, Protocols, and Applications for Diversified
  • Kenzi WATANABE
    2013 Volume E96.D Issue 2 Pages 175
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Download PDF (70K)
  • Seii SAI, Onur ALTINTAS, John KENNEY, Hideaki TANAKA, Yuji INOUE
    Type: INVITED PAPER
    2013 Volume E96.D Issue 2 Pages 176-183
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Intelligent Transport Systems (ITS), which aim to provide innovative services related to traffic management, road safety, and convenience, have drawn much attention in academia and industry in recent years. Japan is considered an advanced country in ITS development. This paper first gives an overview of the ITS currently operated in Japan, including the Vehicle Information and Communication System (VICS), the Electronic Toll Collection (ETC) system, and the ITS-spot system. It then introduces the trends and directions of future ITS, including the development of driver-assistance road safety systems in Japan and the USA, and the potential use of white space to meet additional ITS needs in the future.
    Download PDF (1360K)
  • Shohei KAMAMURA, Daisaku SHIMAZAKI, Atsushi HIRAMATSU, Hidenori NAKAZA ...
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 184-192
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    This paper proposes an IP fast rerouting method that can be implemented in the OpenFlow framework. While current IP routing is robust, its reactive, global rerouting process leads to long recovery times after a failure. IP fast rerouting, in contrast, achieves milliseconds-order recovery through a proactive, local restoration mechanism. However, IP fast rerouting is rarely implemented in real systems because it requires adding coordinated forwarding functions to commercial hardware. We propose an IP fast rerouting mechanism using OpenFlow, which separates the control function from the hardware implementation; our mechanism requires no extension of current forwarding hardware. Instead, the increased number of backup routes becomes the main overhead, so we also embed a compression mechanism in our scheme. Through computer simulations, we show the effectiveness of our IP fast rerouting in terms of fast restoration and backup-route compression.
    Download PDF (1200K)
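The proactive, local restoration idea described in the abstract above can be illustrated with a minimal sketch (the router names and prefixes below are hypothetical, and this is not the paper's OpenFlow implementation): each node holds a precomputed backup next hop per destination and fails over locally, without waiting for global reconvergence.

```python
# Sketch of proactive local restoration: each destination has a primary
# and a precomputed backup next hop, so a failure is repaired locally.
forwarding_table = {
    # destination prefix: (primary next hop, precomputed backup next hop)
    "10.0.2.0/24": ("r2", "r3"),
    "10.0.3.0/24": ("r3", "r4"),
}
failed_neighbors = set()

def next_hop(dest):
    primary, backup = forwarding_table[dest]
    # Local repair: switch to the backup immediately if the primary is down.
    return backup if primary in failed_neighbors else primary

assert next_hop("10.0.2.0/24") == "r2"
failed_neighbors.add("r2")          # link to r2 fails
assert next_hop("10.0.2.0/24") == "r3"  # repaired without reconvergence
```

The overhead the abstract mentions is visible even here: every destination carries an extra precomputed entry, which is why the authors add a compression mechanism.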
  • Xun SHAO, Go HASEGAWA, Yoshiaki TANIGUCHI, Hirotaka NAKANO
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 193-201
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Multihoming is widely used by Internet service providers (ISPs) to obtain improved performance and reliability when connecting to the Internet. Recently, the use of overlay routing for network application traffic has been increasing rapidly. Overlay routing is known to pose challenges to ISPs as a source of both routing oscillation and cost increases. In this paper, we study the interaction between overlay routing and a multihomed ISP's routing strategy with a Nash game model, and propose a routing strategy that allows the multihomed ISP to alleviate the negative impact of overlay traffic. We prove that, with the proposed routing strategy, the network routing game always converges to a stable state and the ISP can keep its costs relatively low. Numerical simulations show the efficiency and convergence of the proposed strategy, and we discuss the conditions under which the multihomed ISP can achieve minimum cost.
    Download PDF (834K)
  • Jun'ichi SHIMADA, Hitomi TAMURA, Masato UCHIDA, Yuji OIE
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 202-212
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Congestion inherently occurs on the Internet because traffic concentrates on certain nodes or links. This concentration is caused by inefficient use of topological information in existing routing protocols, which leads to an inefficient mapping between traffic demands and network resources. In practice, the minimum-cost (i.e., minimum-hop) route selected by existing routing protocols tends to pass through specific nodes whose topological characteristics make them effective shortcuts, and this concentrates traffic on exactly those nodes. We therefore propose a measure of the distance between two nodes that is suited to reducing traffic concentration on specific nodes. To capture the topological characteristics of the congestion points of networks, we define the node-to-node distance using a generalized norm, the p-norm, of the vector whose elements are the degrees of the route's intermediate nodes. Simulation results show that selecting transmission routes based on the proposed distance minimizes both the maximum Stress Centrality (SC) and the coefficient of variation of SC in some network topologies.
    Download PDF (2319K)
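The p-norm distance described above is compact enough to sketch (the degree values below are invented for illustration): for large p, the norm is dominated by the highest-degree node on the route, so hub-avoiding routes measure as shorter.

```python
def route_length(degrees, p):
    """p-norm of the degrees of a route's intermediate nodes.

    p = 1 simply sums the degrees; as p grows, the measure is dominated
    by the single highest-degree (most congestion-prone) node.
    """
    return sum(d ** p for d in degrees) ** (1.0 / p)

# Two candidate routes with the same hop count but different hub exposure:
via_hub = [3, 50, 3]     # passes through a degree-50 hub
via_edge = [10, 12, 11]  # avoids hubs entirely

# Under a large p, the hub-avoiding route is preferred despite its
# higher total degree.
assert route_length(via_hub, 8) > route_length(via_edge, 8)
```

Choosing p thus trades off total load against avoidance of individual hot spots, which is the tuning knob the paper exploits.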
  • Hiroshi YAMAMOTO, Katsuyuki YAMAZAKI
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 213-225
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    With the widespread availability of high-speed network connections and high-performance mobile/sensor terminals, new interactive services based on real-time contents have become available over the Internet. In these services, end nodes (e.g., smartphones and sensors) dispersed over the Internet generate real-time contents (e.g., live video and sensor data about human activity), which are used to support many kinds of real-world human activities. For such services, we propose a new decentralized content distribution system that can accommodate a large number of content distributions while minimizing the end-to-end streaming delay between a content publisher and its subscribers. To satisfy these requirements, the proposed system employs two distributed resource selection methods. The first, distributed hash table (DHT)-based content management, allows the system to efficiently locate the server managing a content distribution in a completely decentralized manner. The second, location-aware server selection, quickly selects appropriate servers to distribute the streamed contents to all subscribers in real time. We evaluate the proposed resource selection methods through realistic computer simulations and show that the system scales to a large-scale distributed deployment attracting a very large number of users, and achieves real-time content location without degrading the end-to-end streaming delay.
    Download PDF (1288K)
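DHT-based content management of the kind described above is commonly built on consistent hashing; a minimal sketch (the server names are hypothetical, and this is not the paper's protocol) shows how any node can compute which server manages a given content:

```python
import hashlib
from bisect import bisect

def h(key):
    """Hash a string onto a large integer ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

# Hypothetical server ring: each server owns the arc of hash space
# ending at its own position.
servers = ["tokyo-1", "osaka-1", "nagoya-1"]
ring = sorted((h(s), s) for s in servers)

def manager_of(content_id):
    """Locate the server managing a content's distribution.

    Fully decentralized: any node holding the ring can compute this
    without a central directory.
    """
    pos = bisect(ring, (h(content_id),))
    return ring[pos % len(ring)][1]  # wrap around at the top of the ring

# The same content ID always resolves to the same manager.
assert manager_of("live-video-42") == manager_of("live-video-42")
assert manager_of("live-video-42") in servers
```

A real DHT (e.g., Chord-style) distributes the ring itself across nodes; the location-aware server selection layer would then pick among nearby replicas, which this sketch omits.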
  • Ved P. KAFLE, Ruidong LI, Daisuke INOUE, Hiroaki HARAI
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 226-237
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    For flexibility in supporting mobility and multihoming in edge networks, and for scalability of the backbone routing system, the future Internet is expected to be based on the concept of ID/locator split. Heterogeneity Inclusion and Mobility Adaptation through Locator ID Separation (HIMALIS) has been designed as a generic future network architecture based on the ID/locator split concept. It natively supports mobility, multihoming, scalable backbone routing, and heterogeneous protocols in the network layer of the new-generation network or future Internet. However, HIMALIS still lacks security functions to protect itself from attacks, such as impersonation, during the storing, updating, and retrieval of ID/locator mappings. In this paper, we therefore address the design and implementation of security functions for the HIMALIS architecture. We present an integrated security scheme consisting of mapping registration and retrieval security, network access security, communication session security, and mobility security. Under the proposed scheme, hostname-to-ID and locator mapping records can be securely stored and updated in two types of name registries, the domain name registry and the host name registry, while mapping records retrieved securely from these registries are used to secure the network access process, communication sessions, and mobility management functions. The scheme provides comprehensive protection of both control and data packets, as well as the network infrastructure, through an effective combination of asymmetric and symmetric cryptographic functions.
    Download PDF (2238K)
  • Masayoshi SHIMAMURA, Takeshi IKENAGA, Masato TSURU
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 238-248
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    The explosive growth of Internet usage, along with greater diversification of communication technologies and applications, requires the Internet to handle ever greater scale and diversity, calling for more adaptive and flexible schemes for sharing network resources. Especially when many large-scale distributed applications share resources concurrently, the comprehensive and efficient use of network, computation, and storage resources is needed from the viewpoint of information processing performance. A reconsideration of how functions are coordinated and partitioned between networks (providers) and applications (users) has therefore become a recent research topic. In this paper, we first address the need for, and discuss the feasibility of, adaptive network services realized by introducing special processing nodes inside the network. We then present the design and implementation of an advanced relay node platform with which a variety of advanced in-network processing can be easily prototyped and tested on Linux and off-the-shelf PCs. A key feature of the platform is that integration of the kernel and userland spaces makes it easy and quick to develop various kinds of advanced relay processing. Finally, on top of this platform, we implement and test an adaptive packet compression scheme that we previously proposed. The experimental results show the feasibility of both the developed platform and the proposed adaptive packet compression.
    Download PDF (1621K)
  • Masahiro YOSHIDA, Akihiro NAKAO
    Type: PAPER
    2013 Volume E96.D Issue 2 Pages 249-258
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    BitTorrent is one of the most popular P2P file sharing applications worldwide. Each BitTorrent network is called a swarm, and millions of peers may join multiple swarms. However, each swarm contains many unreachable peers (NATed (network address translated), firewalled, or inactive at the time of measurement), so existing techniques can measure only part of the peers in a swarm. In this paper, we propose an improved measurement method for BitTorrent swarms that include many unreachable peers. In essence, NATed and firewalled peers are found by actively advertising our crawlers' addresses so that such peers connect to the crawlers themselves. Evaluation results show that the proposed method increases the number of unique contacted peers by 112% compared with the conventional method, and increases the total volume of downloaded pieces by 66%. We also investigate the sampling bias between the proposed and conventional methods, and find that different measurement methods yield significantly different results.
    Download PDF (696K)
Regular Section
  • Kwangho CHA
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 2 Pages 259-269
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    As the number of nodes in high-performance computing (HPC) systems increases, parallel I/O becomes an important issue. Collective I/O is a specialized form of parallel I/O that provides single-file-based parallel I/O. In most message passing interface (MPI) libraries, collective I/O follows a two-phase I/O scheme in which particular processes, the I/O aggregators, play an important role by performing both the communication and the I/O operations. This approach, however, assumes a single-core architecture. Because modern HPC systems use multi-core computational nodes, the role of the I/O aggregators needs to be re-evaluated. Although many previous studies have focused on improving collective I/O performance, it is difficult to find one that considers multi-core architectures in the assignment of I/O aggregators. In this research, we found that the communication costs in collective I/O differ according to the placement of the I/O aggregators when each node hosts multiple aggregators. We measured performance under two processor affinity rules, and the results demonstrate that the distributed affinity rule, which places the I/O aggregators in different sockets, is appropriate for collective I/O. Because some applications cannot use the distributed affinity rule, we modified the collective I/O scheme to guarantee appropriate placement of the I/O aggregators under the accumulated affinity rule. We examined the performance of the proposed scheme on two Linux cluster systems; the improvements were more clearly evident when the computational nodes of a given cluster had a complicated architecture. Under the accumulated affinity rule, the proposed scheme improved on the original MPI-IO by up to approximately 26.25% for read operations and up to approximately 31.27% for write operations.
    Download PDF (4058K)
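The two affinity rules contrasted above can be sketched as a core-assignment function (a simplified model for illustration, not the paper's MPI-IO implementation): "accumulated" fills one socket first, while "distributed" spreads aggregators across sockets.

```python
def place_aggregators(cores_per_socket, sockets, n_aggregators, rule):
    """Map I/O aggregators to core IDs on one multi-core node.

    'accumulated' packs aggregators onto socket 0 first;
    'distributed' spreads them round-robin across sockets (the rule
    the study found better for collective I/O communication cost).
    Core IDs are numbered socket-by-socket: socket s owns cores
    [s * cores_per_socket, (s+1) * cores_per_socket).
    """
    if rule == "accumulated":
        return list(range(n_aggregators))
    # distributed: aggregator i lands on socket i % sockets
    return [(i % sockets) * cores_per_socket + i // sockets
            for i in range(n_aggregators)]

# A node with 2 sockets of 4 cores each, hosting 4 aggregators:
assert place_aggregators(4, 2, 4, "accumulated") == [0, 1, 2, 3]  # one socket
assert place_aggregators(4, 2, 4, "distributed") == [0, 4, 1, 5]  # both sockets
```

Spreading aggregators across sockets gives each one its own share of memory bandwidth and cache, which is one plausible reading of why the distributed rule measured better.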
  • Guangchun LUO, Hao CHEN, Caihui QU, Yuhai LIU, Ke QIN
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 2 Pages 270-277
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Tree partitioning arises in many parallel and distributed computing applications and storage systems. Some operator scheduling problems need to partition a tree into a number of vertex-disjoint subtrees such that certain constraints are satisfied and certain criteria are optimized. Given a tree T with a nonnegative integer weight on each vertex, two nonnegative integers l and u (l<u), and a positive integer p, we consider the following tree partitioning problems: partition T into the minimum number of subtrees, or into p subtrees, under the condition that the sum of node weights in each subtree is at least l and at most u. To solve the two problems, we provide a fast polynomial-time algorithm comprising a pre-processing method and a bottom-up dynamic programming scheme. Experimental studies show that our algorithm greatly outperforms a prior algorithm presented by Ito et al.
    Download PDF (607K)
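A greedy bottom-up pass conveys the flavor of the problem (this is an illustrative heuristic, not the authors' dynamic-programming algorithm; unlike theirs, it can leave an undersized piece at the root):

```python
# Walk the tree in post-order and cut off a subtree as soon as its
# accumulated weight falls inside the window [l, u].
def partition(tree, weight, root, l, u):
    parts = []

    def walk(v):
        # Own weight plus the weight of children not yet cut off.
        acc = weight[v] + sum(walk(c) for c in tree.get(v, []))
        if l <= acc <= u:
            parts.append(acc)   # cut this subtree off as one part
            return 0
        return acc              # too light: keep merging upward

    leftover = walk(root)
    if leftover:
        parts.append(leftover)  # whatever remains attached to the root
    return parts

tree = {0: [1, 2], 1: [3, 4]}               # adjacency: parent -> children
weight = {0: 3, 1: 1, 2: 3, 3: 2, 4: 2}
assert partition(tree, weight, 0, l=3, u=5) == [5, 3, 3]
```

Every part here lands in [3, 5], but with other weights the root remainder can violate the lower bound l, which is exactly the case the paper's exact algorithm must handle.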
  • Ryota SHIOYA, Naruki KURATA, Takashi TOYOSHIMA, Masahiro GOSHIMA, Shui ...
    Type: PAPER
    Subject area: Computer System
    2013 Volume E96.D Issue 2 Pages 278-288
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Object-oriented languages have recently become common, making register indirect jumps more important than ever. In object-oriented languages, virtual functions are heavily used because they greatly improve programming productivity. Virtual function calls usually consist of register indirect jumps, and consequently, programs written in object-oriented languages contain many of them. Predicting the target of a register indirect jump is more difficult than predicting the direction of a conditional branch. Many predictors have been proposed for register indirect jumps, but they either cannot predict the jump targets with high accuracy or require very complex hardware. We propose a method that resolves jump targets by forwarding execution results. Our proposal dynamically finds the producers of register indirect jumps in virtual function calls; after the producers execute, their results are forwarded to the processor's front-end, where the jump targets can be resolved without prediction. Because it relies on actual execution results rather than prediction, our proposal improves the performance of programs containing unpredictable register indirect jumps. Our evaluation shows an IPC improvement of 5.4% on average and 9.8% at maximum.
    Download PDF (1747K)
  • Kazuhito MATSUDA, Go HASEGAWA, Satoshi KAMEI, Masayuki MURATA
    Type: PAPER
    Subject area: Information Network
    2013 Volume E96.D Issue 2 Pages 289-302
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Overlay routing is an application-level routing mechanism on overlay networks. Previous research has revealed that overlay routing can improve user-perceived performance. However, it may also generate traffic unintended by ISPs, incurring additional monetary cost. Moreover, since ISPs and end users each have their own objectives regarding traffic routing, overlay routing must be operated with both standpoints in mind. In this paper, we propose a method to reduce the inter-ISP transit costs caused by overlay routing from the standpoints of both ISPs and end users. To determine the relationships among ASes, which are required for ISP cost-aware routing, we construct a method that estimates the transit cost of overlay-routed paths from end-to-end network performance values. Utilizing this metric, we propose a novel method that controls overlay routing from the standpoints of both ISPs and end users. Through extensive evaluations using measurements from actual network environments, we confirm the advantage of the proposed method: it reduces the transit cost of overlay routing and controls the overlay routing according to the objectives of both ISPs and end users.
    Download PDF (1464K)
  • Michihiro SHINTANI, Takashi SATO
    Type: PAPER
    Subject area: Dependable Computing
    2013 Volume E96.D Issue 2 Pages 303-313
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    We propose a novel technique for estimating device parameters that is suitable for post-fabrication performance compensation and adaptive delay testing, both effective means of improving the yield and reliability of LSIs. The proposed technique is based on Bayes' theorem: the device parameters of a chip, such as the threshold voltage of its transistors, are estimated from current signatures obtained in a regular IDDQ testing framework. Neither additional circuitry nor additional measurement is required for parameter estimation. Numerical experiments demonstrate that the proposed technique achieves 10-mV accuracy in threshold voltage estimation.
    Download PDF (1348K)
  • Quoc Huy DO, Seiichi MITA, Hossein Tehrani Nik NEJAD, Long HAN
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 2 Pages 314-328
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    We propose a practical local and global path-planning algorithm for an autonomous vehicle or car-like robot in an unknown semi-structured (or unstructured) environment, where obstacles are detected online by the vehicle's sensors. The algorithm uses a probabilistic method based on particle filters to estimate the locations of dynamic obstacles, a support vector machine to provide the critical points, and Bézier curves to smooth the generated path. The generated path travels safely through various static and moving obstacles and satisfies the vehicle's movement constraints. The algorithm is implemented and verified in simulation, and the results demonstrate the effectiveness of the proposed method in complicated scenarios with multiple moving objects.
    Download PDF (3278K)
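Bézier smoothing, one ingredient of the planner above, is easy to illustrate with De Casteljau evaluation (the waypoints below are hypothetical):

```python
def bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1],
    by De Casteljau's algorithm (repeated linear interpolation)."""
    lerp = lambda a, c: tuple((1 - t) * x + t * y for x, y in zip(a, c))
    q0, q1, q2 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    r0, r1 = lerp(q0, q1), lerp(q1, q2)
    return lerp(r0, r1)

# Smooth a sharp corner of a piecewise-linear path: the curve starts and
# ends at the outer waypoints and bends gently past the inner ones.
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
curve = [bezier(*path, t / 10) for t in range(11)]
assert curve[0] == (0.0, 0.0) and curve[-1] == (2.0, 1.0)
```

The curve interpolates only the endpoints; the middle control points pull the path away from the right-angle corner, which is what makes the result drivable under a vehicle's curvature constraints.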
  • Rashmi TURIOR, Danu ONKAEW, Bunyarit UYYANONVARA
    Type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 2 Pages 329-339
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Automatic vessel tortuosity measures are crucial for many applications related to retinal diseases, such as those due to retinopathy of prematurity (ROP), hypertension, stroke, diabetes, and cardiovascular disease. Automatic evaluation and quantification of retinal vascular tortuosity would help in the early detection of such retinopathies and other systemic diseases. In this paper, we propose a novel tortuosity index based on principal component analysis. The index is compared with three existing indices on simulated curves and real retinal images to demonstrate that it is a valid indicator of tortuosity. The proposed index satisfies all the desired tortuosity properties, including invariance to translation, rotation, and scaling, as well as the modulation properties, and it can differentiate structures that visually differ in tortuosity and shape. It can also automatically classify an image as tortuous or non-tortuous. For an optimal set of training parameters, the prediction accuracy on 45 retinal images is as high as 82.94% at the segment level and 86.6% at the image level. The test results are verified against the judgment of two expert ophthalmologists. The proposed index is notable for its inherent simplicity and computational attractiveness, and it produces the expected estimate irrespective of the segmentation approach. Examples and experimental results demonstrate the fitness and effectiveness of the proposed technique for both simulated curves and retinal images.
    Download PDF (1894K)
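One way a PCA-based tortuosity measure can work (an illustrative sketch of the general idea, not the paper's exact index) is to compare the eigenvalues of the covariance of the curve's points: a straight vessel has essentially one principal direction, while a wiggly one spreads variance across both.

```python
import math

def pca_tortuosity(points):
    """Ratio of minor to major eigenvalue of the 2-D covariance of the
    curve's points: ~0 for a straight segment, approaching 1 as the
    curve wiggles. The ratio is invariant to translation, rotation,
    and uniform scaling. (A sketch of the idea, not the paper's index.)"""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam_max, lam_min = (tr + root) / 2, (tr - root) / 2
    return lam_min / lam_max if lam_max else 0.0

straight = [(float(t), 0.0) for t in range(20)]
wavy = [(float(t), math.sin(t)) for t in range(20)]
assert pca_tortuosity(straight) < pca_tortuosity(wavy)
```

Because only the eigenvalue ratio is used, the measure needs no reference axis, which is what gives it the translation/rotation/scale invariance the abstract lists.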
  • Yoshiki KUMAGAI, Gosuke OHASHI
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 2 Pages 340-348
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    There has recently been much research on content-based image retrieval (CBIR) using image features such as color, shape, and texture. In CBIR, feature extraction is important because the retrieval result depends on the image feature. Query-by-sketch image retrieval is one form of CBIR, and it is efficient because users simply draw a sketch to retrieve the desired images; selecting the optimal feature extraction method is therefore crucial. We have developed a query-by-sketch image retrieval method that uses an edge relation histogram (ERH) as a global and local feature intended for binary line images. The histogram is based on the distribution patterns of other line pixels centered on each line pixel, obtained by global and local processing. ERH, a shift- and scale-invariant feature, focuses on the relations among edge pixels, and rotation- and symmetry-invariant features are fairly simple to describe with it, so query-by-sketch retrieval using ERH is unaffected by position, size, rotation, or mirroring. We applied the proposed method to 20,000 images in the Corel Photo Gallery, and experimental results showed it to be an effective means of retrieving images.
    Download PDF (2296K)
  • Kazu MISHIBA, Masaaki IKEHARA, Takeshi YOSHITOME
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 2 Pages 349-356
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    In this paper, we propose an improved seam merging method for content-aware image resizing. The method merges a two-pixel-wide seam element into one new pixel for image reduction, and inserts a new pixel between the two pixels for image enlargement. To preserve important contents and structure, our method uses energy terms associated with importance and structure; it preserves the main structures by using a cartoon version of the original image when calculating the structure energy. In addition, we introduce a new energy term to suppress the distortion generated by excessive reduction or enlargement during iterated merging or insertion. Experimental results demonstrate that the proposed method produces satisfactory results in both image reduction and enlargement.
    Download PDF (3370K)
  • Manabu INUMA, Akira OTSUKA, Hideki IMAI
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 2 Pages 357-364
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    The security of biometric authentication systems against impersonation attack is usually evaluated by the false accept rate (FAR). The FAR is a metric for zero-effort impersonation attacks, in which the attacker attempts to impersonate a user by presenting his own biometric sample to the system. However, when the attacker has some information about the algorithms in the biometric authentication system, he might be able to find a “strange” sample (called a wolf) that shows high similarity to many templates, and attempt to impersonate a user by presenting the wolf. Une, Otsuka, and Imai [22], [23] formulated such a stronger impersonation attack (called a wolf attack), defined a new security metric (the wolf attack probability, WAP), and showed that WAP is far higher than FAR in a fingerprint-minutiae matching algorithm proposed by Ratha et al. [19] and in a finger-vein-pattern matching algorithm proposed by Miura et al. [15]. Previously, we constructed secure matching algorithms based on a feature-dependent threshold approach [8] and showed that, if the score distribution is perfectly estimated for each input feature, the proposed algorithms can lower WAP to a value almost as small as FAR. In this paper, in addition to revisiting the results of our previous work [8], we show that the proposed matching algorithm can keep the false reject rate (FRR) low enough without degrading security if the score distribution is normal for each feature.
    Download PDF (301K)
  • Yaohua WANG, Shuming CHEN, Hu CHEN, Jianghua WAN, Kai ZHANG, Sheng LIU
    Type: LETTER
    Subject area: Computer System
    2013 Volume E96.D Issue 2 Pages 365-369
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    The efficiency of ubiquitous SIMD (single instruction, multiple data) media processors is seriously limited by the bottleneck effect of the scalar kernels in media applications. To solve this problem, we propose a dual-core framework composed of a micro control unit and an instruction buffer. The framework can dynamically decouple the scalar and vector pipelines of the original single-core SIMD architecture into two free-running cores, eliminating the bottleneck by effectively exploiting the parallelism between scalar and vector kernels. The dual-core framework achieves the best attributes of both single-core and dual-core SIMD architectures. Experimental results show an average performance improvement of 33% at an area overhead of 4.26%; moreover, as the SIMD width increases, higher performance gains and lower cost can be expected.
    Download PDF (606K)
  • Young-Sik EOM, Jong Wook KWAK, Seong-Tae JHANG, Chu-Shik JHON
    Type: LETTER
    Subject area: Computer System
    2013 Volume E96.D Issue 2 Pages 370-374
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Chip multiprocessors (CMPs) allow different applications to share the LLC (last-level cache). Since each application has a different cache capacity demand, LLC capacity should be partitioned according to those demands. Existing partitioning algorithms estimate each core's capacity demand by stack processing, which considers only the LRU (least recently used) replacement policy. However, anti-thrashing replacement algorithms such as BIP (bimodal insertion policy) and BIP-Bypass have emerged to overcome the thrashing that LRU suffers when the working set exceeds the available cache size. Since existing stack processing cannot estimate capacity demand under an anti-thrashing replacement policy, partitioning algorithms likewise cannot partition cache space under such a policy. In this letter, we prove that the BIP replacement policy is not amenable to stack processing but that BIP-Bypass is, and we modify stack processing to accommodate BIP-Bypass. In addition, we propose pipelined hardware for the modified stack processing. With this hardware, we can obtain the success function for various capacities under an anti-thrashing replacement policy and assess, in real time, the shared-cache capacity adequate for each core.
    Download PDF (723K)
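The classical LRU stack processing that this letter builds on computes the success function, hits as a function of cache capacity, in a single pass over the access trace; an access at stack distance d hits in every LRU cache of at least d blocks. (This sketch shows only the standard LRU case; the letter's point is that BIP's bimodal insertion breaks this stack property while BIP-Bypass preserves it.)

```python
def success_function(trace, max_ways):
    """One-pass Mattson stack processing for LRU: returns hits(c) for
    every cache capacity c = 1..max_ways (in blocks) simultaneously."""
    stack, hits = [], [0] * (max_ways + 1)
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr) + 1  # stack distance (1 = MRU)
            for c in range(depth, max_ways + 1):
                hits[c] += 1               # hit in any cache >= depth blocks
            stack.remove(addr)
        stack.insert(0, addr)              # move/insert at MRU position
    return hits[1:]

trace = ["a", "b", "a", "c", "b", "a"]
# Hit counts for capacities of 1, 2, and 3 blocks under LRU:
assert success_function(trace, 3) == [0, 1, 3]
```

The inclusion property of LRU (a cache of c blocks always holds a subset of a cache of c+1 blocks) is what makes the single stack sufficient; a policy without that property needs a separate simulation per capacity, which is why supporting anti-thrashing policies is non-trivial.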
  • Xianglei XING, Sidan DU, Hua JIANG
    Type: LETTER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 2 Pages 375-378
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    We extend the Nonparametric Discriminant Analysis (NDA) algorithm to a semi-supervised dimensionality reduction technique, called Semi-supervised Nonparametric Discriminant Analysis (SNDA). SNDA preserves the inherent advantages of NDA, that is, relaxing the Gaussian assumption required for the traditional LDA-based methods. SNDA takes advantage of both the discriminating power provided by the NDA method and the locality-preserving power provided by the manifold learning. Specifically, the labeled data points are used to maximize the separability between different classes and both the labeled and unlabeled data points are used to build a graph incorporating neighborhood information of the data set. Experiments on synthetic as well as real datasets demonstrate the effectiveness of the proposed approach.
    Download PDF (378K)
  • June Sig SUNG, Doo Hwa HONG, Hyun Woo KOO, Nam Soo KIM
    Type: LETTER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 2 Pages 379-382
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    In our previous study, we proposed the waveform interpolation (WI) approach to model the excitation signals for hidden Markov model (HMM)-based speech synthesis. This letter presents several techniques to improve excitation modeling within the WI framework. We propose both the time domain and frequency domain zero padding techniques to reduce the spectral distortion inherent in the synthesized excitation signal. Furthermore, we apply non-negative matrix factorization (NMF) to obtain a low-dimensional representation of the excitation signals. From a number of experiments, including a subjective listening test, the proposed method has been found to enhance the performance of the conventional excitation modeling techniques.
    Download PDF (397K)
  • Xue ZHANG, Anhong WANG, Bing ZENG, Lei LIU, Zhuo LIU
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 2 Pages 383-386
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Numerous examples in image processing have demonstrated that human visual perception can be exploited to improve processing performance. This paper presents another such case, in which visual information guides adaptive block-wise compressive sensing (ABCS) of image data, i.e., a varying CS sampling rate is applied to different blocks according to each block's visual content. To this end, we propose a visual analysis based on the discrete cosine transform (DCT) coefficients of each block reconstructed at the decoder side. The analysis result is sent back to the CS encoder stage by stage via a feedback channel, so that we can decide which blocks should be further CS-sampled and what the additional sampling rate should be. In this way, multiple passes of reconstruction progressively improve quality. Simulation results show that our scheme significantly improves on existing schemes with a fixed sampling rate.
    Download PDF (715K)
  • Xuefeng BAI, Tiejun ZHANG, Chuanjun WANG, Ahmed A. ABD EL-LATIF, Xiamu ...
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 2 Pages 387-391
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    Player detection is an important part of sports video analysis. Over the past few years, several learning-based detection methods using various supervised two-class techniques have been presented. Although they obtain satisfactory results, considerable manual labor is needed to construct the training set. To overcome this drawback, this letter proposes a player detection method based on a one-class SVM (OCSVM) using automatically generated training data. The proposed method is evaluated on several video clips captured from World Cup 2010, and experimental results show that our approach achieves a high detection rate while keeping the cost of constructing the training set low.
    Download PDF (876K)
  • Huiyun JING, Xin HE, Qi HAN, Xiamu NIU
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 2 Pages 392-395
    Published: February 01, 2013
    Released: February 01, 2013
    JOURNALS FREE ACCESS
    BRISK (binary robust invariant scalable keypoints) works dramatically faster than well-established algorithms (SIFT and SURF) while maintaining matching performance. However, BRISK relies on intensity alone; color information in the image is ignored. In view of the importance of color information in vision applications, we propose CBRISK, a novel method that takes color information into account during keypoint detection and description. Instead of the grayscale intensity image, the proposed approach detects keypoints in a photometric-invariant color space. On top of the binary intensity (original BRISK) descriptor, it embeds a binary invariant color representation in the CBRISK descriptor. Experimental results show that CBRISK is more discriminative and robust than BRISK with respect to photometric variation.
    Download PDF (566K)