-
Youji IIGUNI
2008 Volume E91.A Issue 8 Pages
1857
Published: August 01, 2008
Released on J-STAGE: July 01, 2018
JOURNAL
RESTRICTED ACCESS
-
Masao YAMAGISHI, Isao YAMADA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1858-1866
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper presents a closed-form solution to the problem of constructing the best lower bound of a convex function under certain conditions. The function is assumed to be (I) bounded below by -ρ, and (II) differentiable with a Lipschitz continuous derivative of Lipschitz constant L. To construct the lower bound, it is also assumed that we can use the values ρ and L together with the values of the function and its derivative at one specified point. By using the proposed lower bound, we derive a computationally efficient deep monotone approximation operator to the level set of the function. This operator realizes a better approximation than the subgradient projection, which has been utilized as a monotone approximation operator to level sets of differentiable convex functions as well as nonsmooth convex functions. Therefore, by using the proposed operator, we can improve many signal processing algorithms essentially based on the subgradient projection.
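As a point of reference for the comparison drawn above, the following is a minimal numpy sketch of the classical subgradient projection onto a level set {x : f(x) ≤ 0}; the example function, its gradient and the starting point are illustrative assumptions, and the paper's deep monotone approximation operator is not reproduced.

```python
import numpy as np

def subgradient_projection(x, f, grad_f):
    """One step of subgradient projection onto the level set {x : f(x) <= 0}.

    If f(x) <= 0, x already belongs to the set and is returned unchanged;
    otherwise x is projected onto the separating hyperplane defined by the
    gradient (a subgradient) at x.
    """
    fx = f(x)
    if fx <= 0.0:
        return x
    g = grad_f(x)
    return x - (fx / np.dot(g, g)) * g

# Illustrative smooth convex function: f(x) = ||x||^2 - 1 (level set = unit ball).
f = lambda x: np.dot(x, x) - 1.0
grad_f = lambda x: 2.0 * x

x = np.array([2.0, 1.0])
for _ in range(20):                      # repeated steps approach the level set monotonically
    x = subgradient_projection(x, f, grad_f)
print(x, f(x))                           # f(x) approaches 0 from above
```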
-
Masaki MISONO, Isao YAMADA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1867-1874
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper presents a new adaptive minor subspace extraction algorithm based on an idea of Peng and Yi ('07) for approximating the single minor eigenvector of a covariance matrix. By utilizing the idea inductively in the nested orthogonal complement subspaces, the proposed algorithm succeeds in relaxing the numerical sensitivity that has plagued conventional adaptive minor subspace extraction algorithms, for example, the Oja algorithm ('82) and its stabilized version, the O-Oja algorithm ('02). Simulation results demonstrate that the proposed algorithm realizes more stable convergence than the O-Oja algorithm.
-
Motoaki MOURI, Arao FUNASE, Andrzej CICHOCKI, Ichi TAKUMI, Hiroshi YAS ...
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1875-1882
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Anomalous environmental electromagnetic (EM) radiation waves have been reported as portents of earthquakes. The goal of our study is to predict earthquakes by detecting anomalies in EM radiation waves. We have been measuring EM radiation waves in the Extremely Low Frequency (ELF) range all over Japan. However, the recorded data contain signals unrelated to earthquakes. These signals act as noise and confound earthquake prediction efforts. In this paper, we propose an efficient method of eliminating global signals and enhancing local signals using Independent Component Analysis (ICA), and we evaluate its effectiveness.
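As a rough illustration of the separation step described above, the sketch below applies scikit-learn's FastICA to synthetic multi-site recordings, suppresses the component that is mixed most evenly into all sites (a stand-in for a "global" signal) and back-projects the rest; the data, the site count and the selection heuristic are assumptions for illustration, not the authors' procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_sites = 2000, 4
t = np.linspace(0.0, 8.0, n_samples)

# Synthetic data: one "global" sinusoid seen at every site plus local noise.
global_sig = np.sin(2 * np.pi * 1.5 * t)
local = 0.3 * rng.standard_normal((n_sites, n_samples))
X = np.vstack([global_sig + local[i] for i in range(n_sites)]).T   # (samples, sites)

ica = FastICA(n_components=n_sites, random_state=0)
S = ica.fit_transform(X)          # estimated independent components
A = ica.mixing_                   # estimated mixing matrix (sites x components)

# Heuristic: the global component is the one mixed into all sites most evenly.
global_idx = np.argmin(np.std(np.abs(A), axis=0) / (np.mean(np.abs(A), axis=0) + 1e-12))
S_clean = S.copy()
S_clean[:, global_idx] = 0.0      # suppress the global component
X_local = S_clean @ A.T + ica.mean_   # back-project to keep only local signals
print(X_local.shape)
```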
-
Rawid BANCHUIN, Boonruk CHIPIPOP, Boonchareon SIRINAOVAKUL
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1883-1889
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this research, practical OTA-based inductors of all structures have been studied, and complete passive equivalent circuit models have been proposed for them. The models take into account the effects of both parasitic elements and finite open-loop bandwidth, yet contain only conventional standard linear elements, i.e., ordinary resistors, inductors and capacitors, without any physically infeasible higher-order elements such as super inductors. The resulting models have been found to be highly accurate, straightforward, far superior to previously proposed ones and completely realizable with passive elements. Hence, the proposed passive equivalent circuit models are convenient and versatile tools for the implementation of analog and mixed-signal processing circuits and systems.
-
Shih-Chang LIANG, Wen-Jan CHEN
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1890-1897
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Thinning and line extraction of binary images not only reduces the amount of data storage and automatically creates the adjacency and relativity between lines and points, but also supports applications such as automatic inspection systems, pattern recognition systems and vectorization. Based on the features of construction drawings, new thinning and line extraction algorithms are proposed in this study. The experimental results show that the proposed method has higher reliability and produces better quality than various existing methods.
-
Sang-Churl NAM, Masahide ABE, Masayuki KAWAMATA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1898-1906
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper proposes a fast, efficient detection algorithm for missing data (also referred to as blotches) based on Markov Random Field (MRF) models, with less computational load and a lower false alarm rate than existing MRF-based blotch detection algorithms. The proposed algorithm reduces the computational load by applying fast block-matching motion estimation based on a diamond search pattern and by restricting the blotch detection process to candidate blotch areas only. Blotch confusion is frequently seen in the vicinity of moving objects due to poorly estimated motion vectors. To solve this problem, we incorporate into the formulation a weighting function with respect to the pixels that are accurately detected by our moving edge detector. To solve the blotch detection problem, formulated as a maximum a posteriori (MAP) problem, an iterated conditional modes (ICM) algorithm is used. The experimental results show that our proposed method produces fewer blotch detection errors than conventional blotch detectors, and achieves lower computational cost and more efficient detection performance than existing MRF-based detectors.
-
Seungwu HAN, Masaaki FUJIYOSHI, Hitoshi KIYA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1907-1914
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper proposes an image authentication method that detects tampering and localizes tampered areas efficiently. The efficiency of the proposed method is summarized in the following three points. 1) The method offers coarse-to-fine tamper localization by hierarchical data hiding, so that further tamper detection is suppressed for blocks labeled as genuine in the upper layer. 2) Since the image feature description in the top layer is hidden over the whole image, the proposed method enciphers only the data in the top layer rather than all data in all layers. 3) The proposed method is based on a reversible data hiding scheme that does not use costly compression techniques. These three points make the proposed method superior to conventional methods that use compression techniques and to methods based on multi-tiered data hiding, which require integrity verification in many blocks even when the image is genuine. Simulation results show the effectiveness of the proposed method.
-
Takahiro OGAWA, Miki HASEYAMA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1915-1923
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
A projection onto convex sets (POCS)-based annotation method for semantic image retrieval is presented in this paper. Utilizing database images previously annotated by keywords, the proposed method estimates unknown semantic features of a query image from its known visual features based on a POCS algorithm, which includes two novel approaches. First, the proposed method semantically assigns database images to some clusters and introduces a nonlinear eigenspace of visual and semantic features in each cluster into the constraint of the POCS algorithm. This approach accurately provides semantic features for each cluster by using its visual features in the least squares sense. Furthermore, the proposed method monitors the error converged by the POCS algorithm in order to select the optimal cluster including the query image. By introducing the above two approaches into the POCS algorithm, the unknown semantic features of the query image are successfully estimated from its known visual features. Consequently, similar images can be easily retrieved from the database based on the obtained semantic features. Experimental results verify the effectiveness of the proposed method for semantic image retrieval.
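To make the POCS machinery concrete, here is a generic alternating-projection sketch for two simple convex sets (a hyperplane and a box); the nonlinear eigenspace constraints used in the paper are not reproduced, and the sets and starting point below are illustrative assumptions.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Projection onto {x : <a, x> = b}."""
    return x - (np.dot(a, x) - b) / np.dot(a, a) * a

def project_box(x, lo, hi):
    """Projection onto {x : lo <= x <= hi} (element-wise)."""
    return np.clip(x, lo, hi)

# POCS: alternately project onto the two sets until the update stalls.
a, b = np.array([1.0, 2.0, -1.0]), 1.0
lo, hi = 0.0, 1.0
x = np.array([3.0, -2.0, 5.0])
for _ in range(100):
    x_new = project_box(project_hyperplane(x, a, b), lo, hi)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print(x, np.dot(a, x) - b)   # x lies in the box; the residual shows its distance to the hyperplane
```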
-
Karn PATANUKHOM, Akinori NISHIHARA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1924-1934
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
A blind image restoration method for non-linear motion blurs with non-uniform point spread functions, based on multiple blurred versions of the same scene, is proposed. The restoration is treated as separate identification and deconvolution problems. In the proposed identification process, an identification difficulty measure is introduced to rank the order of blur identification. The blurred image with the lowest identification difficulty is identified first using a single-image-based scheme. Then, the other images are identified based on a cross convolution relation between each pair of blurred images. In addition, an iterative feedback scheme is applied to improve the identification results. For the deconvolution process, a spatially adaptive scheme using regional optimal terminating points is derived by modifying a conventional iterative deconvolution scheme. The images are decomposed into sub-regions based on smoothness. The regional optimal terminating points are assigned independently to suppress noise in smooth regions and sharpen the image in edge regions. The optimal terminating point for each region is decided by considering a discrepancy error. Restoration examples of simulated and real-world blurred images are presented to demonstrate the performance of the proposed method.
-
Qin LIU, Yiqing HUANG, Satoshi GOTO, Takeshi IKENAGA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1935-1943
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Compared with previous standards, H.264/AVC adopts variable block size motion estimation (VBSME) and multiple reference frames (MRF) to improve video quality. The full search motion estimation algorithm (FS), which evaluates every search candidate in the search window for all seven block types over multiple reference frames, consumes massive computation power. Mathematical analysis reveals that the aliasing problem of the subsampling algorithm comes from high-frequency signal components. Moreover, high-frequency signal components are also the main reason why the MRF algorithm is essential, and a picture rich in texture necessarily contains many high-frequency signals. Based on these mathematical investigations, two fast VBSME algorithms are proposed in this paper, namely an edge block detection based subsampling method and a motion vector based MRF early termination algorithm. Experiments show that strong correlation exists among the motion vectors of blocks belonging to the same macroblock. By exploiting this feature, a dynamic adjustment of the search range for integer motion estimation is also proposed. Combining our proposed algorithms with UMHS saves 96-98% of the Integer Motion Estimation (IME) time compared to the exhaustive search algorithm. The induced coding quality loss is less than a 0.8% bitrate increase or a 0.04dB PSNR decline on average.
-
Zhenyu LIU, Satoshi GOTO, Takeshi IKENAGA
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1944-1952
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
The key to high performance in video coding lies in efficiently reducing temporal redundancies. For this purpose, the H.264/AVC coding standard has adopted variable block size motion estimation over multiple reference frames to improve the coding gain. However, the computational complexity of motion estimation increases in proportion to the product of the number of reference frames and the number of inter modes. The mathematical analysis in this paper reveals that the prediction errors mainly depend on the image edge gradient amplitude and the quantization parameter. Consequently, this paper proposes an image content based early termination algorithm, which outperforms the original method adopted by the JVT reference software, especially at high and moderate bit rates. In light of rate-distortion theory, this paper also relates the homogeneity of the image to the quantization parameter. For a homogeneous block, the search computation for futile reference frames and inter modes can be efficiently discarded. Therefore, the computation saving increases with the value of the quantization parameter. These content-based fast algorithms were integrated with the Unsymmetrical-cross Multi-hexagon-grid Search (UMHexagonS) algorithm to demonstrate their performance. Compared to the original UMHexagonS fast matching algorithm, 26.14-54.97% of the search time can be saved with an average coding quality degradation of 0.0369dB.
-
Chuntao WANG, Jiangqun NI, Rongyue ZHANG, Goo-Rak KWON, Sung-Jea KO
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1953-1960
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Robustness and invisibility are two conflicting constraints in robust invisible watermarking. Instead of the conventional strategy based on a human visual system (HVS) model, this paper presents a content-adaptive approach to further optimize the trade-off between them. To reach this target, entropy-based and integrated HVS (IHVS) based measures are constructed so as to adaptively choose suitable components for watermark insertion and detection. Such a scheme potentially gives rise to a synchronization problem between the encoder and decoder under the framework of blind watermarking, which is solved by incorporating a repeat-accumulate (RA) code with erasure and error correction. Moreover, a new hidden Markov model (HMM) based detector in the wavelet domain is introduced to reduce the computational complexity, and it is further developed into a posterior one to avoid the transmission of HMM parameters with only a small sacrifice in detection performance. Experimental results show that the proposed algorithm obtains considerable improvement in robustness with the same distortion as the traditional approach.
-
Min-Jen TSAI, Chang-Hsing SHEN
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1961-1973
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Wavelet tree based watermarking algorithms use the wavelet coefficient energy difference for copyright protection and ownership verification. The WTQ (Wavelet Tree Quantization) algorithm is the representative technique using energy difference for watermarking. According to the cryptanalysis of WTQ, the watermark embedded in the protected image can be removed successfully. In this paper, we present a novel differential energy watermarking algorithm based on a wavelet tree group modulation structure, i.e., WTGM (Wavelet Tree Group Modulation). The wavelet coefficients of the host image are divided into disjoint super trees (each super tree containing two sub-super trees). The watermark is embedded in the relatively high-frequency components using a group strategy such that the energies of the sub-super trees are close. The employment of the wavelet tree structure, sum-of-subsets and positive/negative modulation effectively remedies the insecurity of the WTQ scheme. The integration of the HVS (Human Visual System) into WTGM provides a better visual effect for the watermarked image. The experimental results demonstrate the effectiveness of our algorithm in terms of robustness and imperceptibility.
-
Akihiro HAYASAKA, Takuma SHIBAHARA, Koichi ITO, Takafumi AOKI, Hiroshi ...
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1974-1981
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper proposes a three-dimensional (3D) face recognition system using passive stereo vision. So far, the reported 3D face recognition techniques have used active 3D measurement methods to capture high-quality 3D facial information. However, active methods employ structured illumination (structure projection, phase shift, moire topography, etc.) or laser scanning, which is not desirable in many human recognition applications. Addressing this problem, we propose a face recognition system that uses (i) passive stereo vision to capture 3D facial information and (ii) 3D matching using an ICP (Iterative Closest Point) algorithm with its improvement techniques. Experimental evaluation demonstrates efficient recognition performance of the proposed system compared with an active 3D face recognition system and a passive 3D face recognition system employing the original ICP algorithm.
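For readers unfamiliar with ICP, a bare-bones point-to-point ICP iteration (nearest-neighbour correspondences plus an SVD-based rigid alignment) is sketched below on synthetic 3D points; the improvement techniques mentioned in the paper are not included, and the test cloud and transform are made up.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(src, dst, n_iter=30):
    tree = cKDTree(dst)
    P = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(P)        # nearest-neighbour correspondences
        R, t = best_rigid_transform(P, dst[idx])
        P = P @ R.T + t
    return P

# Synthetic test: recover a known rotation/translation of a random cloud.
rng = np.random.default_rng(1)
dst = rng.standard_normal((200, 3))
angle = 0.2
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
src = (dst - np.array([0.1, -0.2, 0.3])) @ Rz     # a rigidly displaced copy of dst
aligned = icp(src, dst)
print(np.mean(np.linalg.norm(aligned - dst, axis=1)))  # small residual after alignment
```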
-
Miho KOZUMA, Atsushi SASAKI, Yukihiro KAMIYA, Takeo FUJII, Kenta UMEBA ...
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1982-1989
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
M-ary/SS is a version of Direct Sequence/Spread Spectrum (DS/SS) that aims to improve spectral efficiency by employing orthogonal codes. However, due to the auto-correlation property of the orthogonal codes, it is impossible to detect the symbol timing by observing correlator outputs. Therefore, conventionally, a preamble has been inserted into M-ary/SS signals. In this paper, we propose a new blind adaptive array antenna for M-ary/SS systems that combines signals over the space axis without any preambles, which is an innovative approach for M-ary/SS. The performance is investigated through computer simulations.
-
Koichi ICHIGE, Kazuhiko SAITO, Hiroyuki ARAI
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
1990-1999
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper presents a high resolution Direction-Of-Arrival (DOA) estimation method using unwrapped phase information of the MUSIC-based noise subspace. Superresolution DOA estimation methods such as MUSIC, Root-MUSIC and ESPRIT have attracted great attention because of their excellent properties in estimating the DOAs of incident signals. These methods achieve high accuracy in estimating DOAs in good propagation environments, but may fail in severe environments with a low Signal-to-Noise Ratio (SNR), a small number of snapshots, or incident waves arriving from closely spaced angles. In the MUSIC method, the spectrum is calculated from the absolute value of the inner product between the array response and the noise eigenvectors, which means that MUSIC employs only amplitude characteristics and does not use any phase characteristics. Recalling that phase characteristics play an important role in signal and image processing, we expect that DOA estimation accuracy can be further improved by using phase information in addition to the MUSIC spectrum. This paper develops a procedure to obtain an accurate spectrum for DOA estimation using unwrapped and differentiated phase information of the MUSIC-based noise subspace. The performance of the proposed method is evaluated through computer simulation in comparison with some conventional estimation methods.
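For context, a textbook MUSIC pseudo-spectrum for a uniform linear array is sketched below in numpy; the phase-unwrapping refinement proposed in the paper is only hinted at through np.unwrap, and the scenario (array size, source angles, SNR) is made up for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def steering(theta_deg, m, d=0.5):
    """ULA steering vector for element spacing d (in wavelengths)."""
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(0)
m, snapshots, doas = 8, 200, [-10.0, 25.0]

# Simulated snapshots: two uncorrelated sources plus white noise.
A = np.column_stack([steering(a, m) for a in doas])
S = (rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N

R = X @ X.conj().T / snapshots            # sample covariance matrix
_, V = np.linalg.eigh(R)                  # eigenvectors, ascending eigenvalue order
En = V[:, : m - len(doas)]                # noise subspace

grid = np.arange(-90.0, 90.0, 0.1)
p = np.array([1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
              for a in (steering(g, m) for g in grid)])
peaks, _ = find_peaks(p)
print(np.sort(grid[peaks[np.argsort(p[peaks])[-2:]]]))   # two highest peaks ~ the true DOAs

# In the spirit of the paper, the phase of the projection onto a noise eigenvector,
# unwrapped along the angle grid, carries information beyond the amplitude spectrum.
phase = np.unwrap(np.angle([steering(g, m).conj() @ En[:, 0] for g in grid]))
```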
-
Chang-Jun AHN
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
2000-2007
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In MIMO systems, channel identification is important for distinguishing the signals transmitted from multiple transmit antennas. One of the most typical channel identification schemes employs code division multiplexing (CDM), in which a unique spreading code is assigned to distinguish both BS and MS antenna elements. However, as the number of base stations and transmit antenna elements increases, large spreading codes and many pilot symbols are required to distinguish the received power from all connectable BSs, as well as to identify the CSI for every combination of transmitter and receiver antenna elements. Furthermore, the complexity of maximum likelihood detection (MLD) is a considerable obstacle to the implementation of MIMO. To reduce these problems, in this paper we propose a parallel detection algorithm using multiple QR decompositions with permuted channel matrices (MQRD-PCM), together with discrete pilot signal assignment and iterative channel identification, for MIMO/OFDM.
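As background for the detection part, a minimal QR-decomposition-based successive detection for a flat MIMO channel is sketched below (QPSK, perfect CSI); the permutation/ordering and OFDM aspects of the proposed MQRD-PCM scheme are omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# QPSK symbols and received vector y = Hx + n.
bits = rng.integers(0, 2, (2, nt))
x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
n = 0.05 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n

# QR-based successive detection: H = QR, z = Q^H y, then back-substitution
# with symbol slicing from the last (interference-free) layer upwards.
Q, R = np.linalg.qr(H)
z = Q.conj().T @ y
x_hat = np.zeros(nt, dtype=complex)
for k in range(nt - 1, -1, -1):
    r = z[k] - R[k, k + 1:] @ x_hat[k + 1:]
    s = r / R[k, k]
    x_hat[k] = (np.sign(s.real) + 1j * np.sign(s.imag)) / np.sqrt(2)   # slice to QPSK

print(np.allclose(x_hat, x))   # True at this noise level
```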
-
Kok Ann Donny TEO, Shuichi OHNO
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
2008-2015
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
We study the bit-error rate (BER) for different code lengths and numbers of users in a CDMA system with linear minimum mean squared error (MMSE) and non-linear equalization. We first show that, for a fixed channel and a fixed number of users, the BER of each symbol after linear equalization degrades with a decrease in the code length. Then, we prove that, for a fixed code length, the BER averaged over random channels improves with a decrease in the number of users. Furthermore, for the nonlinear serial interference cancellation (SIC) scheme, we prove analytically that the BER improves with each step of symbol cancellation for any channel, not just at high signal-to-interference-plus-noise ratio (SINR) but over the whole range of SINR. Simulation results are presented to substantiate our theoretical findings.
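The two detection ingredients discussed above can be illustrated in a few lines for a synchronous CDMA link with random spreading; the code length, user count and noise level are illustrative assumptions, and the paper's analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, sigma2 = 16, 4, 0.01          # code length, number of users, noise variance

S = rng.choice([-1.0, 1.0], size=(n, k)) / np.sqrt(n)   # random spreading codes
b = rng.choice([-1.0, 1.0], size=k)                      # BPSK symbols
y = S @ b + np.sqrt(sigma2) * rng.standard_normal(n)     # chip-rate received vector

# Linear MMSE multiuser detection: b_hat = sign((S^T S + sigma2 I)^-1 S^T y).
W = np.linalg.solve(S.T @ S + sigma2 * np.eye(k), S.T)
b_mmse = np.sign(W @ y)

# One serial interference cancellation (SIC) step: detect the strongest user,
# subtract its reconstructed contribution, then re-detect the others.
i = np.argmax(np.abs(S.T @ y))
y_res = y - S[:, i] * b_mmse[i]
print(b_mmse, np.sign(S.T @ y_res))
```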
-
Alex CARTAGENA GORDILLO, Ryuji KOHNO
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
2016-2024
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, we propose a method for designing a set of pulses whose spectrum is efficiently contained in amplitude and bandwidth. Because these pulses are derived from, and have shapes either equal or similar to, the Hermite pulses, we call the proposed transmit pulses spectrally efficient Hermite pulses. Given that the proposed set of pulses is not orthonormal, we also propose a set of receive templates that permit orthonormal detection of the incoming signals at the receiver. The importance of our proposal lies in the potential implementation of M-ary pulse shape modulation systems for ultra wideband communications, with sets of pulses that are efficiently contained within a specific bandwidth and limited to a certain amplitude.
-
Masayuki MIYAMA, Yuusuke INOIE, Takafumi KASUGA, Ryouichi INADA, Masas ...
Article type: PAPER
2008 Volume E91.A Issue 8 Pages
2025-2034
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper describes a 158MS/s JPEG 2000 codec with an embedded block coder (EBC) based on a bit-plane- and pass-parallel architecture. The EBC contains bit-plane coders (BPCs) corresponding to each bit-plane in a code-block. The upper and lower bit-plane coding overlap in time with a 1-stripe and 1-column gap. The bit-modeling passes in the bit-plane coding also overlap in time with the same gap. These methods increase throughput by 30 times in comparison with the conventional architecture. In addition, the methods support not only vertically causal mode but also regular mode, which enhances the image quality. Furthermore, speculative decoding is adopted to increase throughput. The codec LSI was designed using a 0.18μm process. The core area is 4.7×4.7mm² and the operating frequency is 160MHz. A system including the codec enables image transmission of a PC desktop with 8ms delay.
-
Martin MINARCIK, Kamil VRBA
Article type: LETTER
2008 Volume E91.A Issue 8 Pages
2035-2037
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this letter, a new structure of multifunctional frequency filter using a universal voltage conveyor (UVC) is presented. The multifunctional circuit can realize low-pass, high-pass and band-pass filters. All types of frequency filter can be realized as inverting or non-inverting. Advantages of the proposed structure are the independent control of the quality factor at the cut-off frequency and the low output impedance of the output terminals. Computer simulations and measurements of the particular frequency filters are presented.
-
Ying-Wen CHANG, Yen-Yu CHEN
Article type: LETTER
2008 Volume E91.A Issue 8 Pages
2038-2040
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Blocking artifacts are a major limitation of DCT-based codecs at low bit rates. This degradation is likely to influence the judgment of the end user. This work presents a powerful post-processing filter in the DCT frequency domain. The proposed algorithm adopts a shift block within four adjacent DCT blocks to reduce computational complexity. The artifacts resulting from the quantization and de-quantization processes are eliminated by slightly modifying several DCT coefficients in the shift block. Simulation results indicate that the proposed method produces the best image quality in terms of both objective and subjective metrics.
-
Yoshifumi CHISAKI, Ryouji KAWANO, Tsuyoshi USAGAWA
Article type: LETTER
2008 Volume E91.A Issue 8 Pages
2041-2044
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
A binaural hearing assistance system based on the frequency domain binaural model has been proposed previously. The system can enhance a signal coming from a specific direction. Since the system utilizes a binaural signal, inter-channel communication between the left and right subsystems is required. Reducing the bit rate of this inter-channel communication is essential for detaching the headset from the processing system. In this paper, the performance of a system that uses a differential pulse code modulation codec is examined, and the relationship between bit rate and sound quality is discussed.
-
Hiroaki WATAHIKI, Teruyuki MIYAJIMA
Article type: LETTER
2008 Volume E91.A Issue 8 Pages
2045-2047
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In block transmission systems, performance degrades due to inter-block interference (IBI) when there are multipaths with delays exceeding the cyclic prefix (CP) length. An interesting technique to overcome this problem is the array antenna proposed by Hori et al., which restores the CP property by minimizing a cost function. However, its performance has not been clarified theoretically. In this letter, the performance of a method that minimizes the cost function under a unit-norm constraint is analyzed. It is shown that the method can suppress IBI and that its interference suppression capability depends on a certain parameter. The analytical result is verified through computer simulation.
-
Der-Feng TSENG, Chia-Ming LEE
Article type: LETTER
2008 Volume E91.A Issue 8 Pages
2048-2052
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Space-time trellis coding systems employing the orthogonal frequency division multiplexing technique over frequency-selective channels are considered, where the fading gains vary within a frame interval. The channel time-evolution of each sub-carrier is modeled by an autoregressive process, and a receiver utilizing a recursive technique that combines Kalman filtering with per-survivor processing is studied.
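A scalar illustration of the receiver-side idea, tracking an AR(1)-modeled fading gain of a single sub-carrier with a Kalman filter driven by known pilot symbols, is sketched below; the AR coefficient and noise levels are illustrative, and per-survivor processing is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T, a, q, r = 200, 0.98, 1 - 0.98**2, 0.05   # frame length, AR coefficient, process/measurement noise

# Simulate a complex AR(1) fading gain h[t] and observations y = h*s + v.
h = np.zeros(T, dtype=complex)
for t in range(1, T):
    h[t] = a * h[t - 1] + np.sqrt(q / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
s = rng.choice([1.0 + 0j, -1.0 + 0j], size=T)            # known (pilot) BPSK symbols
y = h * s + np.sqrt(r / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

# Scalar Kalman filter: state h[t] = a*h[t-1] + w, observation y[t] = s[t]*h[t] + v.
h_hat, P = 0.0 + 0j, 1.0
est = np.zeros(T, dtype=complex)
for t in range(T):
    h_pred, P_pred = a * h_hat, a**2 * P + q             # predict
    K = P_pred * np.conj(s[t]) / (abs(s[t])**2 * P_pred + r)
    h_hat = h_pred + K * (y[t] - s[t] * h_pred)           # update
    P = (1 - K * s[t]) * P_pred
    est[t] = h_hat

print(np.mean(np.abs(est - h)**2))   # tracking MSE, well below the fading power of ~1
```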
-
Umut YUNUS, Masaru TSUNASAKI, Yiwei HE, Masanobu KOMINAMI, Katsumi YAM ...
Article type: PAPER
Subject area: Engineering Acoustics
2008 Volume E91.A Issue 8 Pages
2053-2061
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Gas or water leaks may occur in pipes that are buried underground or situated in the walls of buildings, due to aging or unpredictable accidents such as earthquakes. Therefore, the detection of leaks in pipes is an important task and has been investigated extensively. In the present paper, we propose a novel leak detection method based on acoustic waves. We inject an acoustic chirp signal into a target pipeline and then estimate the leak location from the delay time of the compressed pulse obtained by passing the reflected signal through a correlator. In order to distinguish a leak reflection in a complicated pipeline arrangement, the reflection characteristics of leaks are carefully examined by numerical simulations and experiments. There is a remarkable difference between the reflection characteristics of a leak and those of other types of discontinuity, and this property can be utilized to distinguish the leak reflection. The experimental results show that, even in a complicated pipe arrangement including bends and branches, the proposed approach can successfully perform leak detection. Furthermore, the proposed approach has low cost and is easy to implement because only a personal computer and some common equipment are required.
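The ranging step (chirp excitation, pulse compression by correlation, and delay-to-distance conversion) can be illustrated with scipy as follows; the sampling rate, sound speed and reflector distance are assumed values, not those of the experiments.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs, c = 44100.0, 340.0                  # sample rate [Hz], sound speed in the pipe [m/s] (assumed)
t = np.arange(0, 0.1, 1 / fs)
tx = chirp(t, f0=500.0, f1=4000.0, t1=t[-1], method='linear')   # injected chirp

# Simulated reflection from a discontinuity 8.5 m away (round-trip delay 2L/c) plus noise.
delay = int(round(2 * 8.5 / c * fs))
rx = np.zeros(len(t) + delay)
rx[delay:delay + len(tx)] += 0.3 * tx
rx += 0.05 * np.random.default_rng(0).standard_normal(len(rx))

# Pulse compression: correlate the received signal with the transmitted chirp.
corr = correlate(rx, tx, mode='full')
lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
print("estimated distance [m]:", lag / fs * c / 2)
```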
-
Hua XIAO, Huai-Zong SHAO, Qi-Cong PENG
Article type: PAPER
Subject area: Speech and Hearing
2008 Volume E91.A Issue 8 Pages
2062-2067
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements, so it can be used with arrays of arbitrary planar geometry. Second, a subspace model error estimation algorithm and a Weighted 2-Dimensional Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain, phase perturbations, and positions of the elements, with high accuracy, and its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is applied to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
-
Nozomi ISHIHARA, Kôki ABE
Article type: PAPER
Subject area: Digital Signal Processing
2008 Volume E91.A Issue 8 Pages
2068-2075
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
A novel two-dimensional discrete wavelet transform (2-DDWT) parallel architecture for higher throughput and lower energy consumption is proposed. The proposed architecture fully exploits full-page burst accesses of DRAM and minimizes the number of DRAM activate and precharge operations. Simulation results revealed that the architecture reduces the number of clock cycles for DRAM memory accesses as well as the DRAM power consumption with moderate cost of internal memory. Evaluation of the VLSI implementation of the architecture showed that the throughput of wavelet filtering was increased by parallelizing row filtering with a minimum area cost, thereby enabling DRAM full-page burst accesses to be exploited.
-
Yuki ISHIKAWA, Daisuke KIMURA, Yasuhide ISHIGE, Toshimichi SAITO
Article type: PAPER
Subject area: Nonlinear Problems
2008 Volume E91.A Issue 8 Pages
2076-2083
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper studies two kinds of simple switched dynamical systems with piecewise constant characteristics. The first is based on a single buck converter whose periodic/chaotic dynamics are analyzed precisely using a piecewise linear phase map. The second is based on a paralleled system of buck converters for lower voltages with higher current capabilities. Referring to the results for the single system, it is clarified that stable multi-phase synchronization is always possible by the proper use of switching strategies and adjustment of the clock period. Typical operations are confirmed experimentally with a simple test circuit.
-
Liangpeng GUO, Yici CAI, Qiang ZHOU, Xianlong HONG
Article type: PAPER
Subject area: VLSI Design Technology and CAD
2008 Volume E91.A Issue 8 Pages
2084-2090
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Multiple supply voltage (MSV) is an effective scheme for achieving low power. Recent works on MSV operate at the physical level and aim at reducing physical overheads, but none of them consider level converters, which are one of the most important issues in dual-Vdd design. In this work, a logic- and layout-aware methodology and related algorithms combining voltage assignment and placement are proposed to minimize the number of level converters and to implement voltage islands with minimal physical overheads. Experimental results show that our approach uses far fewer level converters (reduced by 83.23% on average) and improves the power savings by 16% on average compared to the previous approach [1]. Furthermore, the methodology is able to produce feasible placements with a small impact on traditional placement goals.
-
Youngsun HAN, Seok Joong HWANG, Seon Wook KIM
Article type: PAPER
Subject area: VLSI Design Technology and CAD
2008 Volume E91.A Issue 8 Pages
2091-2100
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, we present a reconfigurable processor infrastructure, called Jaguar, that accelerates Java applications. The Jaguar infrastructure consists of a compiler framework and runtime environment support. The compiler framework selects a group of Java methods to be translated into hardware so as to deliver the best performance under limited resources, and translates the selected Java methods into synthesizable Verilog code modules. The runtime environment support includes the Java virtual machine (JVM) running on a host processor, which provides the Java execution environment to the generated Java accelerator through communication interface units while preserving Java semantics. Our compiler infrastructure is a tightly integrated and solid compiler-aided solution for Java reconfigurable computing: there is no limitation in generating synthesizable Verilog modules from any Java application while preserving Java semantics. In terms of performance, our infrastructure achieves a speedup of 5.4 times on average, and up to 9.4 times, in the measured benchmarks with respect to JVM-only execution. Furthermore, two optimization schemes, instruction folding and live buffer removal, reduce the resource consumption by 24% on average and by up to 39%.
-
Takuya KITAMOTO, Tetsu YAMAGUCHI
Article type: PAPER
Subject area: Numerical Analysis and Optimization
2008 Volume E91.A Issue 8 Pages
2101-2110
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Let M(y) be a matrix whose entries are polynomials in y, and let λ(y) and υ(y) be an eigenvalue and the corresponding eigenvector of M(y). Then λ(y) and υ(y) are algebraic functions of y and have the power series expansions
λ(y) = β0 + β1y + … + βky^k + …  (βj ∈ C),  (1)
υ(y) = γ0 + γ1y + … + γky^k + …  (γj ∈ C^n),  (2)
provided that y=0 is not a singular point of λ(y) or υ(y). Several algorithms have already been proposed to compute the above power series expansions, using Newton's method (the algorithm in [4]) or the Hensel construction (the algorithms in [5], [12]). The algorithms proposed so far compute the high degree coefficients βk and γk from the lower degree coefficients βj and γj (j=0,1,…,k-1). Thus, with floating point arithmetic, the numerical errors in the coefficients can accumulate as the index k increases. This can cause serious deterioration of the numerical accuracy of the high degree coefficients βk and γk, and we need to check their accuracy. In this paper, we assume that the given matrix M(y) does not have multiple eigenvalues at y=0 (which implies that y=0 is not a singular point of λ(y) or υ(y)), and present an algorithm to estimate the accuracy of the computed power series coefficients βj and γj in (1) and (2). The estimation process employs the idea in [9], which computes a coefficient of a power series with Cauchy's integral formula and numerical integration. We present an efficient implementation of the algorithm that utilizes Newton's method. We also present a modification of Newton's method to speed up the procedure, introducing a tuning parameter p. Numerical experiments in the paper indicate that the performance of the algorithm can be enhanced by 12-16% by choosing the optimal tuning parameter p.
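The accuracy-check idea, recovering a power-series coefficient of an eigenvalue branch λ(y) via Cauchy's integral formula and numerical integration on a small circle, can be sketched as follows; the toy matrix, the radius and the number of sample points are assumptions, and the Newton-based acceleration of the paper is not included.

```python
import numpy as np

def M(y):
    """Toy 2x2 matrix with polynomial entries in y (an assumed example, not from the paper)."""
    return np.array([[2.0 + y, y**2],
                     [y**2, -1.0 + 3.0 * y]])

lam0 = 2.0   # the eigenvalue branch followed: lambda(0) = 2 for this toy matrix

def lam(y):
    """The eigenvalue of M(y) continuing the branch through lam0 (valid for small |y|)."""
    ev = np.linalg.eigvals(M(y))
    return ev[np.argmin(np.abs(ev - lam0))]

def series_coefficient(k, radius=0.05, m=256):
    """beta_k in lambda(y) = sum_j beta_j y^j via Cauchy's integral formula,
    approximated by the trapezoidal rule on the circle |y| = radius."""
    theta = 2.0 * np.pi * np.arange(m) / m
    ys = radius * np.exp(1j * theta)
    vals = np.array([lam(y) for y in ys])
    return np.mean(vals * ys**(-k))

# First few coefficients of the expansion around y = 0 (about 2, 1, 0, 0 for this matrix,
# up to tiny imaginary parts from the numerical integration).
print([series_coefficient(k) for k in range(4)])
```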
-
Hidenori OHTA, Toshinori YAMADA, Chikaaki KODAMA, Kunihiro FUJIYOSHI
Article type: PAPER
Subject area: Algorithms and Data Structures
2008 Volume E91.A Issue 8 Pages
2111-2119
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
A 3D-dissection (a rectangular solid dissection) is a dissection of a rectangular solid into smaller rectangular solids by planes. In this paper, we propose the O-sequence, a string representation of any 3D-dissection obtained using only non-crossing rectangular planes. We also present a necessary and sufficient condition for a given string to be an O-sequence.
-
Taizo SHIRAI, Kiyomichi ARAKI
Article type: PAPER
Subject area: Cryptography and Information Security
2008 Volume E91.A Issue 8 Pages
2120-2129
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
To design secure blockciphers, estimating the immunity against differential attack and linear attack is essential. Recently, the Diffusion Switching Mechanism (DSM) was proposed as a design framework to enhance the immunity of the Feistel structure against differential and linear attacks. In this paper, we give novel results on the effect of DSM on three generalized Feistel structures, i.e., Type-I, Type-II and Nyberg's structures. We first show a method for roughly estimating lower bounds on the number of active S-boxes in Type-I and Type-II structures using DSM. Then we propose an improved search algorithm to find lower bounds for the generalized structures efficiently. Experimental results obtained by the improved algorithm show that DSM raises the lower bounds for all of the structures, and also show that Nyberg's structure has the slowest diffusion effect among them when SP-type F-functions are used.
-
Young-Ho SEO, Hyun-Jun CHOI, Chang-Yeul LEE, Dong-Wook KIM
Article type: PAPER
Subject area: Cryptography and Information Security
2008 Volume E91.A Issue 8 Pages
2130-2137
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper proposes a digital watermarking scheme to protect the ownership of video content compressed with the H.264/AVC main profile. The scheme is performed during the CABAC (Context-based Adaptive Binary Arithmetic Coding) process, the entropy coding of the main profile. It uses the contexts extracted during the context modeling process of CABAC to position the watermark bits, simply by checking the context values and determining the coefficients. The watermarking process itself is as simple as replacing the LSB (Least Significant Bit) of the corresponding coefficient with the watermark bit. Experimental results obtained by applying this scheme and attacking it in various ways, such as blurring, sharpening, cropping, Gaussian noise addition, and geometrical modification, show that the embedded watermark has very high imperceptibility and robustness to the attacks. Thus, we expect it to serve as a good watermarking scheme, especially in applications where the watermarking must be performed during the compression process with a minimal amount of additional processing.
-
Todorka ALEXANDROVA, Hiroyoshi MORITA
Article type: PAPER
Subject area: Cryptography and Information Security
2008 Volume E91.A Issue 8 Pages
2138-2150
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
Constructing ideal (t, n) threshold secret sharing schemes leads to some limitations on the maximum number of users that are able to join the secret sharing scheme. We aim to remove these limitations by reducing the information rate of the constructed threshold secret sharing schemes. In this paper we propose recursive construction algorithms for (t, n) threshold secret sharing schemes, based on the generalized vector space construction. Using these algorithms we are able to construct a (t, n) threshold secret sharing scheme for any arbitrary n.
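For readers unfamiliar with (t, n) threshold schemes, a minimal Shamir-style construction over a prime field is sketched below; this is the classical ideal scheme used only to fix ideas, not the generalized vector space construction proposed in the paper, and the prime and secret are arbitrary.

```python
import random

P = 2**127 - 1   # a Mersenne prime used as the field size (illustrative choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[1:4]))   # both recover 123456789
```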
-
Shigeaki KUZUOKA
Article type: PAPER
Subject area: Information Theory
2008 Volume E91.A Issue 8 Pages
2151-2158
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This paper clarifies the adequacy of the linear channel coding approach for source coding with partial side information at the decoder. A sufficient condition for an ensemble of linear codes to achieve Wyner's bound is given. Our result reveals that, combined with a good lossy code, an LDPC code ensemble gives a good code for source coding with partial side information at the decoder.
-
Hai-yang LIU, Xiao-yan LIN, Lian-rong MA, Jie CHEN
Article type: PAPER
Subject area: Coding Theory
2008 Volume E91.A Issue 8 Pages
2159-2166
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
The stopping distance and stopping redundancy of a linear code are important concepts in the analysis of the performance and complexity of the code under iterative decoding on a binary erasure channel. In this paper, we study the stopping distance and stopping redundancy of Finite Geometry LDPC (FG-LDPC) codes and derive an upper bound on the stopping redundancy of FG-LDPC codes. The bound shows that the stopping redundancy of these codes is less than the code length. Therefore, FG-LDPC codes give a good trade-off between performance and complexity and hence are a very good choice for practical applications.
-
Morteza HIVADI, Morteza ESMAEILI
Article type: PAPER
Subject area: Coding Theory
2008 Volume E91.A Issue 8 Pages
2167-2173
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
The stopping distance and stopping redundancy of product binary linear block codes are studied. The relationship between stopping sets in a few parity-check matrices of a given product code C and those in the parity-check matrices of the component codes is determined. It is shown that the stopping distance of a particular parity-check matrix of C, denoted Hp, is equal to the product of the stopping distances of the associated constituent parity-check matrices. Upper bounds on the stopping redundancy of C are derived. For each minimum distance d = 2^r, r ≥ 1, a sequence of [n, k, d] optimal stopping redundancy binary codes is given such that k/n tends to 1 as n tends to infinity.
-
Hao LI, Changging XU, Pingzhi FAN
Article type: PAPER
Subject area: Communication Theory and Signals
2008 Volume E91.A Issue 8 Pages
2174-2182
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
The sum power iterative water-filling (SPIWF) algorithm provides the sum-rate-optimal transmission scheme for wireless multiple-input multiple-output (MIMO) broadcast channels (BC), but it suffers from high complexity. In this paper, we propose a new transmission scheme based on a novel block zero-forcing dirty paper coding (Block ZF-DPC) strategy and a multiuser-diversity-achieving user selection procedure. Block ZF-DPC can be considered an extension of the existing ZF-DPC to MIMO BCs. Two user selection algorithms whose complexity increases linearly with the number of users are proposed: one aims at maximizing the achievable sum rate directly, and the other is based on Gram-Schmidt Orthogonalization (GSO) and the Frobenius norm. The proposed scheme is shown to achieve a sum rate close to the sum capacity of the MIMO BC and to obtain the optimal multiplexing and multiuser diversity gains. In addition, we show that both selection algorithms achieve a significant part of the sum rate of the optimal greedy selection algorithm at low computational cost.
-
Khoirul ANWAR, Masato SAITO, Takao HARA, Minoru OKADA
Article type: PAPER
Subject area: Communication Theory and Signals
2008 Volume E91.A Issue 8 Pages
2183-2194
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, a new large spreading code set with a uniform low cross-correlation is proposed. The proposed code set is capable of (1) increasing the number of assigned users (capacity) in a multicarrier code division multiple access (MC-CDMA) system and (2) reducing the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. We derive the new code set and present an example to demonstrate the performance improvements of OFDM and MC-CDMA systems. Our proposed code set with code length N has K = 2N+1 codes, supporting up to (2N+1) users, and exhibits lower cross-correlation properties compared to the existing spreading code sets. Our results with N = 16 subcarriers confirm that the proposed code set outperforms the current pseudo-orthogonal carrier interferometry (POCI) code set with a gain of 5dB at a bit-error-rate (BER) level of 10^-4 in the additive white Gaussian noise (AWGN) channel and a gain of more than 3.6dB in a multipath fading channel.
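The PAPR figure of merit mentioned above can be computed as follows for a single OFDM symbol; the subcarrier count and modulation are illustrative, and the proposed spreading code set is not used.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                                    # number of subcarriers (illustrative)
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)  # QPSK subcarriers
x = np.fft.ifft(X) * np.sqrt(N)                           # time-domain OFDM symbol (no oversampling)
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print("PAPR [dB]:", papr_db)
```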
-
Jong-Ki HAN, Jae-Gon KIM
Article type: PAPER
Subject area: Communication Theory and Signals
2008 Volume E91.A Issue 8 Pages
2195-2204
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, a communication system using vector quantization (VQ) and channel coding is considered, and a design scheme is proposed to optimize the source codebooks in the transmitter and the receiver. In the proposed algorithm, the overall distortion, including both the quantization error and the channel distortion, is minimized. The proposed algorithm differs from previous work in that a channel encoder is used in the VQ-based communication system and the source VQ codebook used in the transmitter is different from the one used by the receiver, i.e., an asymmetric VQ system. The bounded-distance decoding (BDD) technique is used to combat ambiguity in the channel decoder. Computer simulations show that the optimized system based on the proposed algorithm outperforms a conventional system based on a symmetric VQ codebook. The proposed algorithm also enables reliable image communication over noisy channels.
-
Kyung Seung AHN
Article type: PAPER
Subject area: Communication Theory and Signals
2008 Volume E91.A Issue 8 Pages
2205-2212
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions for the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of M-ary quadrature amplitude modulation (QAM) are obtained for unequal-power interference-to-noise ratios (INR). We also provide an upper bound on the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (an MRC system in the absence of CCI) and an MRC system in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
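A Monte-Carlo check of the kind of quantity analyzed here, the post-combining SINR of MRC with one desired user and several unequal-power co-channel interferers in flat Rayleigh fading, can be sketched as below; the antenna count, powers and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, trials = 4, 20000                        # receive antennas, Monte-Carlo runs
P_s, P_int, noise = 1.0, [0.2, 0.1], 0.05   # desired power, interferer powers (unequal INRs), noise

sinr = np.empty(trials)
for m in range(trials):
    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)   # desired channel
    g = (rng.standard_normal((len(P_int), L)) + 1j * rng.standard_normal((len(P_int), L))) / np.sqrt(2)
    w = h                                                   # MRC weights (matched to the desired user)
    num = P_s * np.abs(w.conj() @ h) ** 2
    den = sum(p * np.abs(w.conj() @ gk) ** 2 for p, gk in zip(P_int, g)) + noise * np.linalg.norm(w) ** 2
    sinr[m] = num / den

print("average output SINR [dB]:", 10 * np.log10(np.mean(sinr)))
```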
-
Kuo-Cheng LIU, Chun-Hsien CHOU
Article type: PAPER
Subject area: Image
2008 Volume E91.A Issue 8 Pages
2213-2222
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
The main idea in perceptual image compression is to remove the perceptual redundancy so that images are represented at the lowest possible bit rate without introducing perceivable distortion. A certain amount of perceptual redundancy is inherent in a color image, since human eyes are not perfect sensors for discriminating small differences in color signals, and effectively exploiting this redundancy helps to improve the coding efficiency of color image compression. In this paper, a locally adaptive perceptual compression scheme for color images is proposed. The scheme is based on the design of an adaptive quantizer for compressing color images with nearly lossless visual quality at a low bit rate. An effective way to achieve nearly lossless visual quality is to shape the quantization error so that it remains part of the perceptual redundancy while compressing the color image; this is done by controlling the adaptive quantization stage with the perceptual redundancy of the color image. In this paper, the perceptual redundancy, in the form of a noise detection threshold associated with each coefficient in each subband of the three color components, is derived from perceptually indistinguishable regions of color stimuli in the uniform color space and various masking effects of human visual perception. The quantizer step size for each target coefficient in each color component is adaptively adjusted by the associated noise detection threshold so that the resulting quantization error is not perceivable. Simulation results show that the compression performance of the proposed scheme using adaptive coefficient-wise quantization is better than that using band-wise quantization. The nearly lossless visual quality of the reconstructed image can be achieved by the proposed scheme at lower entropy.
-
Shangce GAO, Hongwei DAI, Jianchen ZHANG, Zheng TANG
Article type: PAPER
Subject area: Neural Networks and Bioengineering
2008 Volume E91.A Issue 8 Pages
2223-2231
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
According to the clonal selection principle proposed by Burnet, there is no crossover of genetic material between members of the repertoire during the immune response; correspondingly, there is no knowledge communication among the different elite pools in previous clonal selection models, and as a result the search performance of these models is limited. To solve this problem, inspired by the idiotypic network theory, an expanded lateral interactive clonal selection algorithm (LICS) is put forward. In LICS, an antibody is matured not only through somatic hypermutation and receptor editing from the B cell, but also through stimuli from other antibodies. The stimuli are realized by memorizing common gene segments on the idiotypes, based on which a lateral interactive receptor editing operator is also introduced. LICS is then applied to several benchmark instances of the traveling salesman problem. Simulation results show the efficiency and robustness of LICS compared to other traditional algorithms.
-
Hongyang CHEN, Kaoru SEZAKI, Ping DENG, Hing Cheung SO
Article type: LETTER
Subject area: Digital Signal Processing
2008 Volume E91.A Issue 8 Pages
2232-2236
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In this paper, we propose a new localization algorithm that improves the DV-Hop algorithm by using a differential error correction scheme designed to reduce the location error accumulated over multiple hops. The scheme needs no additional hardware support and can be implemented in a distributed way. The proposed method improves location accuracy without increasing communication traffic or computational complexity. Simulation results show that the performance of the proposed algorithm is superior to that of the DV-Hop algorithm.
-
Jung-Min YANG, Seong-Jin PARK
Article type: LETTER
Subject area: Systems and Control
2008 Volume E91.A Issue 8 Pages
2237-2239
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In networked control systems, uncontrollable events may unexpectedly occur in a plant before a proper control action is applied to the plant due to communication delays. In the area of supervisory control of discrete event systems, Park and Cho [5] proposed the notion of delay-nonconflictingness for the existence of a supervisor achieving a given language specification under communication delays. In this paper, we present the algebraic properties of delay-nonconflicting languages which are necessary for solving supervisor synthesis problems under communication delays. Specifically, we show that the class of prefix-closed and delay-nonconflicting languages is closed under intersection, which leads to the existence of a unique infimal prefix-closed and delay-nonconflicting superlanguage of a given language specification.
-
Tomohiro INAGAKI, Toshimichi SAITO
Article type: LETTER
Subject area: Nonlinear Problems
2008 Volume E91.A Issue 8 Pages
2240-2243
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
This letter studies the response of a chaotic spiking oscillator to chaotic spike-train inputs. The circuit can exhibit a variety of synchronous/asynchronous phenomena, and we show an interesting phenomenon, “consistency”: the circuit can exhibit a random response that is identical in steady state for various initial values. The consistency is confirmed experimentally with a simple test circuit.
-
Isao NAKANISHI, Hiroyuki SAKAMOTO, Yoshio ITOH, Yutaka FUKUI
Article type: LETTER
Subject area: Cryptography and Information Security
2008 Volume E91.A Issue 8 Pages
2244-2247
Published: August 01, 2008
Released on J-STAGE: March 01, 2010
JOURNAL
RESTRICTED ACCESS
In on-line signature verification, the complexity of the signature shape can influence the value of the optimal threshold for individual signatures. Writer-dependent threshold selection has been proposed, but it requires forgery data, which are not easy to collect in practical applications. Therefore, a threshold equalization method using only genuine data is needed. In this letter, we propose three different threshold equalization methods based on the complexity of the signature. Their effectiveness is confirmed in experiments using a multi-matcher DWT on-line signature verification system.