IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Volume E92.A , Issue 3
Showing 1-32 articles out of 32 articles from the selected issue
Special Section on Latest Advances in Fundamental Theories of Signal Processing
  • Hitoshi KIYA
    2009 Volume E92.A Issue 3 Pages 687
    Published: March 01, 2009
    Released: March 01, 2009
    JOURNALS RESTRICTED ACCESS
    Download PDF (61K)
  • Hidemitsu OGAWA
    Type: INVITED PAPER
    2009 Volume E92.A Issue 3 Pages 688-695
    Published: March 01, 2009
    Released: March 01, 2009
    This paper shows that there is a fruitful world behind sampling theorems. For this purpose, the sampling problem is reformulated from a functional-analytic standpoint, and it is consequently revealed that the sampling problem is a kind of inverse problem. The sampling problem covers, for example, signal and image restoration including super-resolution, image reconstruction from projections such as in the CT scanners used in hospitals, and supervised learning such as learning in artificial neural networks. An optimal reconstruction operator is also given, providing the best approximation to an individual original signal without knowledge of that signal.
    Download PDF (386K)
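The inverse-problem view of sampling described in the abstract can be made concrete with a small linear-algebra sketch. The model subspace, sampling positions, and dimensions below are illustrative assumptions, not taken from the paper: reconstruction is simply a least-squares inverse of the sampling operator restricted to the signal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model subspace: signals spanned by K low-frequency cosines (columns of B).
N, K = 64, 8
n = np.arange(N)
B = np.stack([np.cos(np.pi * k * (n + 0.5) / N) for k in range(K)], axis=1)

# Sampling operator: keep M point samples. As a map on R^N it is underdetermined,
# but it is injective on the K-dimensional model subspace when M >= K.
M = 16
idx = np.sort(rng.choice(N, size=M, replace=False))
A = np.eye(N)[idx]

# Original signal in the model subspace, and its samples.
x = B @ rng.standard_normal(K)
y = A @ x

# Inverse-problem view: recover the coefficients by least squares, then the signal.
c_hat, *_ = np.linalg.lstsq(A @ B, y, rcond=None)
x_hat = B @ c_hat

print(np.max(np.abs(x - x_hat)))
```

When the restricted operator `A @ B` has full column rank the reconstruction is exact; with noisy samples, the same least-squares operator gives the best approximation in the Euclidean sense.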
  • Takahiro SAITO, Takashi KOMATSU
    Type: INVITED PAPER
    2009 Volume E92.A Issue 3 Pages 696-707
    Published: March 01, 2009
    Released: March 01, 2009
    Decomposing an input image into intuitively convincing image components, such as a structure component and a texture component, is a very important and intriguing problem in digital image processing, and it is an inherently nonlinear one. Recently, several numerical schemes for solving the nonlinear image-decomposition problem have been proposed. Using nonlinear image decomposition as a pre-process for several image-processing tasks may pave the way to solving difficult problems posed by the classic approach to digital image processing. Since this new approach treats each separated component with a processing method suited to it, it can attain targets that seem contrary to each other, for instance invisibility of ringing artifacts together with sharpness of edges and textures, which have not been attained simultaneously by the classic approach. This paper reviews recently developed state-of-the-art schemes for nonlinear image decomposition and introduces some examples of the decomposition-and-processing approach.
    Download PDF (2264K)
  • Andrzej CICHOCKI, Anh-Huy PHAN
    Type: INVITED PAPER
    2009 Volume E92.A Issue 3 Pages 708-721
    Published: March 01, 2009
    Released: March 01, 2009
    Nonnegative matrix factorization (NMF) and its extensions such as nonnegative tensor factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining, and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as for sparse nonnegative coding and representation, that have many potential applications in computational neuroscience, multi-sensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we perform sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the alpha and beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation not only in the over-determined case but also in the under-determined (over-complete) case (i.e., for a system which has fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th order nonnegative tensor factorization. Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially with the use of the multi-layer hierarchical NMF approach [3].
    Download PDF (2012K)
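The HALS idea named in the abstract can be sketched for the plain Euclidean NMF cost: each column of W and each row of H is updated in turn by a closed-form nonnegative least-squares step. This is a minimal textbook-style sketch under that assumption; the paper's algorithms add the divergence-based costs, regularization, multi-layer and tensor extensions, which are not shown, and all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 40, 4
X = rng.random((m, r)) @ rng.random((r, n))   # exactly rank-r nonnegative data

W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-9                                    # keeps factors strictly positive

err0 = float(np.linalg.norm(X - W @ H))
for _ in range(100):
    # HALS sweep over the columns of W: closed-form coordinate minimizer
    # of ||X - WH||_F^2 in W[:, j], projected onto the nonnegative orthant.
    G, P = H @ H.T, X @ H.T
    for j in range(r):
        W[:, j] = np.maximum(eps, W[:, j] + (P[:, j] - W @ G[:, j]) / G[j, j])
    # ... and the symmetric sweep over the rows of H.
    E, Q = W.T @ W, W.T @ X
    for j in range(r):
        H[j, :] = np.maximum(eps, H[j, :] + (Q[j, :] - E[j, :] @ H) / E[j, j])
err1 = float(np.linalg.norm(X - W @ H))
print(err0, err1)
```

Each inner update is cheap (one rank-one correction per column), which is what makes the "local" HALS sweeps fast in practice.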
  • Saed SAMADI, Kaveh MOLLAIYAN, Akinori NISHIHARA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 722-732
    Published: March 01, 2009
    Released: March 01, 2009
    Two discrete-time Wirtinger-type inequalities relating the power of a finite-length signal to that of its circularly-convolved version are developed. The usual boundary conditions that accompany existing Wirtinger-type inequalities are relaxed in the proposed inequalities, and the equalizing sinusoidal signal is free to have an arbitrary phase angle. A measure of this sinusoidal signal's power, when corrupted by additive noise, is proposed. The application of the proposed measure, calculated as a ratio, to evaluating the power of a sinusoid of arbitrary phase with angular frequency π/N, where N is the signal length, is thoroughly studied and analyzed under additive noise of arbitrary statistical characteristics. The ratio can be used to gauge the power of sinusoids of frequency π/N with a small amount of computation by referring to a ratio-versus-SNR curve and using it to estimate the noise-corrupted sinusoid's SNR. The case of additive white noise is also analyzed. A sample permutation scheme followed by sign modulation is proposed for enlarging the class of target sinusoids to those with frequencies Mπ/N, where M and N are mutually prime positive integers. Tandem application of the proposed scheme and ratio offers a simple method to gauge the power of sinusoids buried in noise. The generalization of the inequalities to convolution kernels of higher orders, as well as the simplification of the proposed inequalities, is also studied.
    Download PDF (909K)
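As a loose illustration of the kind of relation such inequalities capture (this is not the paper's inequality, which uses relaxed boundary conditions and the non-harmonic frequency π/N; here the frequency is the harmonic 2π/N), the ratio of the power of a circular first difference to the power of the signal is fixed by frequency alone for a pure sinusoid, independent of amplitude and phase:

```python
import numpy as np

N = 64
n = np.arange(N)
omega = 2 * np.pi / N           # one full cycle over the block (k = 1)
x = np.sin(omega * n + 0.7)     # arbitrary phase, arbitrary amplitude would also do

d = np.roll(x, -1) - x          # circular first difference
ratio = np.sum(d**2) / np.sum(x**2)

# For a pure sinusoid at frequency omega this ratio equals 2(1 - cos(omega)).
print(ratio, 2 * (1 - np.cos(omega)))
```

A noise-power measure of the kind the paper proposes works because additive noise perturbs this ratio away from its deterministic sinusoidal value in a predictable way.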
  • Takahiro MURAKAMI, Toshihisa TANAKA, Yoshihisa ISHIDA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 733-744
    Published: March 01, 2009
    Released: March 01, 2009
    An algorithm for blind signal separation (BSS) of convolutive mixtures is presented. In this algorithm, the BSS problem is treated as multidimensional independent component analysis (ICA) by introducing an extended signal vector composed of current and previous samples of the signals. It is empirically known that a number of conventional ICA algorithms solve the multidimensional ICA problem up to permutation and scaling of the signals. In this paper, we give theoretical justification for using any conventional ICA algorithm. Then, we discuss the remaining problems, i.e., permutation and scaling of the signals. To solve the permutation problem, we propose a simple algorithm which classifies the signals obtained by a conventional ICA algorithm into mutually independent subsets by utilizing the temporal structure of the signals. For the scaling problem, we prove that the method proposed by Koldovský and Tichavský is theoretically proper with respect to estimating the filtered versions of the source signals observed at the sensors.
    Download PDF (1235K)
  • Hisayori NODA, Akinori NISHIHARA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 745-752
    Published: March 01, 2009
    Released: March 01, 2009
    A fast and accurate method for Generalized Harmonic Analysis is proposed. The proposed method estimates the parameters of one sinusoid at a time and subtracts it from the target signal. The frequency of the sinusoid is estimated around a peak of the Fourier spectrum using binary search. The binary search can control the trade-off between frequency accuracy and computation time. The amplitude and the phase are estimated so as to minimize the squared sum of the residue after extraction of the estimated sinusoids from the target signal. Sinusoid parameters are recalculated to reduce errors introduced by the peak detection, using a windowed discrete-time Fourier transform. Audio signals are analyzed by the proposed method, confirming its accuracy in comparison with existing methods. The proposed algorithm has a high degree of concurrency and is suitable for implementation on a Graphics Processing Unit (GPU). The computational throughput can be made higher than the input audio signal rate.
    Download PDF (520K)
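A rough sketch of the estimate-and-subtract idea: locate a coarse FFT peak, refine the frequency by a bracketing search inside the peak bin, then fit amplitude and phase by least squares. The paper refines on a windowed DTFT; here the search objective is the least-squares fit energy instead, and the signal parameters are made-up test values.

```python
import numpy as np

fs, N = 8000.0, 1024
t = np.arange(N) / fs
f_true, a_true, ph_true = 1234.56, 0.8, 1.1
x = a_true * np.sin(2 * np.pi * f_true * t + ph_true)

# 1) coarse peak from the FFT magnitude, giving a one-bin-wide bracket
X = np.fft.rfft(x)
k = int(np.argmax(np.abs(X)))
lo, hi = (k - 1) * fs / N, (k + 1) * fs / N

def fit(f):
    # least-squares sinusoid fit at frequency f; returns (fit energy, coeffs)
    C = np.stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)], axis=1)
    c, *_ = np.linalg.lstsq(C, x, rcond=None)
    return float(np.sum((C @ c) ** 2)), c

# 2) bracketing (ternary-style) search maximizing fit energy inside the bracket
for _ in range(40):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if fit(m1)[0] < fit(m2)[0]:
        lo = m1
    else:
        hi = m2
f_hat = (lo + hi) / 2

# 3) amplitude and phase by least squares at the refined frequency
_, (cs, cc) = fit(f_hat)
a_hat = float(np.hypot(cs, cc))
print(round(f_hat, 2), round(a_hat, 3))
```

Each bracketing step shrinks the interval by a fixed factor, which is what gives the controllable accuracy/time trade-off the abstract mentions; subtracting `a_hat`-scaled sinusoids and repeating yields the full analysis loop.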
  • Hisako MASUIKE, Akira IKUTA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 753-761
    Published: March 01, 2009
    Released: March 01, 2009
    The phenomena observed in actual sound and electromagnetic environments are inevitably contaminated by background noise of arbitrary distribution type. Therefore, in order to evaluate sound and electromagnetic environments, it is necessary to establish signal processing methods that remove the undesirable effects of the background noise. In this paper, we propose noise cancellation methods for estimating a specific signal in the presence of background noise with a non-Gaussian distribution, from the two viewpoints of static and dynamic signal processing. By applying the well-known least mean squares method to moment statistics of several orders, practical methods for estimating the specific signal are derived. The effectiveness of the proposed methods is experimentally confirmed by applying them to estimation problems in actual sound and magnetic field environments.
    Download PDF (368K)
  • Seisuke KYOCHI, Shizuka HIGAKI, Yuichi TANAKA, Masaaki IKEHARA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 762-771
    Published: March 01, 2009
    Released: March 01, 2009
    In this paper, a novel design method for the critically sampled contourlet transform (CSCT) is proposed. The original contourlet transform (CT), which consists of a Laplacian pyramid and a directional filter bank, provides an efficient frequency-plane partition for image representation. However, since its overcompleteness is not suitable for some applications such as image coding, critically sampled versions have been studied recently. Although several types of CSCT have been proposed, they have problems in their realization, or an unnatural frequency-plane partition that differs from the original CT. In contrast to conventional design methods based on a “top-down” approach, the proposed method is based on a “bottom-up” one. That is, the proposed CSCT decomposes the frequency plane into small directional subbands and then synthesizes them up to a target frequency-plane partition, whereas the conventional methods decompose into the target partition directly. In this way, the proposed CSCT realizes an efficient frequency division that is the same as that of the original CT for image representation. In this paper, its effectiveness is verified by non-linear approximation simulations.
    Download PDF (1658K)
  • Hiroaki TEZUKA, Takao NISHITANI
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 772-778
    Published: March 01, 2009
    Released: March 01, 2009
    This paper describes a multiresolutional Gaussian mixture model (GMM) for precise and stable foreground segmentation. A GMM with multiple block sizes and a computationally efficient fine-to-coarse strategy, both carried out in the Walsh transform (WT) domain, are newly introduced into the GMM scheme. By using a set of variable-size block-based GMMs, precise and stable processing is realized. Our fine-to-coarse strategy derives from the WT spectral nature and drastically reduces the number of computational steps. In addition, the total computation amount of the proposed approach is less than 10% of that of the original pixel-based GMM approach. Experimental results show that our approach gives stable performance in many conditions, including dark foreground objects against light, global lighting changes, and scenery in heavy snow.
    Download PDF (716K)
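For reference, the pixel-based background-model baseline that the paper improves on can be sketched as follows, simplified to a single adaptive Gaussian per pixel (the paper uses mixtures over variable-size blocks in the Walsh transform domain, which is not reproduced here); the scene size, learning rate, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

H, W = 16, 16
alpha = 0.05                      # learning rate
mu = np.full((H, W), 100.0)       # per-pixel background mean
var = np.full((H, W), 25.0)       # per-pixel background variance

def segment(frame, k=2.5):
    # foreground = pixels more than k standard deviations from the model
    fg = np.abs(frame - mu) > k * np.sqrt(var)
    # adapt the model only where the pixel matched the background
    bg = ~fg
    mu[bg] += alpha * (frame[bg] - mu[bg])
    var[bg] += alpha * ((frame[bg] - mu[bg]) ** 2 - var[bg])
    return fg

# static background around level 100, then a bright object enters a 4x4 region
for _ in range(50):
    segment(100 + rng.normal(0, 3, (H, W)))
frame = 100 + rng.normal(0, 3, (H, W))
frame[4:8, 4:8] = 200.0
fg = segment(frame)
print(fg[4:8, 4:8].all(), fg.mean())
```

A true GMM keeps several (mean, variance, weight) triples per pixel so that multimodal backgrounds (swaying trees, flicker) are matched against any mode; the block-based WT-domain version in the paper replaces the per-pixel statistics with per-block transform statistics.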
  • Shigeki TAKAHASHI, Takahiro OGAWA, Hirokazu TANAKA, Miki HASEYAMA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 779-787
    Published: March 01, 2009
    Released: March 01, 2009
    A novel error concealment method using a Kalman filter is presented in this paper. In order to successfully utilize the Kalman filter, state transition and observation models suitable for video error concealment are newly defined as follows. The state transition model represents the video decoding process by motion-compensated prediction. Furthermore, a new observation model representing an image blurring process is defined, which makes calculation of the Kalman gain possible. The proposed method solves the problem of the traditional methods by using the Kalman filter, and accurate reconstruction of corrupted video frames is achieved. Consequently, an effective error concealment method using the Kalman filter is realized. Experimental results showed that the proposed method performs better than the traditional methods.
    Download PDF (781K)
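The generic Kalman recursion underlying the method can be sketched with a toy model. In the paper, the state transition is motion-compensated prediction and the observation is a blurring process; the constant-velocity scalar model below is only a stand-in (an assumption for illustration) that shows the predict/update structure and the gain computation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear Kalman filter: x_k = F x_{k-1} + w,  z_k = Hm x_k + v.
# (In the paper, F plays the role of motion-compensated prediction and
#  Hm of the blurring observation model.)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # toy constant-velocity dynamics
Hm = np.array([[1.0, 0.0]])              # observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # observation noise covariance

x = np.array([0.0, 0.1])                 # true initial state
xe, P = np.zeros(2), np.eye(2)           # filter estimate and covariance

errs_raw, errs_kf = [], []
for _ in range(200):
    x = F @ x + rng.multivariate_normal([0, 0], Q)
    z = Hm @ x + rng.normal(0.0, 1.0, 1)
    # predict
    xe = F @ xe
    P = F @ P @ F.T + Q
    # update with the Kalman gain
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    xe = xe + (K @ (z - Hm @ xe)).ravel()
    P = (np.eye(2) - K @ Hm) @ P
    errs_raw.append(float((z[0] - x[0]) ** 2))
    errs_kf.append(float((xe[0] - x[0]) ** 2))

print(np.mean(errs_raw), np.mean(errs_kf))
```

The filtered estimate is far closer to the true state than the raw observations, which is the property the concealment method exploits when the "observation" is a degraded video frame.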
  • Atsuyuki ADACHI, Shogo MURAMATSU, Hisakazu KIKUCHI
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 788-797
    Published: March 01, 2009
    Released: March 01, 2009
    In this paper, a design method for two-dimensional (2-D) orthogonal symmetric wavelets is proposed, using a lattice structure for multi-dimensional (M-D) linear-phase paraunitary filter banks (LPPUFB), which the authors proposed in a previous work and which was later modified by Lu Gan et al. The derivation process for the constraints on the second-order vanishing moments is shown, and some design examples obtained through optimization under these constraints are given. In order to verify the significance of the constraints, experimental results are shown for the Lena and Barbara images.
    Download PDF (1002K)
  • Yoshihide TONOMURA, Daisuke SHIRAI, Takayuki NAKACHI, Tatsuya FUJII, H ...
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 798-807
    Published: March 01, 2009
    Released: March 01, 2009
    In this paper, we introduce layered low-density generator matrix (layered-LDGM) codes for super high definition (SHD) scalable video systems. The layered-LDGM codes maintain the correspondence relationship of each layer from the encoder side to the decoder side. The resulting structure supports partial decoding. Furthermore, the proposed layered-LDGM codes create highly efficient forward error correction (FEC) data by considering the relationship between the scalable components, and therefore raise the probability of restoring the important components. Simulations show that the proposed layered-LDGM codes offer better error resiliency than the existing method, which creates FEC data for each scalable component independently. The proposed layered-LDGM codes support partial decoding and raise the probability of restoring the base component. These characteristics are well suited to scalable video coding systems.
    Download PDF (991K)
  • Lasith YASAKETHU, Steven ADEDOYIN, Anil FERNANDO, Ahmet M. KONDOZ
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 808-815
    Published: March 01, 2009
    Released: March 01, 2009
    In this paper, we propose a rate control technique for H.264/AVC that uses the subjective quality of video for offline video coding. We propose to use the Video Quality Metric (VQM) with an evolution strategy algorithm, which is capable of identifying the best possible quantization parameters for each frame/macroblock so that encoding maximizes the subjective quality of the entire video sequence subject to the target bit rate. Simulation results suggest that the proposed technique can significantly improve the RD performance of the H.264/AVC codec. With the proposed technique, up to 40% bit-rate reduction can be achieved at the same video quality. Furthermore, the results show that the proposed technique can significantly improve the subjective quality of the encoded video, especially for video sequences with high motion.
    Download PDF (436K)
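The search loop described in the abstract can be sketched with a (1+1) evolution strategy over per-frame quantization parameters. The rate and distortion models below are toy stand-ins for the codec and for VQM (both are assumptions, not the paper's models), with a soft penalty enforcing the bit-rate budget.

```python
import numpy as np

rng = np.random.default_rng(6)

F = 12                                   # number of frames
w = rng.uniform(0.5, 2.0, F)             # per-frame "complexity" weights (toy)

def rate(qp):                            # toy rate model: higher QP -> fewer bits
    return float(np.sum(w * 2.0 ** (-(qp - 20) / 6.0)))

def distortion(qp):                      # toy quality model standing in for VQM
    return float(np.sum(w * qp ** 2))

R_target = 5.0

def cost(qp):
    # penalized objective: distortion plus a soft rate-budget penalty
    return distortion(qp) + 1e4 * max(0.0, rate(qp) - R_target) ** 2

qp = np.full(F, 35.0)                    # start from a uniform QP assignment
best, sigma = cost(qp), 2.0
for _ in range(2000):                    # (1+1)-ES with simple step-size control
    cand = np.clip(qp + sigma * rng.standard_normal(F), 0.0, 51.0)
    c = cost(cand)
    if c < best:                         # accept only improvements
        qp, best, sigma = cand, c, sigma * 1.1
    else:
        sigma *= 0.98

print(round(rate(qp), 2), round(best, 1))
```

Because only improving candidates are accepted, the final per-frame QP vector is never worse than the uniform starting assignment under the same budget, which mirrors how the ES in the paper searches the frame/macroblock QP space.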
  • Chang-Jun AHN
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 816-823
    Published: March 01, 2009
    Released: March 01, 2009
    In MIMO systems, the deployment of multiple antennas can enhance system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques, in which only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing because of the numerous subchannels. Moreover, MIMO/OFDM systems require an antenna separation of 5 ∼ 10 wavelengths to keep the correlation coefficient below 0.7 and thereby achieve a diversity gain, so the base station requires a large space for the multiple antennas. To reduce these problems, in this paper we propose link-correlation-based transmit sector antenna selection for Alamouti-coded OFDM without FBI.
    Download PDF (743K)
  • Takahiro MURAKAMI, Toshihisa TANAKA, Yoshihisa ISHIDA
    Type: PAPER
    2009 Volume E92.A Issue 3 Pages 824-831
    Published: March 01, 2009
    Released: March 01, 2009
    A method for measuring the similarity between two variables is presented. Our approach considers the case where the available observations are arbitrarily filtered versions of the variables. In order to measure the similarity between the original variables from the observations, we propose an error-minimizing filter (EMF). The EMF is designed so that an error between the outputs of the EMF is minimized. In this paper, the EMF is constructed as a finite impulse response (FIR) filter, and the error between the outputs is evaluated by the mean square error (MSE). We show that minimization of the MSE results in an eigenvalue problem and that the optimal solution is given in closed form. We also show that the minimal MSE attained by the EMF is effective for measuring the similarity, in the sense of a correlation coefficient between the original variables.
    Download PDF (293K)
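One plausible reading of the EMF construction (an assumption on our part; the paper's exact formulation may differ): with d the difference of the two observations, the output error energy is a quadratic form hᵀRh in the filter taps, so minimizing it under a unit-norm constraint is an eigenvalue problem solved by the eigenvector of R with the smallest eigenvalue. The mixing filters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two observations: differently filtered versions of one common source s.
N, L = 4000, 8
s = rng.standard_normal(N)
x = np.convolve(s, [1.0, 0.5], mode="same")
y = np.convolve(s, [0.3, 1.0, -0.2], mode="same")

d = x - y
# Error energy of an L-tap filter h applied to d is h^T R h, with R the
# sample autocorrelation matrix of d; the unit-norm minimizer is the
# eigenvector belonging to the smallest eigenvalue.
D = np.stack([np.roll(d, k)[L:] for k in range(L)], axis=1)
R = D.T @ D / D.shape[0]
w, V = np.linalg.eigh(R)
h = V[:, 0]                      # error-minimizing filter (up to sign/scale)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Filtering both observations with h makes them far more similar than raw.
xf = np.convolve(x, h, mode="same")
yf = np.convolve(y, h, mode="same")
print(corr(x, y), corr(xf, yf))
```

The rise in correlation after filtering is exactly the effect the paper quantifies: the minimal MSE serves as a proxy for the correlation coefficient between the unobserved originals.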
  • Kwanghyun LEE, Suyoung PARK, Sanghoon LEE
    Type: LETTER
    2009 Volume E92.A Issue 3 Pages 832-835
    Published: March 01, 2009
    Released: March 01, 2009
    In the acquisition of visual information, nonuniform sampling by the photoreceptors on the retina occurs at the earliest stage of visual processing: the human eye obtains high visual resolution for objects of interest through the nonuniform distribution of photoreceptors. Therefore, this paper proposes auto-exposure and auto-focus algorithms for real-time video camera systems based on this visual characteristic of the human eye. For given moving objects, a visual weight is modeled to quantify visual importance, and the associated auto-exposure and auto-focus parameters are derived by applying the weight to traditional numerical expressions, i.e., the DoM (Difference of Median) and Tenengrad methods for auto focus.
    Download PDF (197K)
  • Hideaki TAMORI, Tsuyoshi YAMAMOTO
    Type: LETTER
    2009 Volume E92.A Issue 3 Pages 836-838
    Published: March 01, 2009
    Released: March 01, 2009
    We propose an asymmetric fragile watermarking technique that uses a number theoretic transform (NTT). Signature data is extracted from a watermarked image by determining correlation functions that are computed using the NTT. The effectiveness of the proposed method is evaluated by simulated detection of alterations.
    Download PDF (830K)
  • Jae-Hoon JANG, Sung-Hak LEE, Kyu-Ik SOHNG
    Type: LETTER
    2009 Volume E92.A Issue 3 Pages 839-842
    Published: March 01, 2009
    Released: March 01, 2009
    The processing of high-dynamic-range (HDR) images has been realized by many algorithms. This paper focuses on one of them, iCAM06, which is capable of making color appearance predictions for HDR images based on CIECAM02 color predictions and incorporates spatial process models of the human visual system (HVS) for contrast enhancement. The effect of the user-controllable factors of iCAM06 was investigated, and the factors that best correspond with Breneman's corresponding-color data sets were found. The suggested model improves color matching predictions for the corresponding-color data set in Breneman's experiment.
    Download PDF (464K)
  • Jin-Keun SEOK, Sung-Hak LEE, Kyu-Ik SOHNG
    Type: LETTER
    2009 Volume E92.A Issue 3 Pages 843-846
    Published: March 01, 2009
    Released: March 01, 2009
    When we watch a television or computer monitor under a certain viewing condition, we adapt partially to the display and partially to the ambient light. As the illumination level and chromaticity change, the eye's subjective white point moves between the display's white point and the ambient light's white point. In this paper, we propose a model that can predict the white point under a mixed adaptation condition involving both display and illuminant. Finally, we verify this model by experimental results.
    Download PDF (258K)
Regular Section
  • Murat B. BADEM, Rajitha WEERAKKODY, Anil FERNANDO, Ahmet M. KONDOZ
    Type: PAPER
    Subject area: Digital Signal Processing
    2009 Volume E92.A Issue 3 Pages 847-852
    Published: March 01, 2009
    Released: March 01, 2009
    Distributed Video Coding (DVC) is an emerging video coding paradigm characterized by a flexible architecture for designing very low cost video encoders. This feature could be utilized very effectively in a number of potential many-to-one video coding applications. However, the compression efficiency of the latest DVC implementations still falls behind the state of the art in conventional video coding, namely H.264/AVC. In this paper, a novel non-linear quantization algorithm is proposed for DVC in order to improve the rate-distortion (RD) performance. The proposed solution exploits the dominant contribution to picture quality made by the relatively small coefficients, given the high concentration of coefficients near zero that is evident when the residual input video signal for the Wyner-Ziv frames is considered in the transform domain. The performance of the proposed solution incorporating the non-linear quantizer is compared with that of an existing transform-domain DVC solution that uses a linear quantizer. The simulation results show a consistently improved RD performance at all bitrates for test video sequences with varying motion levels.
    Download PDF (343K)
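The motivation for a non-linear quantizer, many coefficients concentrated near zero, can be illustrated with a companding quantizer. The Laplacian coefficient model and the μ-law-style compressor below are assumptions for illustration, not the paper's quantizer design.

```python
import numpy as np

rng = np.random.default_rng(7)

# Residual transform coefficients modeled as Laplacian: heavily
# concentrated near zero (an illustrative assumption).
x = np.clip(rng.laplace(0.0, 1.0, 100_000), -8.0, 8.0)
xmax, levels = 8.0, 16
step = 2 * xmax / levels

def quant_uniform(v):
    # midrise uniform quantizer with `levels` cells over [-xmax, xmax]
    idx = np.clip(np.floor(v / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

def compress(v, mu=50.0):
    return np.sign(v) * np.log1p(mu * np.abs(v) / xmax) / np.log1p(mu) * xmax

def expand(u, mu=50.0):
    return np.sign(u) * np.expm1(np.abs(u) / xmax * np.log1p(mu)) / mu * xmax

def quant_nonlinear(v):
    # companding: compress, quantize uniformly, expand back, so the
    # effective cells are fine near zero and coarse in the rare tails
    return expand(quant_uniform(compress(v)))

mse_u = float(np.mean((x - quant_uniform(x)) ** 2))
mse_n = float(np.mean((x - quant_nonlinear(x)) ** 2))
print(mse_u, mse_n)
```

With the same number of levels, the non-linear quantizer spends its resolution where the coefficient mass lies, which is the mechanism behind the RD gain the paper reports.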
  • Jun TSUZURUGI, Shigeru EIHO
    Type: PAPER
    Subject area: Digital Signal Processing
    2009 Volume E92.A Issue 3 Pages 853-861
    Published: March 01, 2009
    Released: March 01, 2009
    Image restoration based on Bayesian estimation has, in most previous studies, assumed that the noise accumulated in an image is independent for each pixel. However, when optical effects are taken into account, it is reasonable to expect spatial correlation in the superimposed noise. In this paper, we discuss the restoration of images distorted by noise that is spatially correlated with translational symmetry, in the framework of probabilistic processing. First, we assume that the original image is produced by a Gaussian model based on only a nearest-neighbor effect and that the noise superimposed at each pixel is produced by a Gaussian model whose spatial correlation is characterized by translational symmetry. With this model, we can use the Fourier transform to calculate system characteristics such as the restoration error, and the restoration error is minimized when the hyperparameters of the probabilistic model used in the restoration process coincide with those used in the formation process. We also discuss the characteristics of the restoration of a natural image distorted by spatially correlated noise. In addition, we estimate the hyperparameters by maximizing the marginal likelihood and restore an image distorted by spatially correlated noise to evaluate this method of image restoration.
    Download PDF (675K)
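Because both the prior and the noise correlation are translation invariant, the restoration diagonalizes in the Fourier basis, which the following 1-D sketch illustrates with a per-frequency Wiener-style gain. The signal, the noise-shaping kernel, and the oracle spectra are illustrative stand-ins for the hyperparameters estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 512

# Smooth "image row" (a stand-in for a sample from the smoothness prior).
t = np.arange(N)
x = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 7 * t / N)

# Spatially correlated, translation-invariant noise: white noise shaped by a
# short smoothing kernel, so its power spectrum is non-flat.
kernel = np.array([0.5, 1.0, 0.5])
kernel = kernel / np.linalg.norm(kernel)
n = np.convolve(rng.standard_normal(N), kernel, mode="same") * 0.4
y = x + n

# Per-frequency Wiener gain S_x / (S_x + S_n): translational symmetry makes
# the whole restoration diagonal in the Fourier basis.
Sx = np.abs(np.fft.fft(x)) ** 2 / N            # oracle signal spectrum (toy)
Sn = 0.4 ** 2 * np.abs(np.fft.fft(kernel, N)) ** 2
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * Sx / (Sx + Sn)))

print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))
```

In the paper the spectra are not oracles but follow from the Gaussian prior and noise hyperparameters, which is why hyperparameter estimation by marginal likelihood matters there.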
  • Hajoon LEE, Cheol Hoon PARK
    Type: PAPER
    Subject area: Systems and Control
    2009 Volume E92.A Issue 3 Pages 862-870
    Published: March 01, 2009
    Released: March 01, 2009
    We deal with LTI nonminimum-phase (NMP) systems, which are difficult to control with conventional methods because of their inherent undershoot. In such systems, reducing the undesirable undershoot phenomenon makes the response time much longer. Moreover, it is impossible to control the magnitude of the undershoot directly or to predict the response time. In this paper, we propose a novel sliding mode control scheme with two sliding lines that can stably determine the magnitude of the undershoot, and thus the response time, of NMP systems a priori. To do this, we introduce two sliding lines which take charge of the control in turn. One is used to stabilize the system and eventually achieve asymptotic regulation, as in conventional sliding mode methods; the other stably controls the magnitude of the undershoot from the beginning of the control until the state meets the first sliding line. This control scheme is proved to have an asymptotic regulation property. Computer simulations show that the proposed control scheme is very effective and well suited for controlling NMP systems compared with conventional schemes.
    Download PDF (360K)
  • Yoshihiko SUSUKI, Yu TAKATSUJI, Takashi HIKIHARA
    Type: PAPER
    Subject area: Nonlinear Problems
    2009 Volume E92.A Issue 3 Pages 871-879
    Published: March 01, 2009
    Released: March 01, 2009
    Analysis of cascading outages in power systems is important for understanding why large blackouts emerge and how to prevent them. Cascading outages are complex dynamics of power systems, and one cause of them is the interaction between the swing dynamics of synchronous machines and the protection operation of relays and circuit breakers. This paper uses hybrid dynamical systems as a mathematical model for cascading outages caused by this interaction. Hybrid dynamical systems can combine families of flows describing swing dynamics with switching rules based on protection operation. Referring to data on a cascading outage in the September 2003 blackout in Italy, this paper presents a hybrid dynamical system whose reproduced propagation of outages is consistent with the data. This result suggests that hybrid dynamical systems can provide an effective model for the analysis of cascading outages in power systems.
    Download PDF (772K)
  • Peng-Yang HUNG, Ying-Shu LOU, Yih-Lang LI
    Type: PAPER
    Subject area: VLSI Design Technology and CAD
    2009 Volume E92.A Issue 3 Pages 880-889
    Published: March 01, 2009
    Released: March 01, 2009
    This work presents a full-chip RLC crosstalk budgeting routing flow to generate a high-quality routing design under stringent crosstalk constraints. Based on the cost function addressing the sensitive nets in visited global cells for each net, global routing can lower routing congestion as well as coupling effect. Crosstalk-driven track routing minimizes capacitive coupling effects and decreases inductive coupling effects by avoiding placing sensitive nets on adjacent tracks. To achieve inductive crosstalk budgeting optimization, the shield insertion problem can be solved with a minimum column covering algorithm which is undertaken following track routing to process nets with an excess of inductive crosstalk. The proposed routing flow method can identify the required number of shields more accurately, and process more complex routing problems than the linear programming (LP) methods. Results of this study demonstrate that the proposed approach can effectively and quickly lower inductive crosstalk by up to one-third.
    Download PDF (1055K)
  • Woo Joo KIM, Sung Hee LEE, Sun Young HWANG
    Type: PAPER
    Subject area: VLSI Design Technology and CAD
    2009 Volume E92.A Issue 3 Pages 890-899
    Published: March 01, 2009
    Released: March 01, 2009
    This paper presents a hierarchical NoC architecture that supports GT (Guaranteed Throughput) signals for processing multimedia data in embedded systems. The architecture provides a communication environment that meets the diverse communication constraints among IPs in power and area. With a system based on packet switching, which requires storage/control circuits to support GT signals, it is hard to satisfy design constraints on area, scalability and power consumption. This paper proposes a hierarchical 4 × 4 × 4 mesh-type NoC architecture based on circuit switching, which is capable of processing GT signals requiring high throughput. The proposed NoC architecture reduces area by 50.2% and power consumption by 57.4% compared with a conventional NoC architecture based on circuit switching; the reductions amount to 72.4% and 86.1%, respectively, when compared with an NoC architecture based on packet switching. The proposed NoC architecture operates at a maximum throughput of 19.2 Gb/s.
    Download PDF (756K)
  • Shingo TAKAHASHI, Shuji TSUKIYAMA
    Type: PAPER
    Subject area: VLSI Design Technology and CAD
    2009 Volume E92.A Issue 3 Pages 900-911
    Published: March 01, 2009
    Released: March 01, 2009
    In order to improve the performance of existing statistical timing analysis, slew distributions must be taken into account, and a mechanism to propagate them together with delay distributions along signal paths is necessary. This paper introduces Gaussian mixture models to represent the slew and delay distributions and proposes a novel algorithm for statistical timing analysis. The algorithm propagates a pair of delay and slew distributions through a given circuit graph and dynamically changes the delay distributions of circuit elements according to the propagated slews. The proposed model and algorithm are evaluated by comparison with Monte Carlo simulation. The experimental results show that the accuracy improvement in the µ + 3σ value of the maximum delay is up to 4.5 points over current statistical timing analysis using Gaussian distributions.
    Download PDF (656K)
  • Hung-Min SUN, Cheng-Ta YANG, Mu-En WU
    Type: PAPER
    Subject area: Cryptography and Information Security
    2009 Volume E92.A Issue 3 Pages 912-918
    Published: March 01, 2009
    Released: March 01, 2009
    In some applications, a short private exponent d is chosen to speed up the decryption or signing process in the RSA public-key cryptosystem. However, in typical RSA, if the private exponent d is selected first, the public exponent e will be of the same order of magnitude as φ(N). Sun et al. devised three RSA variants using unbalanced prime factors p and q to lower the computational cost. Unfortunately, Durfee and Nguyen broke the illustrated instances of the first and third variants by solving for small roots of trivariate modular polynomial equations. They also indicated that instances with unbalanced primes p and q are less secure than instances with balanced p and q. This investigation focuses on designing a new RSA variant with balanced p and q and short exponents d and e, to improve the security of RSA variants against the Durfee-Nguyen attack and other existing attacks. Furthermore, the proposed variant (Scheme A) is extended to another RSA variant (Scheme B) in which p and q are balanced and a trade-off between the lengths of d and e is enabled. In addition, we provide security and feasibility analyses of the proposed schemes.
    Download PDF (216K)
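    The abstract's starting observation, that picking a short d first in standard RSA forces e to be roughly the size of φ(N), can be checked numerically. The primes and the exponent d below are toy values chosen for illustration (real RSA uses primes of about 1024 bits); the sketch is standard RSA key derivation, not the paper's Scheme A or B.

    ```python
    import math

    # Toy balanced primes (illustration only -- far too small for real use).
    p, q = 1000003, 1000033
    n = p * q
    phi = (p - 1) * (q - 1)

    # Choose a short private exponent d first, then derive the matching
    # public exponent e = d^{-1} mod phi(N).
    d = 2**20 + 7                      # short d, coprime to phi
    assert math.gcd(d, phi) == 1
    e = pow(d, -1, phi)                # modular inverse (Python 3.8+)

    # e is essentially a random residue mod phi(N), so its bit length is
    # typically close to that of phi(N) itself.
    print(d.bit_length(), e.bit_length(), phi.bit_length())
    ```

    This is the cost the abstract refers to: a fast private operation (small d) comes at the price of a full-size public exponent, and vice versa, unless the scheme is specially constructed to shorten both.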
  • Kenichi YABUTA, Hitoshi KITAZAWA, Toshihisa TANAKA
    Type: PAPER
    Subject area: Image
    2009 Volume E92.A Issue 3 Pages 919-927
    Published: March 01, 2009
    Released: March 01, 2009
    JOURNALS RESTRICTED ACCESS
    Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies them. The underlying concept of the proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image, in which the deteriorated objects are unrecognizable or invisible. On the other hand, for crime investigation, the system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows the objects to be decoded and displayed to be selected. We provide an implementation example, experimental results, and performance evaluations to support the proposed framework.
    Download PDF (2112K)
  • Lin-Chuan TSAI, Kuo-Chih CHU
    Type: LETTER
    Subject area: Digital Signal Processing
    2009 Volume E92.A Issue 3 Pages 928-931
    Published: March 01, 2009
    Released: March 01, 2009
    JOURNALS RESTRICTED ACCESS
    Simple and accurate formulations are employed to represent first-order differentiator and integrator processes as discrete-time infinite impulse response (IIR) filters. These formulations make them suitable for wide-band applications. Both the first-order differentiator and the integrator have an almost linear phase. The new differentiator has an error of less than 1% over the normalized-frequency range 0-0.8π, and the new integrator has an error of less than 1.1% over the same range.
    Download PDF (567K)
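    The paper's own formulations are not given in the abstract. As a point of reference for the error figures it quotes, the sketch below evaluates the wide-band magnitude error of the classic first-order IIR differentiator attributed to Al-Alaoui, H(z) = (8/7)·(1 − z⁻¹)/(1 + z⁻¹/7) with T = 1, against the ideal response |H(ω)| = ω over the same 0-0.8π range. This is an assumed baseline for comparison, not the authors' filter.

    ```python
    import math

    def alalaoui_mag(w):
        """Magnitude response of the classic Al-Alaoui first-order IIR
        differentiator H(z) = (8/7) * (1 - z^-1) / (1 + z^-1 / 7), T = 1."""
        num = (8.0 / 7.0) * abs(2.0 * math.sin(w / 2.0))
        den = math.sqrt(1.0 + (2.0 / 7.0) * math.cos(w) + 1.0 / 49.0)
        return num / den

    # Relative magnitude error versus the ideal differentiator |H(w)| = w,
    # sampled over (0, 0.8*pi] (w = 0 is excluded to avoid dividing by zero;
    # the error vanishes there anyway).
    ws = [0.8 * math.pi * k / 400.0 for k in range(1, 401)]
    max_err = max(abs(alalaoui_mag(w) - w) / w for w in ws)
    print(f"max relative magnitude error on (0, 0.8*pi]: {max_err:.4f}")
    ```

    The baseline's worst-case error over this band is a few percent, which gives some sense of what the sub-1% figure claimed in the abstract improves upon.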
  • Ik Rae JEONG, Jeong Ok KWON, Dong Hoon LEE
    Type: LETTER
    Subject area: Cryptography and Information Security
    2009 Volume E92.A Issue 3 Pages 932-934
    Published: March 01, 2009
    Released: March 01, 2009
    JOURNALS RESTRICTED ACCESS
    In 2006, Tanaka proposed an efficient variant of Maurer and Yacobi's identity-based non-interactive key sharing scheme. In Tanaka's scheme, the computational complexity of generating each user's secret information is much smaller than that of Maurer and Yacobi's scheme. Tanaka's original key sharing scheme does not provide completeness, so Tanaka corrected the original scheme to provide it. In this paper, we show that Tanaka's corrected key sharing scheme is not secure against collusion attacks; that is, two users can collaborate to factorize the system modulus with their secret information and thus break the key sharing scheme.
    Download PDF (65K)
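    The generic principle behind collusion attacks of this kind is that combined secret information can yield a multiple of λ(N), and any multiple of λ(N) suffices to factor the modulus. The sketch below implements that last, well-known step (the same randomized routine used to recover p and q from an RSA key pair); how the colluders obtain the multiple in Tanaka's specific scheme is the paper's contribution and is not reproduced here.

    ```python
    import math
    import random

    def factor_from_lambda_multiple(n, M):
        """Factor n = p*q given any multiple M of lambda(n), by hunting for a
        nontrivial square root of 1 modulo n."""
        # Write M = 2^s * t with t odd.
        t, s = M, 0
        while t % 2 == 0:
            t //= 2
            s += 1
        while True:
            a = random.randrange(2, n - 1)
            g = math.gcd(a, n)
            if g > 1:
                return g, n // g          # lucky: a already shares a factor
            x = pow(a, t, n)
            for _ in range(s):
                y = pow(x, 2, n)
                if y == 1 and x not in (1, n - 1):
                    p = math.gcd(x - 1, n)  # nontrivial root of 1 -> factor
                    return p, n // p
                x = y
            # This base revealed nothing; at least half of all bases do,
            # so a few retries suffice in expectation.

    # Toy example: n = 3233 = 61 * 53, lambda(n) = lcm(60, 52) = 780.
    p, q = factor_from_lambda_multiple(3233, 2 * 780)
    print(sorted((p, q)))  # -> [53, 61]
    ```

    Once the modulus is factored, every user's secret in a Maurer-Yacobi-type scheme can be recomputed, which is why leaking a λ(N)-multiple breaks the whole system rather than just the colluders' keys.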
  • Cheolwoo YOU, Byounggi KIM, Sangjin RYOO, Intae HWANG
    Type: LETTER
    Subject area: Mobile Information Network and Personal Communications
    2009 Volume E92.A Issue 3 Pages 935-939
    Published: March 01, 2009
    Released: March 01, 2009
    JOURNALS RESTRICTED ACCESS
    In this paper, in order to increase system capacity and reduce the transmit power of the user equipment, we propose a modified power control scheme consisting of a modified closed-loop power control (CLPC) and open-loop power control (OLPC). The modified CLPC algorithm, which combines delay compensation algorithms and pilot diversity, is mainly applied to the ancillary terrestrial component (ATC) link in urban areas because it is better suited to the short round-trip delay (RTD). In rural areas, where ATCs are not deployed or no signal is received from them, transmit-power monitoring equipment and OLPC algorithms using efficient pilot diversity are combined and applied to the link between the user equipment and the satellite. The two power control algorithms are applied equally in boundary areas, where both kinds of signals are received, in order to ensure coverage continuity. Simulation results show that the modified power control scheme performs well compared with conventional power control schemes in a geostationary earth orbit (GEO) satellite system utilizing ATCs.
    Download PDF (824K)