IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Volume E94.A, Issue 2
Displaying 1-50 of 62 articles from this issue
Special Section on Image Media Quality
  • Mitsuho YAMADA
    2011 Volume E94.A Issue 2 Pages 471-472
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Download PDF (90K)
  • Ichiro KURIKI, Shingo NAKAMURA, Pei SUN, Kenichi UENO, Kazumichi MATSU ...
    Article type: INVITED PAPER
    2011 Volume E94.A Issue 2 Pages 473-479
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A color percept is a subjective experience and, in general, it is impossible for other people to tell what color percept someone is having. The present study demonstrates that simple image-classification analysis of brain activity obtained with functional magnetic resonance imaging (fMRI) makes it possible to tell which of four colors a subject is looking at. Our results also imply that color information in the human brain is coded by the responses of hue-selective neurons, not by combinations of red-green and blue-yellow hue components.
    Download PDF (1301K)
  • Takahiro SAITO, Yasutaka UEDA, Takashi KOMATSU
    Article type: INVITED PAPER
    2011 Volume E94.A Issue 2 Pages 480-492
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    As a basic tool for deriving a sparse representation of a color image from its atomic decomposition with a redundant dictionary, the authors have recently proposed a new kind of shrinkage technique, viz. color shrinkage, which utilizes inter-channel color dependence directly in the three-primary-color space. Among the various color-shrinkage schemes, this paper presents in particular the soft color-shrinkage and the hard color-shrinkage, natural extensions of the classic soft-shrinkage and hard-shrinkage respectively, and shows their advantages over existing approaches in which the classic shrinkage techniques are applied after a color transformation such as the opponent color transformation. Moreover, this paper presents applications of our color-shrinkage schemes to color-image processing in the redundant tight-frame transform domain, and shows their superiority over existing shrinkage approaches.
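For orientation, the classic scalar shrinkage operators that the color extensions build on can be sketched as follows; this is only the textbook soft- and hard-shrinkage applied coefficient-wise, not the authors' inter-channel color versions:

```python
import numpy as np

def soft_shrink(x, lam):
    """Classic soft-shrinkage: pull every coefficient toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_shrink(x, lam):
    """Classic hard-shrinkage: zero out coefficients with magnitude <= lam."""
    return np.where(np.abs(x) > lam, x, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5])
soft = soft_shrink(coeffs, 1.0)   # magnitudes shrink by 1; small ones become 0
hard = hard_shrink(coeffs, 1.0)   # large coefficients kept unchanged
```

The color versions in the paper couple the three primary channels inside these operators instead of thresholding each channel independently.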
    Download PDF (4494K)
  • Takuya IWANAMI, Ayano KIKUCHI, Keita HIRAI, Toshiya NAKAGUCHI, Norimic ...
    Article type: PAPER
    Subject area: Vision
    2011 Volume E94.A Issue 2 Pages 493-499
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Enhancing the visual experience of the user has recently become a new trend for TV displays. This trend stems from the fact that changes in ambient illumination while viewing a liquid crystal display (LCD) significantly affect human impressions. However, the psychological effects caused by the combination of a displayed video image and ambient illumination have not been investigated. In the present research, we clarify the relationship between ambient illumination and psychological effects while viewing video images displayed on an LCD, using a questionnaire-based semantic differential (SD) method and factor analysis. Six kinds of video images were displayed under illumination conditions differing in color and layout and were rated by 15 observers. The analysis made clear that, under illumination control around the LCD with a displayed video image, the feelings of ‘activity’ and ‘evaluating’ were rated higher than under a fluorescent ceiling condition. In particular, simultaneous illumination control around the display and the ceiling enhanced the feelings of ‘activity’ and ‘evaluating’ while maintaining ‘comfort.’ Moreover, the feeling of ‘activity’ under illumination control around the LCD and the ceiling while viewing a music video image was rated clearly higher than with a natural-scene video image.
    Download PDF (1779K)
  • Kazune AOIKE, Gosuke OHASHI, Yuichiro TOKUDA, Yoshifumi SHIMODAIRA
    Article type: PAPER
    Subject area: Evaluation
    2011 Volume E94.A Issue 2 Pages 500-508
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    An interactive support system for image quality enhancement that adjusts display equipment according to the user's own subjectivity is developed. The system derives image quality parameters matched to the user's preference simply by having the user select preferred images, without adjusting the parameters directly. In such a system, the larger the number of parameters, the more effective the system is. In this paper, lightness, color, and sharpness are used as the image quality parameters, and images are enhanced by increasing the number of parameters. For lightness enhancement, the shape of the tone curve is controlled by two adjustment parameters. For color enhancement, images are enhanced using two adjustment parameters controlled in the L*a*b* color space. The degree and coarseness of sharpness enhancement are adjusted by controlling the radius of the smoothing-filter mask and the weight of addition. To confirm the effectiveness of the proposed method, its image quality and derivation time are compared with those of a manual adjustment method.
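The sharpness control described above follows the familiar unsharp-masking pattern. A minimal grayscale sketch, assuming a simple box filter as the smoothing mask (the paper's exact filter is not specified here), with `radius` setting coarseness and `weight` setting degree:

```python
import numpy as np

def box_blur(img, radius):
    """Smooth with a (2*radius+1)^2 box filter built from shifted copies."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * radius + 1) ** 2

def unsharp(img, radius, weight):
    """Sharpen by adding back the weighted difference from the smoothed image."""
    img = img.astype(float)
    return img + weight * (img - box_blur(img, radius))
```

A larger `radius` enhances coarser structures; a larger `weight` strengthens the enhancement.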
    Download PDF (2653K)
  • Masaharu SATO, Yuukou HORITA
    Article type: PAPER
    Subject area: Evaluation
    2011 Volume E94.A Issue 2 Pages 509-518
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Our research focuses on an image quality assessment model based on MPEG-7 descriptors and the no-reference model. The model retrieves a reference image by image search and evaluates the subjective score as a pseudo reduced-reference model. MPEG-7 descriptors were originally designed for content retrieval, but we discovered that they can also be used for image quality assessment. We examined the performance of the proposed model, and the results revealed that this method has a higher performance rating than SSIM.
    Download PDF (1804K)
  • Kenji SUGIYAMA, Naoya SAGARA, Ryo OKAWA
    Article type: PAPER
    Subject area: Evaluation
    2011 Volume E94.A Issue 2 Pages 519-524
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    The non-reference method is widely useful for picture-quality estimation on the decoder side. In other work, we discussed pure non-reference estimation using only the decoded picture and proposed quantitative estimation methods for mosquito noise and block artifacts. In this paper, we discuss an estimation method for degradation in the temporal domain. In the proposed method, motion-compensated inter-picture differences and motion-vector activity are the basic parameters of temporal degradation. To obtain these parameters, accurate but unstable motion estimation is used, with a 1/16 reduction in processing power. In a stable original picture, these parameter values are similar from picture to picture, but temporal degradation caused by coding increases them: for intra-coded pictures the values increase significantly, whereas for inter-coded pictures they stay the same or decrease. Therefore, by taking the ratio between the peak frame, which is likely intra-coded, and the other frames, the absolute level of temporal degradation can be estimated. Finally, we evaluate the proposed method using pictures coded with different quantization.
    Download PDF (991K)
  • Takao JINNO, Kazuya MOURI, Masahiro OKUDA
    Article type: PAPER
    Subject area: Processing
    2011 Volume E94.A Issue 2 Pages 525-532
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this paper we propose a new tone-mapping method for HDR video. Two types of gamma tone mapping are blended to preserve local contrast over the entire luminance range. Our method achieves high-quality tone mapping, especially for HDR video that has a nonlinear response to scene radiance. Additionally, we apply it to an object-aware tone-mapping method for camera surveillance, which achieves high visibility of target objects in the tone-mapped HDR video. We examine the validity of our methods through simulation and comparison with conventional work.
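The core idea of blending two gamma curves can be illustrated with a toy operator; this is a hypothetical sketch, not the authors' exact method: a luminance-dependent weight mixes a shadow-lifting gamma with a milder highlight-preserving gamma.

```python
import numpy as np

def gamma_map(lum, gamma):
    """Map normalized luminance through a 1/gamma power curve."""
    return np.power(lum, 1.0 / gamma)

def blended_tmo(lum, gamma_low=3.0, gamma_high=1.5):
    """Blend two gamma tone curves; bright pixels lean on the milder curve."""
    lum = lum / lum.max()                  # normalize HDR luminance to [0, 1]
    w = lum                                # blending weight follows luminance
    return (1 - w) * gamma_map(lum, gamma_low) + w * gamma_map(lum, gamma_high)
```

The strong gamma preserves contrast in shadows while the mild gamma avoids washing out highlights; the blend keeps local contrast across the full range.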
    Download PDF (3100K)
  • Zisheng LI, Jun-ichi IMAI, Masahide KANEKO
    Article type: PAPER
    Subject area: Processing
    2011 Volume E94.A Issue 2 Pages 533-541
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In many real-world face recognition applications, only one training image per person may be available. Moreover, the test images may vary in facial expression and illumination, or may be partially occluded. However, most classical face recognition techniques assume that multiple images per person are available for training, and they have difficulty dealing with extreme expressions, illuminations, and occlusions. This paper proposes a novel block-based bag-of-words (BBoW) method to solve these problems. In our approach, a face image is partitioned into multiple blocks; dense SIFT features are then calculated and vector-quantized into visual words on each block. Finally, the histograms of codeword distribution on the local blocks are concatenated to represent the face image. Our method captures local features on each block while maintaining the holistic spatial information of the different facial components. Without any illumination compensation or image alignment, the proposed method achieves excellent face recognition results on the AR and XM2VTS databases. Experimental results show that, using only one neutral-expression frame per person for training, our method obtains the best performance reported so far on AR-database face images with extreme expressions, varying illumination, and partial occlusion. We also test our method on the standard and darkened sets of the XM2VTS database, achieving average rates of 100% and 96.10%, respectively.
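The block-wise histogram concatenation can be sketched as follows. This is a toy version: random vectors stand in for dense SIFT descriptors and a fixed random codebook stands in for the learned vocabulary.

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest codeword (Euclidean distance)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bbow_feature(block_descriptors, codebook):
    """Concatenate per-block codeword histograms into one face descriptor."""
    k = len(codebook)
    hists = []
    for desc in block_descriptors:           # one descriptor set per block
        words = quantize(desc, codebook)
        h = np.bincount(words, minlength=k).astype(float)
        hists.append(h / max(h.sum(), 1.0))  # normalize each block histogram
    return np.concatenate(hists)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))                      # 8 codewords, 16-D
blocks = [rng.normal(size=(20, 16)) for _ in range(4)]   # 4 blocks of 20 descriptors
feat = bbow_feature(blocks, codebook)                    # length 4 * 8 = 32
```

Because each block contributes its own histogram, the concatenated vector keeps coarse spatial layout while each histogram stays robust to local variation.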
    Download PDF (911K)
  • Yusuke HORIE, Yuta KAWAMURA, Akiyuki SEITA, Mitsuho YAMADA
    Article type: LETTER
    Subject area: Vision
    2011 Volume E94.A Issue 2 Pages 542-547
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    The purpose of this study was to clarify whether viewers can perceive digital deterioration while pursuing a rapidly moving, digitally compressed image. Among the various digital deteriorations, we studied the perception characteristics of false contours on four types of displays, i.e., CRT, PDP, EL, and LCD, using the gradation level and the speed of the moving image as parameters. It is known that 8 bits is not a high enough gradation resolution for still images, and 8 bits can be assumed insufficient for an image moving at less than 5deg/sec as well, since the tracking accuracy of smooth pursuit eye movement (SPEM) is very high for targets moving at less than 5deg/sec. Given these facts, we focused on images moving at more than 5deg/sec. In our results, images deteriorated by a false contour at gradation levels below 32 were perceived by every subject at almost all velocities, from 5deg/sec to 30deg/sec, on all four types of displays. However, the perception rate decreased drastically when the gradation level reached 64, with almost no subjects detecting deterioration at gradation levels above 64 at any velocity. Compared to the other displays, LCDs yielded relatively high recognition rates at a gradation level of 64, especially at lower velocities.
    Download PDF (568K)
  • Naoya SAGARA, Yousuke KASHIMURA, Kenji SUGIYAMA
    Article type: LETTER
    Subject area: Evaluation
    2011 Volume E94.A Issue 2 Pages 548-551
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    DCT encoding of images leads to block-artifact and mosquito-noise degradation in the decoded pictures. We propose an estimation method to determine the mosquito-noise block and level; however, this technique lacks sufficient linearity. To improve its performance, we use a sub-divided block to suppress edge effects. The resulting estimates are mostly linear with the quantization.
    Download PDF (286K)
  • Atsushi YAGUCHI, Tadaaki HOSAKA, Takayuki HAMAMOTO
    Article type: LETTER
    Subject area: Processing
    2011 Volume E94.A Issue 2 Pages 552-554
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In reconstruction-based super resolution, a high-resolution image is estimated using multiple low-resolution images with sub-pixel misalignments. Therefore, when only one low-resolution image is available, it is generally difficult to obtain a favorable image. This letter proposes a method for overcoming this difficulty in single-image super resolution. In our method, pixel values at sub-pixel locations are first interpolated on a patch-by-patch basis by support vector regression, with learning samples collected within the given image based on local similarities; we then solve the regularized reconstruction problem with a sufficient number of constraints. Evaluation experiments were performed on artificial and natural images, and the obtained high-resolution images reproduce the high-frequency components favorably, along with improved PSNRs.
    Download PDF (267K)
Special Section on Analog Circuit Techniques and Related Topics
  • Yasuyuki MATSUYA
    2011 Volume E94.A Issue 2 Pages 555
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Download PDF (103K)
  • Noboru ISHIHARA, Shuhei AMAKAWA, Kazuya MASU
    Article type: INVITED PAPER
    2011 Volume E94.A Issue 2 Pages 556-567
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    As great advances have been made in CMOS process technology over the past 20 years, RF CMOS circuits operating in the microwave band have rapidly developed from the component-circuit level to the multiband/multimode-transceiver level. In the next ten years, it is highly likely that the following will be realized: (i) versatile transceivers such as those used in software-defined radios (SDR), cognitive radios (CR), and reconfigurable radios (RR); (ii) systems that operate in the millimeter-wave or terahertz-wave region and achieve high-speed, large-capacity data transmission; and (iii) microminiaturized low-power RF communication systems that will be used extensively in everyday life. However, classical analog RF circuit design technology cannot be used for these devices, since it applies only to continuous-voltage, continuous-time signals. It is therefore necessary to integrate high-speed digital circuit design, which is based on discrete voltages and the discrete time domain, with analog design, both to achieve wideband operation and to compensate for signal distortions as well as variations in process, power supply voltage, and temperature. Moreover, since compact integration of the antenna and the interface circuit is considered indispensable for miniaturized micro RF communication systems, the construction of a design environment that integrates heterogeneous devices, such as micro-electro-mechanical systems (MEMS), becomes ever more important. In this paper, the history and current status of RF CMOS circuit development are reviewed, and the future of RF CMOS circuits is predicted.
    Download PDF (2044K)
  • Qing LIU, Yusuke TAKIGAWA, Satoshi KURACHI, Nobuyuki ITOH, Toshihiko Y ...
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 568-573
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A novel resonant circuit consisting of transformer-based switched variable inductors and switched accumulation-MOS (AMOS) varactors is proposed to realize an ultrawide-tuning-range voltage-controlled oscillator (VCO). The VCO IC is designed and fabricated in 0.11µm CMOS technology and fully evaluated on-wafer. The VCO exhibits a frequency tuning range as high as 92.6%, spanning from 1.20GHz to 3.27GHz at an operating voltage of 1.5V. A measured phase noise of -120dBc/Hz at 1MHz offset from the 3.1GHz carrier is obtained.
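The quoted 92.6% figure is consistent with the usual definition of fractional tuning range relative to the band center, 2(f_max - f_min)/(f_max + f_min); a quick check:

```python
f_min, f_max = 1.20e9, 3.27e9   # VCO tuning range endpoints, Hz

# Fractional tuning range referred to the band-center frequency
ftr = 2 * (f_max - f_min) / (f_max + f_min)
print(f"{ftr:.1%}")   # 92.6%
```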
    Download PDF (806K)
  • Jinhua LIU, Guican CHEN, Hong ZHANG
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 574-582
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a systematic analysis of the phase noise performance of the series quadrature oscillator (QOSC) using the time-variant impulse sensitivity function (ISF) model. The effective ISF for each noise source in the oscillator is derived mathematically. From these effective ISFs, an explicit closed-form expression for the phase noise due to the total thermal noise in the series QOSC is derived, and the phase noise contribution from the flicker noise in the regenerative and coupling transistors is also obtained. The phase noise contributions from the thermal noise and the flicker noise are verified by SpectreRF simulations.
    Download PDF (613K)
  • Tuya WUREN, Takashi OHIRA
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 583-591
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a Q-factor analysis for FET oscillators employing distributed-constant elements. We replace the inductor of a lumped-constant Colpitts circuit with a shorted microstrip transmission line for high-frequency applications. Taking into account the FET's transconductance and the transmission line's loss due to both the conducting metal and the dielectric substrate, we deduce the Q-factor formula for the entire circuit in the steady oscillation state. We compare the computed results for an oscillator employing a uniform shorted microstrip line with those of the original LC oscillator. To obtain an even higher Q factor, we modify the shape of the transmission line into nonuniform forms, i.e., step-, tapered-, and partially-tapered stubs. The non-uniformity introduces some complexity into the impedance analysis. We exploit a piecewise-uniform approximation for the tapered part of the microstrip stub, and then insert the asymptotic expressions obtained for the stub's impedance and its frequency derivatives into the active Q-factor formula. Applying these formulations, we calculate the tuning capacitance, the necessary FET transconductance, and the achievable active Q factor, and finally explore oscillator performance with microstrip stubs of different shapes and sizes.
    Download PDF (2090K)
  • Shouhei KOUSAI, Daisuke MIYASHITA, Junji WADATSUMI, Rui ITO, Takahiro ...
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 592-602
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A wideband, low-noise, and highly linear transmitter for multi-mode radio is presented. An envelope injection scheme with a CMOS amplifier is developed to obtain sufficient linearity for complex modulation schemes such as OFDM, and to achieve low noise for concurrent operation of more than one standard. An active matching technique with a doubly terminated LPF topology is also presented to realize wide bandwidth and low power consumption, and to eliminate off-chip components without increasing die area. A multi-mode transmitter is implemented in a 0.13µm CMOS technology with an active area of 1.13mm2. The third-order intermodulation product is improved by 17dB at -3dBm output by the envelope injection scheme. The transmitter achieves an EVM of less than -29.5dB at -3dBm output from 0.2 to 7.2GHz while consuming only 69mW. The transmitter is also tested with the UMTS, 802.11b, WiMax, 802.11a, and 802.11n standards, and satisfies the EVM, ACLR, and spectrum specifications.
    Download PDF (2107K)
  • Jiangtao SUN, Qing LIU, Yong-Ju SUH, Takayuki SHIBATA, Toshihiko YOSHI ...
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 603-610
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A broadband balanced frequency doubler operating from 22GHz to 30GHz has been demonstrated in a 0.25-µm SOI SiGe BiCMOS technology. A measured fundamental-frequency suppression of greater than 30dBc is achieved by an internal low-pass LC filter. In addition, a pair of matching circuits in parallel with the LO inputs results in high suppression at low input drive power. A maximum measured conversion gain of -6dB is obtained at an input drive power as low as -1dBm. The results indicate that the proposed frequency doubler operates over a broad band and achieves high fundamental-frequency suppression with low input drive power.
    Download PDF (1819K)
  • Masayoshi TAKAHASHI, Keiichi YAMAMOTO, Norio CHUJO, Ritsurou ORIHASHI
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 611-616
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A 2GHz gain equalizer for analog signal transmission using a novel gain compensation method is described in this paper. The method is based on feedforward compensation by a low-pass filter, which improves the gain-equalizing performance by subtracting the low-pass-filtered signal from the directly passed signal at the end of a transmission line. The advantage of the proposed method over the conventional one is that the gain is equalized with a smaller THD at higher frequencies by using a low-pass instead of a high-pass filter. In this circuit, the peak gain is adjustable from 0 to 2.4dB, and the frequency of the peak gain can be controlled up to 2GHz by varying the value of an external capacitor. The circuit also achieves a THD 5dB better than that of conventional circuits.
    Download PDF (2489K)
  • Yosuke TAKEUCHI, Koichi ICHIGE, Koichi MIYAMOTO, Yoshio EBINE
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 617-624
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a novel automated microwave filter tuning method based on successive optimization of phase and amplitude characteristics. We develop an optimization procedure to determine how far the adjusting screws of a filter should be rotated. The proposed method consists of two stages: coarse and fine tuning. In the first stage, coarse tuning, the phase response error of the target filter is minimized so that the filter roughly approximates the ideal bandpass characteristics. In the second stage, fine tuning, two different amplitude response errors are minimized in turn, after which the resulting filter closely approximates the ideal characteristics. The performance of the proposed tuning procedure is evaluated through experiments on actual filter tuning.
    Download PDF (1205K)
  • Retdian NICODIMUS, Shigetaka TAKAGI
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 625-632
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper proposes a technique to reduce the capacitance spread in switched-capacitor (SC) filters. The proposed technique is based on simple charge distribution and partial charge transfer, and is applicable to various integrator topologies. An implementation example on an existing integrator topology and a design example of a 2nd-order SC low-pass filter are given to demonstrate the performance of the proposed technique. The design example shows that a filter designed using the proposed technique has approximately 23% less total capacitance than an SC low-pass filter designed with a conventional capacitance-spread reduction technique.
    Download PDF (560K)
  • Young-Chan JANG
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 633-638
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A self-calibrating per-pin phase adjuster, which requires neither feedback from the slave chip nor a multi-phase clock in the master and slave chips, is proposed for a high-speed parallel chip-to-chip interface with source-synchronous double-data-rate (DDR) signaling. It achieves not only per-pin phase adjustment but also a 90° phase shift of the strobe signal for source-synchronous DDR signaling. For this self-calibration, the phase adjuster measures and compensates only the relative mismatched delay among channels by utilizing on-chip time-domain reflectometry (TDR). Variable delay lines, finite state machines, and a test signal generator are therefore additionally required for the proposed phase adjuster. In addition, a power-gating receiver is used to reduce the discontinuity effect of the channel, including the parasitic components of the chip package. To verify the proposed self-calibrating per-pin phase adjuster, transceivers with 16 data signals plus strobe and clock signals for a source-synchronous DDR interface were implemented in a 60nm 1-poly 3-metal CMOS DRAM process with a 1.5V supply. Each phase skew between the strobe and the 16 data signals was corrected to within 0.028UI at a 1.6-Gb/s data rate over a point-to-point channel.
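For scale, the residual skew of 0.028UI can be converted to absolute time at the 1.6-Gb/s data rate, since one unit interval (UI) is one bit period:

```python
data_rate = 1.6e9            # bits per second
ui = 1.0 / data_rate         # one unit interval, seconds
skew = 0.028 * ui            # residual per-pin skew after calibration
print(f"UI = {ui * 1e12:.0f} ps, skew = {skew * 1e12:.1f} ps")   # UI = 625 ps, skew = 17.5 ps
```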
    Download PDF (2174K)
  • Mohammad YAVARI
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 639-645
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a novel time-domain design procedure for fast-settling three-stage nested-Miller-compensated (NMC) amplifiers. In the proposed design methodology, the amplifier is designed to settle within a definite time period with a given settling accuracy while optimizing both the power consumption and the silicon die area. Detailed design equations are presented, and circuit-level simulation results are provided to verify the usefulness of the proposed design procedure with respect to previously reported design schemes.
    Download PDF (517K)
  • Yanzhao MA, Hongyi WANG, Guican CHEN
    Article type: PAPER
    2011 Volume E94.A Issue 2 Pages 646-652
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a step-up/step-down DC-DC converter with three operation modes to achieve high efficiency and a small output ripple voltage. A constant-time buck-boost mode, inserted between the buck mode and the boost mode, is proposed to achieve a smooth transition. With the proposed mode, the output ripple voltage is significantly reduced when the input voltage is close to the output voltage. Moreover, the novel control scheme minimizes the conduction loss by reducing the average inductor current, and the switching loss by making the converter operate like a buck or boost converter. The small-signal model of the converter is also derived to guide the design of the compensation network. The converter is designed in a 0.5µm CMOS n-well process and can regulate the output voltage over an input voltage range from 2.5V to 5.5V with a maximum power efficiency of 96%. Simulation results show that the proposed converter exhibits an output ripple voltage of 28mV in the transition mode.
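The three-mode operation can be sketched as a simple mode selector; the threshold band here is hypothetical, and the paper's actual controller additionally handles the constant-time switching and compensation:

```python
def select_mode(v_in, v_out, band=0.15):
    """Pick a converter mode from the input/output voltage ratio.

    A +/-band window around v_in == v_out engages the intermediate
    buck-boost mode so the hand-off between buck and boost is smooth.
    """
    if v_in > v_out * (1 + band):
        return "buck"
    if v_in < v_out * (1 - band):
        return "boost"
    return "buck-boost"

modes = [select_mode(v, 3.3) for v in (5.0, 3.3, 2.5)]
# -> ['buck', 'buck-boost', 'boost']
```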
    Download PDF (1280K)
Regular Section
  • Nagato UEDA, Eiji WATANABE, Akinori NISHIHARA
    Article type: PAPER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 653-660
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper proposes a synthesis method for 2-channel IIR paraunitary filter banks by successive extraction of 2-port lattice sections. Given a power-symmetric transfer function, a filter bank is realized as a cascade of paraunitary 2-port lattice sections. The method can synthesize both odd- and even-order filters with Butterworth or elliptic characteristics, and the number of multiplications per second can also be reduced.
    Download PDF (866K)
  • Masayoshi NAKAMOTO, Kohei SAYAMA, Mitsuji MUNEYASU, Tomotaka HARANO, S ...
    Article type: PAPER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 661-670
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    For copyright protection, a watermark signal is embedded in host images with a secret key, and correlation is used to judge the presence of the watermark signal during detection. This paper treats a discrete wavelet transform (DWT)-based image watermarking method under a specified false-positive probability. We propose a new watermarking method that improves the detection performance by using not only positive but also negative correlation. We also present a statistical analysis of the detection performance that takes the false-positive probability into account and proves the effectiveness of the proposed method. Experimental results verify the statistical analysis and show that the method improves robustness against several attacks.
    Download PDF (2630K)
  • Taichi YOSHIDA, Seisuke KYOCHI, Masaaki IKEHARA
    Article type: PAPER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 671-679
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this paper, we propose a novel lattice structure for two-dimensional (2D) nonseparable linear-phase paraunitary filter banks (LPPUFBs), called 2D GenLOT. Muramatsu et al. previously proposed a lattice structure for 2D nonseparable LPPUFBs with efficient frequency responses. Our proposed structure, however, requires fewer design parameters and lower computational cost than the conventional one. Through design examples and simulation results, we show that both filter banks have comparable frequency responses and coding gains.
    Download PDF (744K)
  • Yu GAO, Kil To CHONG
    Article type: PAPER
    Subject area: Systems and Control
    2011 Volume E94.A Issue 2 Pages 680-687
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A min-max model predictive controller is developed in this paper for the tracking control of wheeled mobile robots (WMRs) subject to nonholonomic constraints in an obstacle-free environment. The problem is simplified by neglecting the vehicle dynamics and considering only the steering system. The linearized tracking-error kinematic model, including uncertain disturbances, is formed in the robot frame. The control policy is then derived from the worst-case optimization of a quadratic cost function that penalizes the tracking error and control variables at each sampling time over a finite horizon. As a result, the input sequence must be feasible for all possible disturbance realizations. The performance of the control algorithm is verified via computer simulations with a predefined trajectory and compared to a common discrete-time sliding-mode control law. The results show that the proposed method achieves better tracking performance and convergence.
    Download PDF (448K)
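The linearized tracking-error kinematic model mentioned in the abstract above can be illustrated with a standard unicycle formulation. The sketch below is a toy under that assumption: it uses the common robot-frame error model from the WMR tracking literature, not necessarily the authors' exact formulation, and the min-max optimization itself is omitted.

```python
import numpy as np

def tracking_error_kinematics(e, v, w, v_r, w_r):
    """Continuous-time tracking-error kinematics of a unicycle WMR in the
    robot frame (a standard model; the paper's formulation may differ).
    e = [e_x, e_y, e_theta]; (v, w) actual and (v_r, w_r) reference inputs."""
    ex, ey, eth = e
    return np.array([
        w * ey - v + v_r * np.cos(eth),
        -w * ex + v_r * np.sin(eth),
        w_r - w,
    ])

def linearize(v_r, w_r, dt):
    """Euler-discretized linearization about zero error and reference inputs:
    e[k+1] = A e[k] + B u[k], with u = [v - v_r, w - w_r]."""
    A = np.eye(3) + dt * np.array([[0.0,  w_r, 0.0],
                                   [-w_r, 0.0, v_r],
                                   [0.0,  0.0, 0.0]])
    B = dt * np.array([[-1.0, 0.0],
                       [0.0,  0.0],
                       [0.0, -1.0]])
    return A, B

A, B = linearize(v_r=1.0, w_r=0.2, dt=0.05)
```

For small errors, one Euler step of the nonlinear model with reference inputs agrees with `A @ e`, which is what an MPC scheme like the one described would propagate over its horizon.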
  • Shao-Chang HUANG, Ke-Horng CHEN
    Article type: PAPER
    Subject area: VLSI Design Technology and CAD
    2011 Volume E94.A Issue 2 Pages 688-695
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    The cascode NMOS architecture is tested in this paper by the Human Body Model (HBM), the Machine Model (MM), and the Transmission Line Pulse generator (TLP). For the TLP, detailed silicon data are analyzed with respect to several parameters, such as the first triggering-on voltage (Vt1), the first triggering-on current (It1), the holding voltage (Vh), and the TLP I-V curve. Besides these three kinds of Electrostatic Discharge (ESD) events, the device gate oxide breakdown voltage is also taken into consideration, and the correlations between HBM, MM, and TLP are observed. To explain the bipolar transistor turn-on mechanisms, two models are proposed in this paper. Typically, substrate resistance decreases as the technology advances. On the one hand, for processes older than 0.35µm, such as 0.5µm and 1µm, ESD designers can use pick-up insertions to make integrated circuits (ICs) turn on uniformly; the NPN Side Model dominates ESD performance in such older processes. On the other hand, in 0.18µm and newer processes, such as 0.15µm, 0.13µm, and 90nm, ESD designers must use non-pick-up insertion structures; the NPN Central Model dominates ESD performance in such processes. Combining both models, the bipolar turn-on mechanism can be summarized as “ESD currents flow from side regions to central regions.” Besides the turn-on of the parasitic bipolar transistor, another reason ESD designers should use non-pick-up insertions in deep sub-micron processes is the decrease in gate oxide breakdown voltage. As IC dimensions scale down, the gate oxide becomes thinner, and a thinner gate oxide has a lower breakdown voltage. To avoid gate oxide damage under ESD stress, ESD designers should strive to decrease the turn-on resistance of ESD devices; protection devices with low turn-on resistance can endure larger currents at the same TLP voltage. In this paper, silicon data show that the turn-on resistance of the non-pick-up insertion cascode NMOS transistor is smaller than that of the pick-up insertion cascode NMOS transistor. Although the NPN turn-on mechanisms discovered in this paper are based on the cascode NMOS structure, ESD designers can apply the same theory to other kinds of ESD protection structures, such as a single-poly gate-grounded NMOS transistor (GGNMOST): pick-up insertion architecture for NMOS transistors in low-end processes, and non-pick-up insertion architecture for GGNMOSTs in high-end processes, yielding optimized ESD performance.
    Download PDF (1142K)
  • Tasuku NISHIHARA, Takeshi MATSUMOTO, Masahiro FUJITA
    Article type: PAPER
    Subject area: VLSI Design Technology and CAD
    2011 Volume E94.A Issue 2 Pages 696-705
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Bounded model checking is a widely used formal technique in both hardware and software verification. However, it cannot be applied when the bound (the number of time frames to be analyzed) becomes large, so deep bugs that are observed only through very long counterexamples cannot be detected. This paper presents a method for efficiently concatenating multiple bounded model checking results with symbolic simulation. A bounded model checking run with a large bound is recursively decomposed into multiple runs with smaller bounds, and symbolic simulation on each counterexample supports smooth connections between them. A strong heuristic for the proposed method that targets deep bugs is also presented; it can be applied together with other efficient bounded model checking methods, since it does not touch the basic bounded model checking algorithm.
    Download PDF (1413K)
  • Chia-Chun TSAI, Chung-Chieh KUO, Trong-Yen LEE
    Article type: PAPER
    Subject area: VLSI Design Technology and CAD
    2011 Volume E94.A Issue 2 Pages 706-716
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    As VLSI manufacturing technology shrinks to 65nm and below, reducing the yield loss induced by via failures is a critical issue in design for manufacturability (DFM). Semiconductor foundries highly recommend using the double-via insertion (DVI) method to improve the yield and reliability of designs. This work applies the DVI method in the post-stage of an X-architecture clock routing to improve the double-via insertion rate. The proposed DVI-X algorithm constructs the bipartite graphs of the partitioned clock routing layout with single vias and redundant-via candidates (RVCs). Then, DVI-X applies the augmenting path approach, associated with the construction of the maximal cliques, to obtain the matching solution from the bipartite graphs. Experimental results on benchmarks show that DVI-X achieves a 3% higher double-via insertion rate and 68% shorter running time than existing works. Moreover, since the inserted double vias affect the clock skew, a skew tuning technique is further applied to achieve zero skew.
    Download PDF (2159K)
  • Dae Hyun YUM, Pil Joong LEE
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2011 Volume E94.A Issue 2 Pages 717-724
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A sanitizable signature scheme allows a semi-trusted party, designated by a signer, to modify pre-determined parts of a signed message without interacting with the original signer. To date, many sanitizable signature schemes have been proposed based on various cryptographic techniques. However, previous works are usually built upon the paradigm of dividing a message into submessages and applying a cryptographic primitive to each submessage. This methodology entails computation time (and often signature length) linear in the number of sanitizable submessages. We present a new approach to constructing sanitizable signatures with constant overhead for signing and verification, irrespective of the number of submessages, both in computational cost and in signature size.
    Download PDF (272K)
  • Rafael DOWSLEY, Jörn MÜLLER-QUADE, Akira OTSUKA, Goichiro HA ...
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2011 Volume E94.A Issue 2 Pages 725-734
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper presents a non-interactive verifiable secret sharing scheme (VSS) tolerating a dishonest majority based on data pre-distributed by a trusted authority. As an application of this VSS scheme we present very efficient unconditionally secure protocols for performing multiplication of shares based on pre-distributed data which generalize two-party computations based on linear pre-distributed bit commitments. The main results of this paper are a non-interactive VSS, a simplified multiplication protocol for shared values based on pre-distributed random products, and non-interactive zero knowledge proofs for arbitrary polynomial relations. The security of the schemes is proved using the UC framework.
    Download PDF (554K)
  • Mototsugu NISHIOKA, Naohisa KOMATSU
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2011 Volume E94.A Issue 2 Pages 735-760
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In [1], Bellare, Boldyreva, and Micali addressed the security of public-key encryptions (PKEs) in a multi-user setting (called the BBM model in this paper). They showed that although the indistinguishability in the BBM model is induced from that in the conventional model, its reduction is far from tight in general, and this brings a serious key length problem. In this paper, we discuss PKE schemes in which the IND-CCA security in the BBM model can be obtained tightly from the IND-CCA security. We call such PKE schemes IND-CCA secure in the BBM model with invariant security reductions (briefly, SR-invariant IND-CCABBM secure). These schemes never suffer from the underlying key length problem in the BBM model. We present three instances of an SR-invariant IND-CCABBM secure PKE scheme: the first is based on the Fujisaki-Okamoto PKE scheme [7], the second is based on the Bellare-Rogaway PKE scheme [3], and the last is based on the Cramer-Shoup PKE scheme [5].
    Download PDF (510K)
  • Xiaoni DU, Zhixiong CHEN
    Article type: PAPER
    Subject area: Information Theory
    2011 Volume E94.A Issue 2 Pages 761-765
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Some new generalized cyclotomic sequences defined by C. Ding and T. Helleseth have been proven to exhibit a number of good randomness properties. In this paper, we determine the defining pairs of these sequences of length p^m (p prime, m ≥ 2) with order two, from which we then obtain their trace representation. Their linear complexity can thus be derived using Key's method.
    Download PDF (209K)
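The linear complexity that the paper above derives analytically (via the trace representation and Key's method) can be cross-checked numerically for any concrete binary sequence with the Berlekamp-Massey algorithm. The sketch below is a generic GF(2) implementation, not the paper's method; the example sequence is an ordinary m-sequence, used only to exercise the code.

```python
def berlekamp_massey(s):
    """Return the linear complexity of the binary sequence s, i.e. the
    length of the shortest LFSR over GF(2) that generates it."""
    n = len(s)
    c, b = [0] * n, [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                 # current LFSR length, last length-change index
    for i in range(n):
        d = s[i]                 # discrepancy between s[i] and LFSR prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                    # prediction failed: adjust connection polynomial
            t = c[:]
            for j in range(n - (i - m)):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# m-sequence of period 7 generated by x^3 + x + 1 (two full periods)
seq = [1, 0, 0, 1, 1, 1, 0] * 2
print(berlekamp_massey(seq))   # 3: an m-sequence of degree 3 has linear complexity 3
```

For the sequences studied in the paper, running this on one full period of length p^m would reproduce the value that Key's method yields in closed form.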
  • Yifeng TU, Pingzhi FAN, Li HAO, Xiyang LI
    Article type: PAPER
    Subject area: Information Theory
    2011 Volume E94.A Issue 2 Pages 766-772
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Sequences with good correlation properties are of substantial interest in many applications. By interleaving a perfect array with shift sequences, a new method of constructing binary array sets with a zero correlation zone (ZCZ) is presented. The interleaving operation can be performed not only row-by-row but also column-by-column on the perfect array. The resultant ZCZ binary array set is optimal or almost optimal with respect to the theoretical bound. The new method provides a flexible choice of the rectangular ZCZ and the set size.
    Download PDF (360K)
  • Chia-Yu LIN, Chih-Chun WEI, Mong-Kai KU
    Article type: PAPER
    Subject area: Coding Theory
    2011 Volume E94.A Issue 2 Pages 773-780
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this paper, an efficient encoding scheme for dual-diagonal LDPC codes is proposed. Our two-way parity bit correction algorithm breaks up the data dependency within the encoding process to achieve higher throughput, lower latency and better hardware utilization. The proposed scheme can be directly applied to dual-diagonal codes without matrix modifications. FPGA encoder prototypes are implemented for IEEE 802.11n and 802.16e codes. Results show that the proposed architecture outperforms existing designs in terms of throughput and throughput/area ratio.
    Download PDF (1302K)
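The data dependency that the paper above breaks up comes from the dual-diagonal parity part of the parity-check matrix, which lets parity bits be computed by a serial back-substitution. The sketch below illustrates only that baseline recursion, on a toy code with a purely bidiagonal parity part; the actual 802.11n/802.16e matrices add one irregular parity column, and the authors' two-way correction algorithm is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 8, 6                                   # toy sizes: K info bits, M parity bits
H1 = rng.integers(0, 2, size=(M, K))          # random systematic part of H
Hp = np.eye(M, dtype=int) + np.eye(M, k=-1, dtype=int)  # bidiagonal parity part

s = rng.integers(0, 2, size=K)                # information bits
lam = H1 @ s % 2                              # per-row partial parity checks

# Back-substitution through the dual diagonal:
# row 0 gives p[0] = lam[0]; row i gives p[i] = lam[i] XOR p[i-1]
p = np.zeros(M, dtype=int)
p[0] = lam[0]
for i in range(1, M):
    p[i] = lam[i] ^ p[i - 1]

x = np.concatenate([s, p])                    # systematic codeword [s | p]
syndrome = np.hstack([H1, Hp]) @ x % 2        # all-zero for a valid codeword
```

The chain `p[i] = lam[i] ^ p[i-1]` is exactly the serial dependency that limits throughput and that a parallel correction scheme aims to break.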
  • Hao NI, Dongju LI, Tsuyoshi ISSHIKI, Hiroaki KUNIEDA
    Article type: PAPER
    Subject area: Image
    2011 Volume E94.A Issue 2 Pages 781-788
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    It is theoretically impossible to restore the original fingerprint image from a sequence of line images captured by a line sensor. However, in this paper we propose a unique fingerprint-image-generation algorithm, which derives fingerprint images from sequences of line images captured at different swipe speeds by the line sensor. A continuous image representation, called trajectory, is used in modeling distortion of raw fingerprint images. Sequences of line images captured from the same finger are considered as sequences of points, which are sampled on the same trajectory in N-dimensional vector space. The key point here is not to reconstruct the original image, but to generate identical images from the trajectory, which are independent of the swipe speed of the finger. The method for applying the algorithm in a practical application is also presented. Experimental results on a raw fingerprint image database from a line sensor show that the generated fingerprint images are independent of swipe speed, and can achieve remarkable matching performance with a conventional minutiae matcher.
    Download PDF (1789K)
  • Yoshinobu MAEDA, Kentaro TANI, Nao ITO, Michio MIYAKAWA
    Article type: PAPER
    Subject area: Human Communications
    2011 Volume E94.A Issue 2 Pages 789-794
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this paper we show that the performance workload of button-input interfaces does not monotonically increase with the number of buttons; rather, there is an optimal number of buttons at which the performance workload is minimized. As the number of buttons increases, it becomes more difficult to search for the target button, and the user's cognitive workload therefore increases. As the number of buttons decreases, the user's cognitive workload decreases but the operational workload increases, i.e., more operations are required because one button has to serve several functions. The optimal number of buttons emerges from combining the cognitive and operational workloads. The experiments used to measure performance allowed us to fit a multiple regression equation using two observable variables related to the cognitive and operational workloads. The equation explained the data well, and the optimal number of buttons was found to be about 8, similar to the number adopted by commercial cell phone manufacturers. This clarifies that an interface with a number of buttons close to the number of letters in the alphabet is not necessarily easy to use.
    Download PDF (670K)
  • Shangce GAO, Qiping CAO, Masahiro ISHII, Zheng TANG
    Article type: PAPER
    Subject area: Neural Networks and Bioengineering
    2011 Volume E94.A Issue 2 Pages 795-805
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This paper proposes a probabilistic modeling learning algorithm for the local search approach to Multiple-Valued Logic (MVL) networks. The learning model (PMLS) has two phases: a local search (LS) phase and a probabilistic modeling (PM) phase. The LS searches by updating the parameters of the MVL network; it is equivalent to a gradient descent on the error measure and leads to a local minimum of the error that represents a good solution to the problem. Once the LS is trapped in a local minimum, the PM phase generates a new starting point for further local search, which is expected to be guided to a promising area by the probability model. Thus, the proposed algorithm can escape from local minima and search further for better results. We test the algorithm on many randomly generated MVL networks. Simulation results show that the proposed algorithm outperforms other improved local search learning methods, such as stochastic dynamic local search (SDLS) and chaotic dynamic local search (CDLS).
    Download PDF (659K)
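The two-phase LS+PM loop described in the abstract above can be sketched generically. The toy below uses the OneMax problem in place of an MVL network and a PBIL-style per-bit probability model in place of the paper's model; both are stand-in assumptions, not the authors' algorithm.

```python
import random
random.seed(7)

N = 40            # toy problem size (OneMax stands in for an MVL network)
RESTARTS = 5

def fitness(x):   # OneMax: number of ones (to be maximized)
    return sum(x)

def local_search(x):
    """LS phase: greedy single-bit-flip hill climbing to a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            if fitness(x[:i] + [x[i] ^ 1] + x[i + 1:]) > fitness(x):
                x[i] ^= 1
                improved = True
    return x

# PM phase: a per-bit probability model, nudged toward the best solution
# found so far and sampled to generate new starting points for LS.
p = [0.5] * N
best = None
for _ in range(RESTARTS):
    x = local_search([int(random.random() < pi) for pi in p])
    if best is None or fitness(x) > fitness(best):
        best = x
    p = [0.9 * pi + 0.1 * xi for pi, xi in zip(p, best)]

print(fitness(best))   # 40: the OneMax optimum
```

OneMax has no deceptive local optima, so here LS alone suffices; the structure is only meant to show how the PM phase biases restarts toward promising regions.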
  • Youngsuk SHIN
    Article type: PAPER
    Subject area: Neural Networks and Bioengineering
    2011 Volume E94.A Issue 2 Pages 806-812
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Blood pressure is the measurement of the force exerted by blood against the walls of the arteries. Hypertension is a major risk factor of cardiovascular diseases. The systolic and diastolic blood pressures obtained from the oscillometric method could carry clues about hypertension. However, blood pressure is influenced by individual traits such as physiology, the geometry of the heart, body figure, gender and age. Therefore, consideration of individual traits is a requisite for reliable hypertension monitoring. The oscillation waveforms extracted from the cuff pressure reflect individual traits in terms of oscillation patterns that vary in size and amplitude over time. Thus, uniform features for individual traits were extracted from the oscillation patterns, and they were applied to evaluate systolic and diastolic blood pressures using two feedforward neural networks. The measurements of systolic and diastolic blood pressures from the two neural networks were compared with the average values of systolic and diastolic blood pressures obtained by two nurses using the auscultatory method. The recognition performance was based on the difference between the blood pressures measured by the auscultation method and by the proposed method with two neural networks. The recognition performance for systolic blood pressure was found to be 98.2% for ±20mmHg, 93.5% for ±15mmHg, and 82.3% for ±10mmHg, based on maximum negative amplitude. The recognition performance for diastolic blood pressure was found to be 100% for ±20mmHg, 98.8% for ±15mmHg, and 88.2% for ±10mmHg, based on maximum positive amplitude. In our results, systolic blood pressure showed more fluctuation than diastolic blood pressure in terms of individual traits, and subjects with prehypertension or hypertension (systolic blood pressure) showed a steeper-slope pattern in the first 1/3 section of the feature windows than normal subjects. On the other hand, subjects with prehypertension or hypertension (diastolic blood pressure) showed a steeper-slope pattern in the front 2/3 section of the feature windows than normal subjects. This paper presented a novel blood pressure measurement system that can monitor hypertension using personalized traits. Our study can serve as a foundation for reliable hypertension diagnosis and management based on consideration of individual traits.
    Download PDF (1129K)
  • Shoichi KITAGAWA, Yoshinobu KAJIKAWA
    Article type: LETTER
    Subject area: Engineering Acoustics
    2011 Volume E94.A Issue 2 Pages 813-816
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this letter, the compensation ability of nonlinear distortions for loudspeaker systems is demonstrated using dynamic distortion measurement. Two linearization methods, using a Volterra filter and a Mirror filter, are compared. The conventional evaluation utilizes swept multi-sinusoidal waves. However, it is unsatisfactory because wideband signals such as those of music and voices are usually applied to loudspeaker systems. Hence, the authors use dynamic distortion measurement employing white noise. Experimental results show that the two linearization methods can effectively reduce nonlinear distortions for wideband signals.
    Download PDF (325K)
  • Victor GOLIKOV, Olga LEBEDEVA
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 817-822
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This work extends the constant false alarm rate (CFAR) detection methodology to detection in the presence of two independent interference sources with unknown powers. The proposed detector is analyzed on the assumption that the clutter and jammer covariance structures are known and have relatively low rank. The limited-dimensional subspace-based approach leads to a robust false alarm rate (RFAR) detector. The RFAR detection algorithm is developed by an adaptation and extension of Hotelling's principal-component method. The detector performance loss and false alarm stability loss due to unknown clutter and jammer powers have been evaluated for an example scenario.
    Download PDF (466K)
  • Hing Cheung SO, Kenneth Wing Kin LUI
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 823-825
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    Frequency estimation of a complex single-tone in additive white Gaussian noise from irregularly-spaced samples is addressed. In this Letter, we study the periodogram and the weighted phase averager, which are standard solutions in the uniform sampling scenario, for tackling this problem. It is shown that the estimation performance of both approaches can attain the optimum benchmark of the Cramér-Rao lower bound, although the former technique has a smaller threshold signal-to-noise ratio.
    Download PDF (147K)
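The periodogram estimator studied in the letter above extends directly to irregularly-spaced samples: the estimate is the frequency maximizing |Σ_n x[n] e^{-j2πf t_n}|². The sketch below is a minimal grid-search version with simulated data; the letter's threshold and CRLB analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 64))       # irregularly-spaced sample times
f_true = 12.3
x = np.exp(2j * np.pi * f_true * t)           # complex single-tone
x += 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))  # AWGN

# Periodogram over a frequency grid: P(f) = |sum_n x[n] exp(-j 2 pi f t[n])|^2
freqs = np.linspace(0.0, 30.0, 3001)
P = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x) ** 2
f_hat = freqs[np.argmax(P)]
```

In practice the coarse grid maximum would be refined (e.g. by a local search), since the grid spacing limits accuracy well before the Cramér-Rao bound does.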
  • Victor GOLIKOV, Olga LEBEDEVA, Andres CASTILLEJOS-MORENO, Volodymyr PO ...
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 826-828
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This Letter presents matched subspace detection in the presence of a Gaussian background with known covariance structure but different variances under hypotheses H0 and H1. The performance degradation has been evaluated for the following mismatches between the actual and designed parameters: the background variance under hypothesis H1 and the one-lag correlation coefficient of the background. It has been shown that, for a prescribed false alarm probability and a given signal-to-background ratio, the detectability depends strongly on the fill factor of targets when the mode signal matrix has high rank. These results have also been justified via Monte Carlo simulations for an example scenario.
    Download PDF (344K)
  • Chun-Hsien WU
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 829-832
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    This letter presents a method to enable precoder design for intrablock MMSE equalization within a previously proposed oblique projection framework. The joint design of the linear transceiver with optimum block delay detection is developed. Simulation results validate the proposed approach and show the superior BER performance of the optimized transceiver.
    Download PDF (156K)
  • Mohd Hairi HALMI, Mohd Yusoff ALIAS, Teong Chee CHUAH
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 833-837
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    A method for estimating channel covariance from the uplink received signal power for downlink transmit precoding in multiple-input multiple-output (MIMO) frequency division duplex (FDD) wireless systems is proposed. Unlike other MIMO precoding schemes, the proposed scheme does not require a feedback channel or pilot symbols, i.e. knowledge of the channel covariance is made available at the downlink transmitter through direct estimation from the uplink received signal power. This leads to low complexity and improved system efficiency. It is shown that the proposed scheme performs better or on par with other practical schemes and only suffers a slight performance degradation when compared with systems with perfect knowledge of the channel covariance.
    Download PDF (349K)
  • Eu-Suk SHIM, Young-Hwan YOU
    Article type: LETTER
    Subject area: Digital Signal Processing
    2011 Volume E94.A Issue 2 Pages 838-841
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    In this letter, we propose a low-complexity coarse frequency offset estimation scheme for an orthogonal frequency division multiplexing (OFDM) system using non-uniformly phased pilot symbols. In our approach, the pilot symbols used for frequency estimation are grouped into a number of pilot subsets so that the phase of the pilots in each subset is unique. We show via simulations that this design achieves not only a low computational load but also performance comparable to that of the conventional estimator.
    Download PDF (374K)
  • Wan Yeon LEE, Kyong Hoon KIM
    Article type: LETTER
    Subject area: Systems and Control
    2011 Volume E94.A Issue 2 Pages 842-845
    Published: February 01, 2011
    Released on J-STAGE: February 01, 2011
    JOURNAL RESTRICTED ACCESS
    The proposed scheduling scheme minimizes the mean energy consumption of a real-time parallel task, where the task has a probabilistic computation amount and can be executed concurrently on multiple cores. The scheme determines an appropriate number of cores to allocate to the task execution and the instantaneous frequency supplied to the allocated cores. Evaluation shows that the scheme saves a substantial amount of the energy consumed by a previous method that minimizes the mean energy consumption on a single core.
    Download PDF (234K)