-
Yoshinori HATORI
2012 Volume E95.A Issue 8 Pages
1223
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
-
Michael A. KRISS
Article type: INVITED PAPER
2012 Volume E95.A Issue 8 Pages
1224-1229
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count; even moderate-cost (∼$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
-
Kazuya HAYASE, Hiroshi FUJII, Yukihiro BANDOH, Hirohisa JOZAWA
Article type: INVITED PAPER
2012 Volume E95.A Issue 8 Pages
1230-1239
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Scalable video coding offers efficient video transmission to a variety of display devices over heterogeneous and error-prone networks. Scalable video coding has been intensively researched in recent years, and the state-of-the-art international coding standard with scalability has been standardized as SVC, an extension of H.264/AVC. This paper summarizes the recent advanced research that has been done for improving the quality and reducing the complexity of scalable video coding (including SVC), as well as for improving quality assessment techniques. It is intended to give researchers a critical, technical overview of what is required to develop more efficient scalable video coding in the future.
-
Toru YAMADA, Takao NISHITANI
Article type: PAPER
Subject area: Quality Metrics
2012 Volume E95.A Issue 8 Pages
1240-1246
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This paper presents a no-reference (NR) video-quality estimation method for compressed videos that use inter-frame prediction. The proposed method does not need bitstream information; only pixel information of the decoded videos is used for the video-quality estimation. An activity value, which indicates the variance of luminance values, is calculated for every fixed-size pixel block. The activity difference between an intra-coded frame and its adjacent frame is calculated and employed for the video-quality estimation. In addition, a blockiness level and a blur level are also estimated at every frame by analyzing pixel information only. The estimated blockiness and blur levels are taken into account to improve the quality-estimation accuracy of the proposed method. Experimental results show that the proposed method achieves accurate video-quality estimation without access to the original video, which is free of compression artifacts. The correlation coefficient between subjective video quality and estimated quality is 0.925. The proposed method is suitable for automatic video-quality checks when service providers cannot access the original videos.
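As a rough illustration of the activity feature described in this abstract, the sketch below computes per-block luminance variances and their mean absolute difference between an intra-coded frame and an adjacent frame; the 16-pixel block size and the plain mean absolute difference are assumptions, since the abstract does not fix them.

```python
import numpy as np

def block_activity(luma, block=16):
    """Variance of luminance in each block x block region."""
    h, w = luma.shape
    h, w = h - h % block, w - w % block                 # crop to a multiple of the block size
    blocks = luma[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))                      # 2-D map of per-block activities

def activity_difference(intra_frame, adjacent_frame, block=16):
    """Mean absolute difference of block activities between two frames."""
    a0 = block_activity(intra_frame.astype(np.float64), block)
    a1 = block_activity(adjacent_frame.astype(np.float64), block)
    return np.mean(np.abs(a0 - a1))
```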
-
Osamu SUGIMOTO, Sei NAITO, Yoshinori HATORI
Article type: PAPER
Subject area: Quality Metrics
2012 Volume E95.A Issue 8 Pages
1247-1255
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this paper, we propose a novel method of measuring the perceived picture quality of H.264 coded video based on parametric analysis of the coded bitstream. Parametric analysis means that the proposed method utilizes only bitstream parameters to evaluate video quality and does not access the baseband signal (pixel-level information) of the decoded video. The proposed method extracts the quantizer scale, macroblock type, and transform coefficients from each macroblock. These parameters are used to calculate spatiotemporal image features that reflect the perception of coding artifacts, which have a strong relation to subjective quality. A computer simulation shows that the proposed method can estimate the subjective quality with a correlation coefficient of 0.923, whereas the PSNR metric, used as a benchmark, correlates with the subjective quality at a coefficient of 0.793.
-
Naoya SAGARA, Takayuki SUZUKI, Kenji SUGIYAMA
Article type: LETTER
Subject area: Quality Metrics
2012 Volume E95.A Issue 8 Pages
1256-1258
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
The non-reference method is widely useful for estimating picture quality on the decoder side. In this paper, we discuss an estimation method for spatial blur that divides the 64 coefficients of an 8-by-8 DCT into frequency zones and compares the zones by the absolute values of their coefficients. It is recognized that absolute blur estimation is possible with only the decoded picture.
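A minimal sketch of the kind of comparison the letter describes: the 64 coefficients of an 8-by-8 DCT are grouped into frequency zones and a high-to-low zone energy ratio is used as a blur indicator. The number of zones, the diagonal zoning rule, and the ratio itself are assumptions; the letter's exact comparison is not given here.

```python
import numpy as np
from scipy.fft import dctn

def zone_energy(block8x8, n_zones=4):
    """Sum of |DCT coefficients| in diagonal frequency zones of an 8x8 block."""
    c = np.abs(dctn(block8x8.astype(np.float64), norm='ortho'))
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    zone = np.minimum((u + v) * n_zones // 15, n_zones - 1)   # low -> high frequency
    return np.array([c[zone == z].sum() for z in range(n_zones)])

def blur_score(frame, n_zones=4):
    """Ratio of high- to low-frequency zone energy, averaged over 8x8 blocks."""
    h, w = (d - d % 8 for d in frame.shape)
    ratios = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            e = zone_energy(frame[y:y + 8, x:x + 8], n_zones)
            if e[0] > 0:
                ratios.append(e[-1] / e[0])
    return float(np.mean(ratios)) if ratios else 0.0
```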
-
Masaharu SATO, Yuukou HORITA
Article type: LETTER
Subject area: Quality Metrics
2012 Volume E95.A Issue 8 Pages
1259-1263
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Our research is focused on examining a video quality assessment model based on the MPEG-7 descriptor. Video quality is estimated using several features based on the predicted frame quality, such as the average value, worst value, best value, and standard deviation, together with the predicted frame rate obtained from the descriptor information. As a result, video quality can be assessed with high prediction accuracy: correlation coefficient=0.94, standard deviation of error=0.24, maximum error=0.68, and outlier ratio=0.23.
-
Masaharu SATO, Yuukou HORITA
Article type: LETTER
Subject area: Quality Metrics
2012 Volume E95.A Issue 8 Pages
1264-1269
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Our research is focused on examining a stereoscopic quality assessment model for stereoscopic images with disparate quality in the left and right images for glasses-free stereo vision. In this paper, we examine an objective assessment model of 3-D images, considering the difference in image quality between the viewpoints generated by disparity-compensated coding. The overall stereoscopic image quality can be estimated using only the predicted values of the left and right 2-D image qualities, based on the MPEG-7 descriptor information, without using any disparity information. As a result, the stereoscopic still image quality is assessed with high prediction accuracy: correlation coefficient=0.98 and average error=0.17.
-
Minghui WANG, Xun HE, Xin JIN, Satoshi GOTO
Article type: PAPER
Subject area: Coding & Processing
2012 Volume E95.A Issue 8 Pages
1270-1279
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC). In this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is a piecewise image, consisting of sharp contours and smooth interiors. Contours in a depth image are more important than the interior in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into contour and interior. Two different procedures are employed to code contours and interiors, respectively. A vector-based strategy is applied to code the contour lines: straight line segments in contours cost few bits since they are regarded as vectors, while pixels that do not lie on straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (obtained by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
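The "interior painting" step lends itself to a short illustration: depth values inside a region are modeled by a formula whose coefficients are obtained by regression. The sketch below fits a linear plane by least squares, which is only one of the linear or nonlinear models the paper allows; the mask-based region representation is an assumption.

```python
import numpy as np

def paint_interior(depth, mask):
    """Fit d(x, y) = a*x + b*y + c to interior pixels (mask==True) by regression
    and return the repainted interior; a hypothetical stand-in for 'interior painting'."""
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs].astype(np.float64), rcond=None)
    painted = depth.astype(np.float64).copy()
    painted[ys, xs] = A @ coeffs                         # replace interior by the fitted plane
    return painted, coeffs
```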
-
Ning JIANG, Jiu XU, Satoshi GOTO
Article type: PAPER
Subject area: Coding & Processing
2012 Volume E95.A Issue 8 Pages
1280-1287
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In recent years, local pattern based features have attracted increasing interest in object detection and recognition systems. The Local Binary Pattern (LBP) feature is widely used in texture classification and face detection, but the original definition of LBP is not suitable for human detection. In this paper, we propose a novel feature named gradient local binary patterns (GLBP) for human detection. In this feature, the original 256 local binary patterns are reduced to 56 patterns. These 56 patterns, named uniform patterns, are used to generate a 56-bin histogram. When computing the histogram, the gradient value of each pixel is used as the weight for accumulating the 56 bins, whereas in LBP-based features the weight is always the same. Experiments performed on the INRIA dataset show that the proposed GLBP feature is more discriminative than the histogram of oriented gradients (HOG), Semantic Local Binary Patterns (S-LBP), and histogram of template (HOT). In our experiments, the window size is fixed, which means the performance can be further improved by boosting methods. In addition, the computation of the GLBP feature is parallel, which makes it easy to accelerate in hardware. These factors make the GLBP feature suitable for real-time pedestrian detection.
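A hedged sketch of a GLBP-style histogram: the 8-neighbour LBP code of each pixel is computed, the 56 non-constant uniform codes (at most two circular bit transitions, excluding all-zeros and all-ones) are mapped to bins, and the gradient magnitude is accumulated as the weight. The thresholding convention (neighbour >= centre) and the use of np.gradient for the gradient magnitude are assumptions, not the paper's exact definitions.

```python
import numpy as np

def glbp_histogram(gray):
    """56-bin gradient-weighted uniform-LBP histogram (sketch)."""
    g = gray.astype(np.float64)
    c = g[1:-1, 1:-1]                                    # centre pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for i, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << i)        # 8-bit LBP code per pixel
    gy, gx = np.gradient(g)
    weight = np.hypot(gx, gy)[1:-1, 1:-1]                # gradient magnitude as the weight

    def transitions(v):                                  # circular 0/1 transitions in an 8-bit code
        ring = (v >> np.arange(8)) & 1
        return int(np.sum(ring != np.roll(ring, 1)))

    uniform = [v for v in range(256) if transitions(v) <= 2 and v not in (0, 255)]  # 56 codes
    hist = np.zeros(len(uniform))
    for b, v in enumerate(uniform):
        hist[b] = weight[code == v].sum()
    return hist / (hist.sum() + 1e-12)
```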
-
Chen LIU, Xin JIN, Tianruo ZHANG, Satoshi GOTO
Article type: PAPER
Subject area: Coding & Processing
2012 Volume E95.A Issue 8 Pages
1288-1296
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
High-definition (HD) videos have become increasingly popular on portable devices in recent years. Due to the resolution mismatch between HD video sources and the relatively low-resolution screens of portable devices, HD videos are usually fully decoded and then down-sampled (FDDS) for display, which not only increases the cost in both computational power and memory bandwidth, but also loses the details of the video content. In this paper, an encoder-unconstrained partial decoding scheme for H.264/AVC is presented to solve this problem by decoding only the region related to the object of interest (OOI), which is defined by the user. A simplified compression-domain tracking method is utilized to ensure that the OOI is located in the center of the display area. The decoded partial area (DPA) adaptation, reference block relocation (RBR), and co-located temporal Intra prediction (CTIP) methods are proposed to improve the visual quality of the DPA with low complexity. The simulation results show that the proposed partial decoding scheme provides an average of 50.16% decoding time reduction compared to the full decoding process. The displayed region also presents the original HD granularity of the OOI. The proposed partial decoding scheme is especially useful for displaying HD video on devices for which battery life is a crucial factor.
-
Seok-Min CHAE, Sung-Hak LEE, Hyuk-Ju KWON, Kyu-Ik SOHNG
Article type: LETTER
Subject area: Coding & Processing
2012 Volume E95.A Issue 8 Pages
1297-1301
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Recently, a new image appearance model, named iCAM06, was developed for High-Dynamic-Range (HDR) image rendering. The dynamic range of an HDR image needs to be mapped onto the range of the output device on which it will be displayed; this is called tone reproduction or tone mapping. iCAM06, a representative HDR rendering algorithm, also uses tone compression for image reproduction on the dynamic range of output devices. However, iCAM06 causes a white point shift during its tone compression process. Therefore, we propose a compensation method for white point shifts using a corrected channel gain. Experimental results show that the proposed method performs better than iCAM06.
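The white-point compensation idea can be illustrated with a von Kries-style channel gain, scaling each channel so that the shifted white maps back to the reference white. This is only a generic stand-in; the letter's corrected channel gain is applied within the iCAM06 tone compression stage, which is not reproduced here.

```python
import numpy as np

def correct_white_point(rgb, reference_white, rendered_white):
    """Scale each channel so the rendered (shifted) white maps back to the reference
    white; a von Kries-style gain correction, not the exact iCAM06-based formulation."""
    gain = np.asarray(reference_white, dtype=np.float64) / np.asarray(rendered_white, dtype=np.float64)
    return np.clip(rgb.astype(np.float64) * gain, 0.0, None)
```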
-
Hiroshi KATAYAMA, Danya SUGAI, Takayuki HAMAMOTO
Article type: LETTER
Subject area: Coding & Processing
2012 Volume E95.A Issue 8 Pages
1302-1305
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this paper, we propose a high-accuracy motion estimation method based on the spatio-temporal gradient method using high frame-rate images. In the method, we adopt spatial gradients whose estimated errors, judged from the previous motion vectors, are low. We evaluate the proposed method and confirm its effectiveness. Finally, we apply the method to super-resolution as an application.
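For reference, a spatio-temporal gradient (Lucas-Kanade style) motion estimate over a window is sketched below under the usual small-motion assumption. The letter's additional step of selecting spatial gradients with low estimated errors using the previous motion vectors is omitted.

```python
import numpy as np

def gradient_motion(prev, curr, win=None):
    """Least-squares solution of Ix*u + Iy*v + It = 0 over a window."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Iy, Ix = np.gradient(prev)                 # spatial gradients
    It = curr - prev                           # temporal gradient
    if win is not None:
        (y0, y1), (x0, x1) = win
        Ix, Iy, It = Ix[y0:y1, x0:x1], Iy[y0:y1, x0:x1], It[y0:y1, x0:x1]
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                                # motion in pixels/frame (small-motion assumption)
```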
-
Yuta KAWAMURA, Yusuke HORIE, Keisuke SANO, Hiroya KODAMA, Naoki TSUNOD ...
Article type: LETTER
Subject area: Vision
2012 Volume E95.A Issue 8 Pages
1306-1309
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Three-dimensional (3D) movies have become very popular in movie theaters and for home viewing. To date, there has been no report on the effects of the continual vergence eye movement that occurs when viewing a 3D movie from beginning to end. First, we analyzed the influence of viewing a 3D movie for several hours on vergence eye movement. At the same time, we investigated the influence of long viewing on the human body, using the Simulator Sickness Questionnaire (SSQ) and critical fusion frequency (CFF). The results suggest that the vergence stable time after a saccade when viewing a long movie was influenced by the viewing time and depended on the content of the movie. Differences were also seen in the SSQ and CFF between the beginning and the end of the 3D movie.
-
Yosuke SUGIURA, Arata KAWAMURA, Youji IIGUNI
Article type: PAPER
Subject area: Digital Signal Processing
2012 Volume E95.A Issue 8 Pages
1310-1316
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This paper proposes a comb filter design method that utilizes two linear-phase FIR filters to flexibly adjust the comb filter's frequency response. The first FIR filter is used to individually adjust the notch gains, which denote the local minimum gains of the comb filter's frequency response. The second FIR filter is used to design the elimination bandwidths for the individual notch gains. We also derive an efficient comb filter by incorporating these two FIR filters with an all-pass filter, which is used in a conventional comb filter to accurately align the nulls with the undesired harmonic frequencies. Several design examples of the derived comb filter show the effectiveness of the proposed design method.
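For context, the sketch below evaluates the frequency response of a conventional feedback comb filter with notches at harmonics of fs/N, which is the kind of filter the proposed method extends; the notch radius rho and the order N are illustrative values, and the two linear-phase FIR filters that shape individual notch gains and bandwidths are not modeled here.

```python
import numpy as np
from scipy.signal import freqz

def comb_notch(N, rho=0.95):
    """Conventional comb filter with notches at k*fs/N (k = 0, 1, ...)."""
    b = (1 + rho) / 2 * np.concatenate(([1.0], np.zeros(N - 1), [-1.0]))  # (1 - z^-N) scaled
    a = np.concatenate(([1.0], np.zeros(N - 1), [-rho]))                  # 1 - rho*z^-N
    return b, a

b, a = comb_notch(N=16, rho=0.95)
w, h = freqz(b, a, worN=4096)
# gain at the first notch frequency and midway between notches
print("gain at 2*pi/16:", abs(h[np.argmin(np.abs(w - 2 * np.pi / 16))]))
print("gain at 3*pi/16:", abs(h[np.argmin(np.abs(w - 3 * np.pi / 16))]))
```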
-
Takashi MATSUBARA, Hiroyuki TORIKAI
Article type: PAPER
Subject area: Nonlinear Problems
2012 Volume E95.A Issue 8 Pages
1317-1328
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
A generalized version of sequential-logic-circuit-based neuron models is presented, where the dynamics of the model are described by an asynchronous cellular automaton. Thanks to the generalizations in this paper, the model can exhibit various neuron-like waveforms of the membrane potential in response to excitatory and inhibitory stimuli. Also, the model can reproduce four groups of biological and model neurons, which are classified based on the existence of bistability and subthreshold oscillations, as well as their underlying bifurcation mechanisms.
-
Akihito MATSUO, Hiroyuki ASAHARA, Takuji KOUSAKA
Article type: PAPER
Subject area: Nonlinear Problems
2012 Volume E95.A Issue 8 Pages
1329-1336
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This paper clarifies the bifurcation structure of the chaotic attractor in an interrupted circuit with switching delay from theoretical and experimental viewpoints. First, we introduce the circuit model and its dynamics. Next, we define the return map in order to investigate the bifurcation structure of the chaotic attractor. Finally, we discuss the dynamical effect of the switching delay on the existence region of the chaotic attractor compared with that of a circuit with ideal switching.
-
Jun Gyu LEE, Zule XU, Shoichi MASUI
Article type: PAPER
Subject area: VLSI Design Technology and CAD
2012 Volume E95.A Issue 8 Pages
1337-1346
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
We propose a loop design optimization methodology for fourth-order fractional-N phase-locked loop (PLL) frequency synthesizers featuring a short settling time of 5µsec for applications in active RFID (radio frequency identification) and automobile smart-key systems. To establish the optimized design flow, equations presenting the relationship between the specification and the PLL loop parameters in terms of settling time, loop bandwidth, phase margin, and phase noise are summarized. The proposed design flow overcomes the settling-time inaccuracy of conventional second-order approximation methods by obtaining the accurate relationship between settling time and loop bandwidth with the MATLAB Control System Toolbox for fourth-order PLLs. The proposed flow also features worst-case design by taking into account the process, voltage, and temperature (PVT) variations in the loop filter components, and considers the tradeoff between phase noise and area. The three-step optimization process consists of 1) the derivation of the accurate relationship between the settling time and loop bandwidth for various PVT conditions, 2) the derivation of phase noise and area as functions of the area-dominant filter capacitance, and 3) the derivation of all PLL loop component values. The optimized design result is compared with circuit simulations of an actually designed fourth-order fractional-N PLL in a 1.8V 0.18µm CMOS technology. The error between the design and simulation for the settling time is reduced from 0.63µsec with the second-order approximation to 0.23µsec with the fourth-order optimization, which proves the validity of the proposed method for high-speed settling operations.
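The settling-time step of the flow can be mimicked in a few lines: build a fourth-order closed-loop transfer function, compute its step response, and read off the time the response stays inside a tolerance band. The loop below uses a generic type-II open loop with illustrative pole, zero, and gain values and SciPy in place of the MATLAB Control System Toolbox; it is not the paper's actual loop design.

```python
import numpy as np
from scipy.signal import step

# Generic type-II fourth-order open loop: double integrator, one zero, two parasitic poles.
wz, wp1, wp2 = 2 * np.pi * 50e3, 2 * np.pi * 500e3, 2 * np.pi * 2e6   # illustrative values
wc = 2 * np.pi * 200e3                        # target unity-gain crossover
K = wz * wc                                   # rough gain for |G(j*wc)| ~ 1

num_ol = K * np.array([1 / wz, 1.0])
den_ol = np.polymul([1.0, 0.0, 0.0], np.polymul([1 / wp1, 1.0], [1 / wp2, 1.0]))
num_cl = num_ol
den_cl = np.polyadd(den_ol, num_ol)           # closed loop H = G / (1 + G)

t = np.linspace(0.0, 20e-6, 20001)
t, y = step((num_cl, den_cl), T=t)
tol = 0.01 * abs(y[-1])                       # 1% settling band
outside = np.where(np.abs(y - y[-1]) > tol)[0]
t_settle = t[outside[-1] + 1] if outside.size and outside[-1] + 1 < t.size else t[0]
print(f"estimated settling time: {t_settle * 1e6:.2f} us")
```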
-
Xin MAN, Takashi HORIYAMA, Shinji KIMURA
Article type: PAPER
Subject area: VLSI Design Technology and CAD
2012 Volume E95.A Issue 8 Pages
1347-1358
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Clock gating is supported by commercial tools as a power optimization feature based on the guard signal described in HDL (the structural method). However, the identification of control signals for gated registers is hard, designer-intensive work. Besides, since the clock gating cells also consume power, it is imperative to minimize the number of inserted clock gating cells and their switching activities for power optimization. In this paper, we propose an automatic multi-stage clock gating algorithm with an ILP (Integer Linear Programming) formulation, including clock gating control candidate extraction, constraint construction, and optimum control signal selection. With multi-stage clock gating, unnecessary clock pulses to clock gating cells can be blocked by other clock gating cells, so that the switching activity of the clock gating cells can be reduced. We find that any multi-stage control signals are also single-stage control signals, and any combination of signals can be selected from the single-stage candidates. The proposed method can be applied to 3 or more cascaded stages. The multi-stage clock gating optimization problem is formulated as constraints in LP format for selecting the cascaded clock-gating order among the multi-stage candidate combinations, and a commercial ILP solver (IBM CPLEX) is applied to obtain the control signals for each register with minimum switching activity. These signals are used to generate a gate-level description with guarded registers from the original design, and commercial synthesis and layout tools are applied to obtain the circuit with multi-stage clock gating. For a set of benchmark circuits and a Low Density Parity Check (LDPC) Decoder (6.6k gates, 212 F.F.s), the proposed method is applied and the actual power consumption is estimated using Synopsys NanoSim after layout. On average, a 31% actual power reduction has been obtained compared with the original designs with structural clock gating, and more than 10% improvement has been achieved for some circuits compared with the single-stage optimization method. CPU time for optimum multi-stage control selection is several seconds for up to 25k variables in LP format. By applying the proposed clock gating, area can also be reduced since the multiplexors controlling register inputs are eliminated.
-
Shusuke YOSHIMOTO, Takuro AMASHITA, Shunsuke OKUMURA, Koji NII, Masahi ...
Article type: PAPER
Subject area: Reliability, Maintainability and Safety Analysis
2012 Volume E95.A Issue 8 Pages
1359-1365
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This paper presents measurement results of bit error rate (BER) and soft error rate (SER) improvement on 150-nm FD-SOI 7T/14T (7-transistor/14-transistor) SRAM test chips. The reliability of the 7T/14T SRAM can be dynamically changed by a control signal depending on an operating condition and application. The 14T dependable mode allocates one bit in a 14T cell and improves the BER in a read operation and SER in a retention state, simultaneously. We investigate its error rate mitigating mechanisms using Synopsys TCAD simulator. In our measurements, the minimum operating voltage was improved by 100mV, the alpha-induced SER was suppressed by 80.0%, and the neutron-induced SER was decreased by 34.4% in the 14T dependable mode over the 7T normal mode.
-
Jung Hee CHEON, Stanislaw JARECKI, Jae Hong SEO
Article type: PAPER
Subject area: Cryptography and Information Security
2012 Volume E95.A Issue 8 Pages
1366-1378
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n>2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection only practical for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both the communication and the computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n²k) bits of communication and Õ(n²k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust. This means that the protocol outputs the desired result even if some corrupted players leave during the execution of the protocol.
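A plaintext illustration of the polynomial representation underlying the protocol: each dataset is encoded as a polynomial whose roots are its elements, and membership corresponds to the polynomial evaluating to zero. The real protocol performs these operations on encrypted coefficients over a group and uses fast multipoint interpolation/evaluation algorithms; the naive floating-point version below only shows the encoding.

```python
import numpy as np

def set_polynomial(dataset):
    """Coefficients of f(x) = prod (x - s) over the dataset; its roots encode the set."""
    return np.poly(np.asarray(dataset, dtype=np.float64))

def evaluate_on_points(coeffs, points):
    """Evaluate the set polynomial on many points at once (the multipoint-evaluation
    step that the protocol speeds up; done here naively and in the clear)."""
    return np.polyval(coeffs, np.asarray(points, dtype=np.float64))

f = set_polynomial([3, 7, 11])
print(evaluate_on_points(f, [3, 5, 7]))    # -> [ 0. 24.  0.]: zeros mark set membership
```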
-
Dukjae MOON, Deukjo HONG, Daesung KWON, Seokhie HONG
Article type: PAPER
Subject area: Cryptography and Information Security
2012 Volume E95.A Issue 8 Pages
1379-1389
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
We assume that the domain extender is the Merkle-Damgård (MD) scheme and the message is padded with a ‘1’ and a minimum number of ‘0’s, followed by fixed-size length information, so that the length of the padded message is a multiple of the block length. Under this assumption, we analyze the security of the hash modes when the compression function follows the Davies-Meyer (DM) scheme and the underlying block cipher is the plain Feistel or Misty scheme, or a generalized Feistel or Misty scheme with a Substitution-Permutation (SP) round function. We do this work based on Meet-in-the-Middle (MitM) preimage attack techniques, and develop several useful initial structures.
-
Seiko ARITA
Article type: PAPER
Subject area: Cryptography and Information Security
2012 Volume E95.A Issue 8 Pages
1390-1401
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In resetting attacks against a proof system, a prover or a verifier is reset and forced to use the same random tape on various inputs as many times as an adversary may want. The recent deployment of cloud computing gives these attacks a new importance. This paper shows that argument systems for any NP language that are both resettably-sound and resettable zero-knowledge are possible via a constant-round protocol in the BPK model. For that sake, we define and construct a resettably-extractable conditional commitment scheme.
-
Shota NAKANO, Shingo YAMAGUCHI
Article type: PAPER
Subject area: Concurrent Systems
2012 Volume E95.A Issue 8 Pages
1402-1411
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
There are various existing methods for translating timed Petri nets into timed automata. However, there is a trade-off between the amount of description and the size of the state space, and both affect the feasibility of modeling and of analyses such as model checking. In this paper, we propose a new translation method from timed Petri nets to timed automata. Our method translates a timed Petri net into an automaton with the following features: (i) the number of locations is one; (ii) each edge represents the firing of a transition; (iii) each state, implemented with clocks and variables, corresponds one-to-one to a state of the timed Petri net. With these features, the amount of description is of linear order and the size of the state space is of the same order as that of the Petri net. We applied our method to three Petri net models of signaling pathways and compared it with existing methods from the viewpoints of the amount of description and the size of the state space. The comparison results show that our method keeps a good balance between the amount of description and the size of the state space, and that it is effective when checking properties of timed Petri nets.
-
Changxing LIN, Jian ZHANG, Beibei SHAO
Article type: LETTER
Subject area: Digital Signal Processing
2012 Volume E95.A Issue 8 Pages
1412-1415
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This letter presents the architecture of a multi-gigabit parallel demodulator suitable for demodulating high-order QAM modulated signals and easy to implement on an FPGA platform. The parallel architecture is based on a frequency-domain implementation of the matched filter and timing phase correction. A parallel FIFO-based delete-keep algorithm is proposed for timing synchronization, while a reduced-constellation phase-frequency-detector-based parallel decision feedback PLL is designed for carrier synchronization. A fully pipelined parallel adaptive blind equalization algorithm is also proposed. Their parallel implementation structures suitable for the FPGA platform are investigated. In a demonstration of a 2Gbps demodulator for 16QAM modulation, the architecture is implemented and validated on a Xilinx V6 FPGA platform with a performance loss of less than 2dB.
-
Gu-Min JEONG, Chanwoo MOON, Hyun-Sik AHN
Article type: LETTER
Subject area: Systems and Control
2012 Volume E95.A Issue 8 Pages
1416-1419
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
This letter investigates an iterative learning control with advanced output data (ADILC) scheme for non-minimum phase (NMP) systems when the number of NMP zeros is unknown. ADILC has a simple learning structure that can be applied to both minimum phase and NMP systems. However, in the latter case, it is assumed that the number of NMP zeros is already known. In this letter, we propose an ADILC scheme in which the number of NMP zeros is unknown. Based on the input-to-output mapping, the learning starts from the relative degree. When the input becomes larger than a certain upper bound, we redesign the input update law, which consists of the relative degree and the estimated number of NMP zeros.
-
Weiqin YING, Xing XU, Yuxiang FENG, Yu WU
Article type: LETTER
Subject area: Numerical Analysis and Optimization
2012 Volume E95.A Issue 8 Pages
1420-1425
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
A conical area evolutionary algorithm (CAEA) is presented to further improve computational efficiencies of evolutionary algorithms for bi-objective optimization. CAEA partitions the objective space into a number of conical subregions and then solves a scalar subproblem in each subregion that uses a conical area indicator as its scalar objective. The local Pareto optimality of the solution with the minimal conical area in each subregion is proved. Experimental results on bi-objective problems have shown that CAEA offers a significantly higher computational efficiency than the multi-objective evolutionary algorithm based on decomposition (MOEA/D) while CAEA competes well with MOEA/D in terms of solution quality.
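The partition step can be sketched as an angle-based assignment of bi-objective vectors to subregions around an ideal point; the conical area indicator itself and CAEA's update rules are not reproduced here, and the equal angular spacing and ideal-point choice are assumptions.

```python
import numpy as np

def assign_subregions(objs, n_sub, ideal=None):
    """Assign bi-objective vectors (rows of objs) to angular subregions of the
    objective space, measured around an (assumed) ideal point."""
    objs = np.asarray(objs, dtype=np.float64)
    ideal = objs.min(axis=0) if ideal is None else np.asarray(ideal, dtype=np.float64)
    d = objs - ideal
    theta = np.arctan2(d[:, 1], d[:, 0])                 # in [0, pi/2] for minimization
    edges = np.linspace(0.0, np.pi / 2, n_sub + 1)
    return np.clip(np.searchsorted(edges, theta, side='right') - 1, 0, n_sub - 1)
```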
-
Sangchoon KIM
Article type: LETTER
Subject area: Communication Theory and Signals
2012 Volume E95.A Issue 8 Pages
1426-1429
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this letter, a post-detection signal-to-noise ratio (SNR) is considered for transmit antenna selection when a sorted QR decomposition (SQRD) algorithm is used for signal detection in spatial multiplexing (SM) ultra-wideband (UWB) multiple-input multiple-output systems. The post-detection SNR expression is obtained using a QR factorization algorithm based on a sorted Gram-Schmidt process. The employed antenna selection criterion is to select the antennas with the largest minimum post-detection SNR value. It is shown via simulations that antenna selection significantly enhances the BER performance of the SQRD-based SM UWB systems on a log-normal multipath fading channel.
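A sketch of the quantities involved: a sorted Gram-Schmidt QR factorization, a post-detection SNR per layer taken as |r_kk|^2/σ^2, and exhaustive selection of the transmit-antenna subset with the largest minimum post-detection SNR. The ordering heuristic (smallest remaining column norm first) and the unit-energy symbol assumption are ours, not necessarily the letter's exact formulation.

```python
import numpy as np
from itertools import combinations

def sqrd(H):
    """Sorted QR decomposition via a sorted Gram-Schmidt process."""
    H = H.astype(np.complex128).copy()
    m, n = H.shape
    Q = np.zeros((m, n), dtype=np.complex128)
    R = np.zeros((n, n), dtype=np.complex128)
    order = list(range(n))
    for i in range(n):
        k = i + int(np.argmin(np.linalg.norm(H[:, i:], axis=0)))  # weakest remaining column
        H[:, [i, k]] = H[:, [k, i]]
        order[i], order[k] = order[k], order[i]
        R[:i, [i, k]] = R[:i, [k, i]]
        R[i, i] = np.linalg.norm(H[:, i])
        Q[:, i] = H[:, i] / R[i, i]
        R[i, i + 1:] = Q[:, i].conj() @ H[:, i + 1:]
        H[:, i + 1:] -= np.outer(Q[:, i], R[i, i + 1:])
    return Q, R, order

def min_post_snr(H, noise_var):
    """Minimum post-detection SNR over layers, SNR_k ~ |r_kk|^2 / sigma^2."""
    _, R, _ = sqrd(H)
    return np.min(np.abs(np.diag(R)) ** 2) / noise_var

def select_antennas(H_full, n_select, noise_var):
    """Pick the transmit-antenna subset with the largest minimum post-detection SNR."""
    best = max(combinations(range(H_full.shape[1]), n_select),
               key=lambda idx: min_post_snr(H_full[:, list(idx)], noise_var))
    return list(best)
```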
-
Li LI, Changqing XU, Pingzhi FAN, Jian HE
Article type: LETTER
Subject area: Communication Theory and Signals
2012 Volume E95.A Issue 8 Pages
1430-1434
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this paper, the resource allocation problem for proportional fairness in hybrid Cognitive Radio (CR) systems is studied. In OFDMA-based CR systems, traditional resource allocation algorithms cannot guarantee proportional rates among CR users (CRUs) in each OFDM symbol because the number of available subchannels might be smaller than the number of CRUs in some OFDM symbols. To deal with this time-varying nature of the available spectrum resources, a hybrid CR scheme in which CRUs are allowed to use subchannels in both spectrum holes and primary user (PU) bands is adopted, and a resource allocation algorithm is proposed to guarantee proportional rates among CRUs with no undue interference to PUs.
-
Nikhil JOSHI, Adrish BANERJEE, Jeong Woo LEE
Article type: LETTER
Subject area: Communication Theory and Signals
2012 Volume E95.A Issue 8 Pages
1435-1438
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
The convergence behavior of turbo APPM (TAPPM) decoding is analyzed by using a three-dimensional extrinsic information transfer (EXIT) chart and the decoding trajectory. The signal-to-noise ratio (SNR) threshold, below which iterative decoding fails to converge, is predicted by using the 3-D EXIT chart analysis. Bit error rate performances of TAPPM schemes validate the EXIT-chart-based SNR threshold predictions. Outer constituent codes of TAPPM are chosen to show the lowest SNR threshold with the aid of EXIT chart analysis.
-
Kyowon JEONG, Jungwoo LEE
Article type: LETTER
Subject area: Mobile Information Network and Personal Communications
2012 Volume E95.A Issue 8 Pages
1439-1443
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this paper, we propose low-complexity channel parameter tracking methods for adaptive OFDM MMSE channel estimation. Even though MMSE estimation is one of the most accurate channel estimation methods, it requires several channel parameters, including the Doppler frequency, RMS (root mean square) delay spread, and SNR. To implement the MMSE estimation, these parameters must be tracked first. We propose methods to track the above three channel parameters. For Doppler frequency estimation, we propose an extremum method with a parabolic model, which is a key contribution of this paper. We also analyze the computational complexity of the proposed algorithms. Simulations show that the proposed tracking algorithm tracks the parameters well and performs better than the conventional fixed-parameter algorithm in terms of BER performance. The BER performance of the adaptive MMSE estimation is better than that of a fixed-parameter (robust) MMSE estimator by about 5dB.
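The extremum method with a parabolic model can be illustrated by fitting a parabola to a metric evaluated at a few trial Doppler values and taking the parabola's vertex as the refined estimate; the trial grid and the metric values below are hypothetical.

```python
import numpy as np

def parabolic_extremum(x, y):
    """Fit y = a*x^2 + b*x + c to the trial points and return the abscissa
    of the extremum, -b / (2a)."""
    a, b, c = np.polyfit(np.asarray(x, float), np.asarray(y, float), 2)
    return -b / (2.0 * a)

# Hypothetical use: evaluate an estimation metric at three trial Doppler values
# and refine the estimate to the parabola's extremum.
trial_fd = [50.0, 100.0, 150.0]                  # Hz (illustrative)
metric = [0.82, 0.95, 0.88]                      # e.g., a correlation-based metric
print("refined Doppler estimate:", parabolic_extremum(trial_fd, metric), "Hz")
```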
-
Won-Jae SHIN, Young-Hwan YOU, Moo-Young KIM
Article type: LETTER
Subject area: Mobile Information Network and Personal Communications
2012 Volume E95.A Issue 8 Pages
1444-1447
Published: August 01, 2012
Released on J-STAGE: August 01, 2012
JOURNAL
RESTRICTED ACCESS
In this letter, an improved residual symbol timing offset (STO) estimation scheme is suggested for an orthogonal frequency division multiplexing (OFDM) based digital radio mondiale plus (DRM+) system with cyclic delay diversity (CDD). The robust residual STO estimator is derived by properly selecting the amount of cyclic delay and a pilot pattern in the presence of frequency selectivity. Via computer simulation, it is shown that the proposed STO estimation scheme is robust to the frequency selectivity of the channel, with performance better than that of the conventional scheme.