IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Volume E99.A, Issue 12
Special Section on Information Theory and Its Applications
  • Motohiko ISAKA
    2016 Volume E99.A Issue 12 Pages 2106
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS
    Download PDF (333K)
  • Shunsuke IHARA
    Article type: PAPER
    Subject area: Shannon Theory
    2016 Volume E99.A Issue 12 Pages 2107-2115
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    We investigate the coding scheme and error probability in information transmission over continuous-time additive Gaussian noise channels with feedback. As is known, the error probability can be substantially reduced by using feedback: under the average power constraint, the error probability may decrease more rapidly than the exponential of any order. Recently, Gallager and Nakiboğlu proposed, for discrete-time additive white Gaussian noise channels, a feedback coding scheme such that the resulting error probability Pe(N) at time N decreases with an exponential order αN that increases linearly with N. So far, the multiple-exponential decay of the error probability has been studied mostly for white Gaussian channels. In this paper, we treat continuous-time Gaussian channels, where the Gaussian noise processes are not necessarily white or stationary. The aim is to prove a stronger result on the multiple-exponential decay of the error probability: for any positive constant α, there exists a feedback coding scheme such that the resulting error probability Pe(T) at time T decreases more rapidly than the exponential of order αT as T→∞.

    Download PDF (430K)
  • Tetsunao MATSUTA, Tomohiko UYEMATSU
    Article type: PAPER
    Subject area: Source Coding and Data Compression
    2016 Volume E99.A Issue 12 Pages 2116-2129
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we deal with fixed-length lossy compression, where a fixed-length sequence emitted from the information source is encoded into a codeword, and the source sequence is reproduced from the codeword with a certain distortion. We give lower and upper bounds on the minimum number of codewords such that the probability of exceeding a given distortion level is less than a given probability. These bounds are characterized using the α-mutual information of order infinity. Further, for i.i.d. binary sources, we provide numerical examples of tight upper bounds which are computable in time polynomial in the blocklength.

    Download PDF (459K)
  • Ken-ichi IWATA, Mitsuharu ARIMURA
    Article type: PAPER
    Subject area: Source Coding and Data Compression
    2016 Volume E99.A Issue 12 Pages 2130-2135
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    A generalization of compression via substring enumeration (CSE) to k-th order Markov sources with a finite alphabet is proposed, and an upper bound on the codeword length of the proposed method is presented. We analyze the worst-case maximum redundancy of CSE for k-th order Markov sources with a finite alphabet. The compression ratio of the proposed method asymptotically converges to the optimal one for k-th order Markov sources with a finite alphabet as the length n of the source string tends to infinity.

    Download PDF (388K)
  • Hiroyuki ENDO, Te Sun HAN, Masahide SASAKI
    Article type: PAPER
    Subject area: Information Theoretic Security
    2016 Volume E99.A Issue 12 Pages 2136-2146
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    The wiretap channel is now a fundamental model for information-theoretic security. After it was introduced by Wyner, Csiszár and Körner generalized the model by adding an auxiliary random variable. Recently, Han, Endo and Sasaki derived exponents to evaluate the performance of wiretap channels with cost constraints on the input variable plus such an auxiliary random variable. Although the constraints on the two variables were expected to provide larger-valued (i.e., tighter) exponents, some non-trivial theoretical problems had been left open. In this paper, we investigate these open problems, especially concerning the concavity property of the exponents. Furthermore, we compare the exponents derived by Han et al. with the counterparts derived by Gallager to reveal that the former approach has a significantly wider applicability than the latter.

    Download PDF (909K)
  • Tadashi WADAYAMA, Taisuke IZUMI, Kazushi MIMURA
    Article type: PAPER
    Subject area: Coding Theory and Techniques
    2016 Volume E99.A Issue 12 Pages 2147-2154
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    The main contribution of this paper is a non-trivial expression, called the dual expression, of the posterior values for non-adaptive group testing problems. The dual expression is useful for exact bitwise MAP estimation. We assume the simplest non-adaptive group testing scenario, with N objects of binary status and M tests. If a group contains one or more positive objects, the test result for the group is one; otherwise, the test result is zero. Our inference problem is to evaluate the posterior probabilities of the objects from the observation of the M test results and the prior probabilities of the objects. The derivation of the dual expression of the posterior values can be naturally described based on a holographic transformation of the normal factor graph (NFG) representing the inference problem. In order to handle the OR constraints in the NFG, we introduce a novel holographic transformation that converts an OR function to a function similar to an EQUAL function.
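
    For orientation, the baseline that the dual expression is meant to improve upon can be written down directly: the bitwise posteriors follow from summing the prior over all status vectors consistent with the noiseless OR tests. The sketch below is only this brute-force O(2^N) enumeration, with illustrative names not taken from the paper:

        import itertools

        def exact_posteriors(N, groups, results, prior):
            """Brute-force bitwise posterior marginals for non-adaptive group testing.

            groups[m]  : set of object indices pooled in test m
            results[m] : 1 if the OR of the pooled statuses is 1, else 0
            prior[i]   : prior probability that object i is positive
            """
            joint = [0.0] * N          # unnormalized P(x_i = 1, observations)
            evidence = 0.0
            for x in itertools.product([0, 1], repeat=N):
                # keep only status vectors consistent with all noiseless OR tests
                if any(int(any(x[i] for i in g)) != r for g, r in zip(groups, results)):
                    continue
                p = 1.0
                for i in range(N):
                    p *= prior[i] if x[i] else 1.0 - prior[i]
                evidence += p
                for i in range(N):
                    if x[i]:
                        joint[i] += p
            return [j / evidence for j in joint]

        # 4 objects, 2 pooled tests: {0,1} tests negative, {1,2,3} tests positive
        print(exact_posteriors(4, [{0, 1}, {1, 2, 3}], [0, 1], [0.1] * 4))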

    Download PDF (798K)
  • Hiroki MORI, Tadashi WADAYAMA
    Article type: PAPER
    Subject area: Coding Theory and Techniques
    2016 Volume E99.A Issue 12 Pages 2155-2161
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we present an analysis of fault erasure BP decoders based on density evolution. In a fault BP decoder, the messages exchanged in the BP process are stochastically corrupted due to unreliable logic gates and flip-flops; i.e., we assume circuit components with transient faults. We derive a set of density evolution equations for the fault erasure BP processes. Our density evolution analysis reveals the asymptotic behavior of the estimation error probability of fault erasure BP decoders. In contrast to the fault-free case, the error probability of a fault erasure BP decoder converges to a positive value, and there exists a discontinuity in the error curve corresponding to the fault BP threshold. It is also shown that a message encoding technique provides higher fault BP thresholds than those of the original decoders, at the cost of increased circuit size.

    Download PDF (818K)
  • Sen MORIYA, Kana KIKUCHI, Hiroshi SASANO
    Article type: PAPER
    Subject area: Coding Theory and Techniques
    2016 Volume E99.A Issue 12 Pages 2162-2169
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this study, we consider techniques to search for high-rate punctured convolutional code (PCC) encoders using dual code encoders. A low-rate R=1/n convolutional code (CC) has a dual code that is identical to a PCC with rate R=(n-1)/n. This implies that a rate R=1/n CC encoder can assist in searches for high-rate PCC encoders. Conversely, we can derive a rate R=1/n CC encoder from a good PCC encoder with rate R=(n-1)/n using dual code encoders. This paper proposes a method to obtain improved high-rate PCC encoders using the results of an exhaustive search for rate R=1/3 original encoders and their dual code encoders. We also show some PCC encoders obtained by searches that utilized our method.

    Download PDF (808K)
  • Shunsuke HORII, Toshiyasu MATSUSHIMA, Shigeichi HIRASAWA
    Article type: PAPER
    Subject area: Coding Theory and Techniques
    2016 Volume E99.A Issue 12 Pages 2170-2178
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this study, we develop a new algorithm for decoding binary linear codes for symbol-pair read channels. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels with higher write resolutions than read resolutions. The proposed decoding algorithm is based on linear programming (LP). For LDPC codes, the proposed algorithm runs in time polynomial in the codeword length. It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance d_fp of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to ⌈d_fp/2⌉-1 errors.

    Download PDF (416K)
  • Makoto TAKITA, Masanori HIROTOMO, Masakatu MORII
    Article type: PAPER
    Subject area: Coding Theory and Techniques
    2016 Volume E99.A Issue 12 Pages 2179-2191
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we discuss algebraic decoding of BCH codes over symbol-pair read channels, which output overlapping pairs of symbols in storage applications. The pair distance and pair error are the relevant notions for these channels. We define a polynomial that represents the positions of the pair errors as the error-locator polynomial, and a polynomial that represents the positions of the pairs of a received pair vector in conflict as the conflict-locator polynomial. We propose algebraic methods for correcting two-pair and three-pair errors in BCH codes. First, we show the relation between the error-locator polynomial and the conflict-locator polynomial. Second, we show the relation among these polynomials and the syndromes. Finally, we show how to correct the pair errors by algebraically solving equations that include these relational expressions.

    Download PDF (393K)
  • Keigo TAKEUCHI
    Article type: PAPER
    Subject area: Communication Theory and Systems
    2016 Volume E99.A Issue 12 Pages 2192-2201
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Faster-than-Nyquist (FTN) signaling is investigated for quasi-static flat fading massive multiple-input multiple-output (MIMO) systems. In FTN signaling, pulse trains are sent at a symbol rate higher than the Nyquist rate to increase the transmission rate. As a result, inter-symbol interference occurs inevitably for flat fading channels. This paper assesses the information-theoretically achievable rate of MIMO FTN signaling based on the optimum joint equalization and multiuser detection. The replica method developed in statistical physics is used to evaluate the achievable rate in the large-system limit, where the dimensions of input and output signals tend to infinity at the same rate. An analytical expression of the achievable rate is derived for general modulation schemes in the large-system limit. It is shown that FTN signaling does not improve the channel capacity of massive MIMO systems, and that FTN signaling with quadrature phase-shift keying achieves the channel capacity for all signal-to-noise ratios as the symbol period tends to zero.

    Download PDF (475K)
  • Ryota SEKIYA, Brian M. KURKOSKI
    Article type: PAPER
    Subject area: Communication Theory and Systems
    2016 Volume E99.A Issue 12 Pages 2202-2210
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Write-once memory (WOM) codes allow reuse of a write-once medium. This paper focuses on applying WOM codes to the binary symmetric asymmetric multiple access channel (BS-AMAC). At one specific rate pair, WOM codes can achieve the BS-AMAC maximum sum-rate. Further, any achievable rate pair for a two-write WOM code is also an achievable rate pair for the BS-AMAC. Compared to the uniform input distribution of linear codes, the non-uniform WOM input distribution is helpful for a BS-AMAC. In addition, WOM codes enable “symbol-wise estimation”, resulting in a decomposition into two distinct channels. This scheme does not achieve the BS-AMAC maximum sum-rate if the channel has errors; however, it leads to reduced-complexity decoding by enabling independent decoding of the two codewords. Achievable rates for this decomposed system are also given. The AMAC has practical application to the relay channel, and we briefly discuss the relay channel with block Markov encoding using WOM codes. This scheme may be effective for cooperative wireless communications despite the fact that WOM codes are designed for data storage.
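
    As background, the classic two-write WOM code of Rivest and Shamir stores 2 bits twice in 3 write-once cells, where cells can only change 0 to 1. The paper builds on general two-write WOM codes; the sketch below is only this textbook instance, not the authors' construction:

        # First- and second-generation codebooks: cells only ever flip 0 -> 1.
        GEN1 = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
                (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}
        GEN2 = {(0, 0): (1, 1, 1), (0, 1): (1, 1, 0),
                (1, 0): (1, 0, 1), (1, 1): (0, 1, 1)}

        def decode(cells):
            table = GEN1 if sum(cells) <= 1 else GEN2
            return next(d for d, c in table.items() if c == cells)

        def write(cells, data):
            """Write 2 data bits into 3 write-once cells (two writes total)."""
            target = GEN1[data] if sum(cells) == 0 else GEN2[data]
            if decode(cells) == data:      # same data: leave the cells untouched
                return cells
            assert all(c <= t for c, t in zip(cells, target)), "0->1 flips only"
            return target

        state = (0, 0, 0)
        state = write(state, (1, 0))       # first write
        assert decode(state) == (1, 0)
        state = write(state, (0, 1))       # second write only sets additional 1s
        assert decode(state) == (0, 1)
        print(state)                       # (1, 1, 0)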

    Download PDF (856K)
  • Yuki TAKEDA, Yuichi KAJI, Minoru ITO
    Article type: PAPER
    Subject area: Networks and Network Coding
    2016 Volume E99.A Issue 12 Pages 2211-2217
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    An information flow problem is a graph-theoretic formalization of the transportation of information over a complicated network. It is known that linear network codes play an essential role in a certain type of information flow problem, but it is not clearly understood how much linear network codes contribute to other types of information flow problems. One basic problem concerning this aspect is the linear solvability of information flow problems, that is, to decide whether there is a linear network code that is a solution to a given information flow problem. Lehman et al. characterized the linear solvability of information flow problems in terms of constraints on the sets of source and sink nodes. As an extension of Lehman's investigation, this study introduces a hierarchy constraint on messages, and discusses the computational complexity of the linear solvability of information flow problems with hierarchy constraints. Nine classes of problems are newly defined and classified into one of the three categories that were discovered by Lehman et al.

    Download PDF (665K)
  • Akiyuki YANO, Tadashi WADAYAMA
    Article type: PAPER
    Subject area: Networks and Network Coding
    2016 Volume E99.A Issue 12 Pages 2218-2225
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In the field of computer science, the network reliability problem of evaluating the network failure probability has been extensively investigated. For a given undirected graph G, the network failure probability is the probability that edge failures (i.e., edge erasures) make G unconnected. Edge failures are assumed to occur independently with the same probability. The main contributions of the present paper are upper and lower bounds on the expected network failure probability. We herein assume a simple random graph ensemble that is closely related to the Erdős-Rényi random graph ensemble. These upper and lower bounds exhibit the typical behavior of the network failure probability. The proof is based on the fact that the cut-set space of G is a linear space over F_2 spanned by the incidence matrix of G. The present study shows a close relationship between the ensemble analysis of the expected network failure probability and the ensemble analysis of the error detection probability of LDGM codes with column weight 2.
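
    The quantity being bounded is easy to estimate empirically. The sketch below is a plain Monte Carlo baseline, not the paper's analytic ensemble bounds: sample edge erasures and test connectivity with a union-find structure.

        import random

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]    # path halving
                x = parent[x]
            return x

        def failure_prob(n, edges, eps, trials=100_000):
            """Monte Carlo estimate of P(G becomes disconnected) when each
            edge fails independently with probability eps."""
            fail = 0
            for _ in range(trials):
                parent = list(range(n))
                comps = n
                for u, v in edges:
                    if random.random() < eps:    # edge erased
                        continue
                    ru, rv = find(parent, u), find(parent, v)
                    if ru != rv:
                        parent[ru] = rv
                        comps -= 1
                fail += (comps > 1)
            return fail / trials

        # 4-cycle: disconnection requires at least 2 edge failures
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
        print(failure_prob(4, edges, eps=0.1))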

    Download PDF (390K)
  • Yasuyuki NOGAMI, Satoshi UEHARA, Kazuyoshi TSUCHIYA, Nasima BEGUM, Hir ...
    Article type: PAPER
    Subject area: Sequences
    2016 Volume E99.A Issue 12 Pages 2226-2237
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper proposes a new multi-value sequence generated by utilizing a primitive element, the trace function, and the power residue symbol over an odd-characteristic finite field. In detail, let p be an odd prime number as the characteristic, and let k be a prime factor of p-1. Our proposal generates the k-value sequence T={t_i | t_i=f_k(Tr(ω^i)+A)}, where ω is a primitive element in the extension field F_{p^m}, Tr(⋅) is the trace function that maps F_{p^m} → F_p, A is a non-zero scalar in the prime field F_p, and f_k(⋅) is a certain mapping function based on the k-th power residue symbol. Thus, the proposed sequence has four parameters: p, m, k, and A. This paper then theoretically shows its period, autocorrelation, and cross-correlation. In addition, its linear complexity is discussed based on experimental results. These features of the proposed sequence are observed with some examples.
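
    To make the construction concrete, here is a toy instantiation with p=3, m=2, k=2, A=1. The map f_k below is a simple discrete-log-mod-k stand-in for the paper's power-residue-based function, so this sketch only illustrates the generation rule, not the exact proposed mapping:

        p, m, k, A = 3, 2, 2, 1             # k divides p - 1; A is nonzero in F_p

        # F_{p^m} = F_9 realized as pairs (a, b) = a + b*i with i^2 = -1,
        # since x^2 + 1 is irreducible over F_3.
        def mul(x, y):
            a, b = x
            c, d = y
            return ((a * c - b * d) % p, (a * d + b * c) % p)

        def power(x, e):
            r = (1, 0)
            for _ in range(e):
                r = mul(r, x)
            return r

        def trace(x):                       # Tr(x) = x + x^p maps F_9 onto F_3
            t = power(x, p)                 # Frobenius x -> x^p
            assert (x[1] + t[1]) % p == 0   # imaginary parts cancel
            return (x[0] + t[0]) % p

        def f_k(c):
            """Stand-in for the k-th power residue map: the discrete log of c
            to the generator 2 of F_3^*, taken mod k; f_k(0) = 0."""
            if c == 0:
                return 0
            e, g = 0, 1
            while g != c:
                g = (g * 2) % p
                e += 1
            return e % k

        omega = (1, 1)                      # 1 + i has order 8, so it is primitive
        T = [f_k((trace(power(omega, i)) + A) % p) for i in range(p**m - 1)]
        print(T)                            # one period of the k-value sequence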

    Download PDF (1819K)
  • Hiroshi FUJISAKI
    Article type: PAPER
    Subject area: Fundamentals of Information Theory
    2016 Volume E99.A Issue 12 Pages 2238-2247
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    We define the topological entropy of discretized Markov transformations. Previously, we obtained the topological entropy of the discretized dyadic transformation. In this research, we obtain the topological entropy of the discretized golden mean transformation. We also generalize this result and give the topological entropy of the discretized Markov β-transformations with the alphabet Σ={0,1,…,k-1} and the set F={(k-1)c,…,(k-1)(k-1)} (1≤c≤k-1) of (k-c) forbidden blocks, whose underlying transformations exhibit a wide class of greedy β-expansions of real numbers.
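
    For the non-discretized counterpart, the topological entropy of a subshift of finite type is the logarithm of the Perron eigenvalue of its transition matrix, which gives a quick sanity check for the golden mean case. This computes the classical entropy, not the discretized one studied in the paper:

        import numpy as np

        # Golden mean shift: binary sequences with no "11" block.
        A = np.array([[1, 1],
                      [1, 0]])
        lam = max(abs(np.linalg.eigvals(A)))
        print(lam, np.log(lam))     # (1+sqrt(5))/2 ~ 1.618, entropy ~ 0.4812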

    Download PDF (464K)
  • Hidetoshi SAITO
    Article type: PAPER
    Subject area: Signal Processing for Storage
    2016 Volume E99.A Issue 12 Pages 2248-2255
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper proposes an effective signal processing scheme using a modulation code with two-dimensional (2D) run-length-limited (RLL) constraints for bit-patterned media magnetic recording (BPMR). This 2D signal processing scheme serves as a two-dimensional magnetic recording (TDMR) scheme for shingled magnetic recording on bit-patterned media (BPM). TDMR has been pointed out as a key technology for increasing areal density toward 10Tb/in2. From the viewpoint of 2D signal processing for TDMR, a multi-track joint decoding scheme is desirable for increasing the effective transfer rate, because such a scheme obtains readback signals from several adjacent parallel tracks and detects the recorded data written in these tracks simultaneously. The proposed signal processing scheme for BPMR obtains mixed readback signal sequences from the parallel tracks using a single reading head, and these readback signal sequences are equalized to the frequency response of a desired 2D generalized partial response system. In the decoding process, the effective transfer rate is increased by using a single maximum likelihood (ML) sequence detector, because the recorded data on the parallel tracks are decoded in each time slot. Furthermore, a new joint pattern-dependent noise-predictive (PDNP) sequence detection scheme is investigated for multi-track recording with media noise. The joint PDNP detection is embedded in the ML detector and can be used to eliminate media noise. Computer simulation shows that the joint PDNP detection scheme is able to compensate for media noise in the equalizer output, which is correlated and data-dependent.

    Download PDF (832K)
  • Kotoku OMURA, Shoichiro YAMASAKI, Tomoko K. MATSUSHIMA, Hirokazu TANAK ...
    Article type: PAPER
    Subject area: Video Coding
    2016 Volume E99.A Issue 12 Pages 2256-2265
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Many studies have applied the three-dimensional discrete wavelet transform (3D DWT) to video coding. It is known that corruption of the lowest-frequency sub-band (LL) coefficients of the 3D DWT severely affects the visual quality of video. Recently, we proposed an error-resilient 3D DWT video coding method (the conventional method) that employs dispersive grouping and error concealment (EC). The EC scheme of the conventional method replaces the lost LL coefficients. In this paper, we propose a new 3D DWT video transmission method to enhance error resilience. The proposed method adopts an error correction scheme that protects the LL coefficients using invertible codes; we use half-rate Reed-Solomon (RS) codes as the invertible codes. Additionally, to benefit from interleaving, we adopt a new configuration scheme at the RS encoding stage. An evaluation by computer simulation compares the performance of the proposed method with that of other EC methods and shows the advantage of the proposed method.

    Download PDF (4432K)
  • Weiwei PAN, Qinhua HU
    Article type: PAPER
    Subject area: Machine Learning
    2016 Volume E99.A Issue 12 Pages 2266-2274
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Ordinal classification is a class of special tasks in machine learning and pattern recognition in which there is an ordinal structure among the decision values. The monotonicity constraint between features and decision is usually taken as the fundamental assumption. However, in real-world applications, this assumption may not hold: only some candidate features, instead of all, are monotonic with the decision. Hence, existing feature selection algorithms designed for nominal classification or monotonic classification are not suitable for ordinal classification. In this paper, we propose a feature selection algorithm for ordinal classification that considers the non-monotonic and monotonic features separately. We first introduce an assumption of hybrid monotonic classification consistency and define a feature evaluation function to calculate the relevance between the features and the decision for ordinal classification. Then, we combine the proposed measure and a genetic algorithm (GA) to search for the optimal feature subset. A collection of numerical experiments shows that the proposed approach can effectively reduce the feature size and improve the classification performance.

    Download PDF (978K)
  • Shota SAITO, Toshiyasu MATSUSHIMA
    Article type: LETTER
    Subject area: Shannon Theory
    2016 Volume E99.A Issue 12 Pages 2275-2280
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This letter deals with the Slepian-Wolf coding problem for general sources. The second-order achievable rate region is derived using quantities related to the smooth max-entropy and the conditional smooth max-entropy. Moreover, we show the relationship between the functions that characterize the second-order achievable rate region in our study and those in a previous study.

    Download PDF (108K)
  • Mitsuharu ARIMURA
    Article type: LETTER
    Subject area: Source Coding and Data Compression
    2016 Volume E99.A Issue 12 Pages 2281-2285
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    The average coding rate of a multi-shot Tunstall code, a variation of variable-to-fixed-length (VF) lossless source codes, for stationary memoryless sources is investigated. A multi-shot VF code parses a given source sequence into variable-length blocks and encodes them into fixed-length codewords. If the parsing count is fixed, the overall multi-shot VF code can be treated as a one-shot VF code. For this setting of the Tunstall code, the compression performance is evaluated using two criteria. The first is the average coding rate, defined as the codeword length divided by the average block length. The second is the expectation of the pointwise coding rate. It is proved that both of these average coding rates converge to the entropy of a stationary memoryless source under the assumption that the geometric mean of the leaf counts of the multi-shot Tunstall parsing trees goes to infinity.
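
    To make the first criterion concrete, the sketch below builds a one-shot Tunstall tree for a binary memoryless source by repeatedly expanding the most probable leaf, then computes the average coding rate (codeword length divided by average block length); with more codeword bits the rate approaches the source entropy. A minimal sketch, not the paper's multi-shot analysis:

        import heapq
        from math import log2

        def tunstall_rate(probs, codeword_bits):
            """Build a one-shot Tunstall parsing tree with at most 2**codeword_bits
            leaves for a memoryless source; return the average coding rate."""
            # each leaf: (-probability, unique id, block length)
            heap = [(-q, i, 1) for i, q in enumerate(probs)]
            heapq.heapify(heap)
            n_leaves, uid = len(probs), len(probs)
            while n_leaves + len(probs) - 1 <= 2 ** codeword_bits:
                negq, _, depth = heapq.heappop(heap)   # expand most probable leaf
                for q in probs:
                    heapq.heappush(heap, (negq * q, uid, depth + 1))
                    uid += 1
                n_leaves += len(probs) - 1
            avg_len = sum(-negq * d for negq, _, d in heap)
            return codeword_bits / avg_len

        p = 0.9                                        # binary source, P(0) = 0.9
        entropy = -(p * log2(p) + (1 - p) * log2(1 - p))
        print(tunstall_rate([p, 1 - p], codeword_bits=8), entropy)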

    Download PDF (109K)
  • Shota SAITO, Toshiyasu MATSUSHIMA
    Article type: LETTER
    Subject area: Source Coding and Data Compression
    2016 Volume E99.A Issue 12 Pages 2286-2290
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    We treat lossless fixed-to-variable length source coding for general sources in the finite blocklength setting. We evaluate the threshold of the overflow probability for prefix and non-prefix codes in terms of the smooth max-entropy, and clarify the difference between the thresholds of prefix and non-prefix codes at finite blocklength. Further, we discuss our results in the asymptotic blocklength setting.

    Download PDF (134K)
  • Hirosuke YAMAMOTO, Yuka KUWAORI
    Article type: LETTER
    Subject area: Source Coding and Data Compression
    2016 Volume E99.A Issue 12 Pages 2291-2295
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we propose two schemes that enable any VF code to realize direct- or fast-access decoding for any long source sequence. Direct-access decoding means that a source symbol at any position can be decoded within constant time, independent of the length N of the source sequence, without decoding the whole codeword sequence. We also evaluate the memory size necessary to realize direct-access decoding or fast-access decoding with decoding delay O(log log N), O(log N), and so on, in the proposed schemes.

    Download PDF (121K)
  • Aiwei SUN, Tao LIANG, Hui TIAN
    Article type: LETTER
    Subject area: Information Theoretic Security
    2016 Volume E99.A Issue 12 Pages 2296-2300
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This letter investigates physical layer security for a buffer-aided underlay cooperative cognitive radio network in the presence of an eavesdropper, wherein the relay is equipped with a buffer so that it can store packets received from the secondary source. To improve the security performance of cognitive radio networks, we propose a novel cognitive secure link selection scheme that incorporates the instantaneous strength of the wireless links as well as the status of the relay's buffer. The proposed scheme adapts the link selection decision to the strongest available link by dynamically switching between relay reception and transmission. A closed-form expression of the secrecy outage probability (SOP) for the cognitive radio network is obtained based on a Markov chain. Numerical results demonstrate that the proposed scheme significantly enhances the security performance compared to the conventional relay selection scheme.

    Download PDF (302K)
Special Section on VLSI Design and CAD Algorithms
  • Makoto IKEDA
    2016 Volume E99.A Issue 12 Pages 2301
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS
    Download PDF (239K)
  • Yusuke MATSUNAGA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2302-2309
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper presents a test pattern compaction algorithm applicable to large-scale circuits. The proposed method formalizes the test pattern compaction problem as one of finding a minimum set of compatible fault groups. An efficient algorithm for checking the compatibility of a fault group is also proposed. The experimental results show that the proposed algorithm achieves similar or better results than a couple of existing methods, especially for middle-sized circuits.

    Download PDF (328K)
  • Fuqiang LI, Xiaoqing WEN, Kohei MIYASE, Stefan HOLST, Seiji KAJIHARA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2310-2319
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Excessive IR-drop in capture mode during at-speed scan testing may cause timing errors for defect-free circuits, resulting in undue test yield loss. Previous solutions for achieving capture-power-safety adjust the switching activity around logic paths, especially long sensitized paths, in order to reduce the impact of IR-drop. However, those solutions ignore the impact of IR-drop on clock paths, namely test clock stretch; as a result, they cannot accurately achieve capture-power-safety. This paper proposes a novel scheme, called LP-CP-aware ATPG, for generating high-quality capture-power-safe at-speed scan test vectors by taking into consideration the switching activity around both logic and clock paths. This scheme features (1) LP-CP-aware path classification for characterizing long sensitized paths by considering the IR-drop impact on both logic and clock paths; (2) LP-CP-aware X-restoration for obtaining more effective X-bits by backtracing from both logic and clock paths; (3) LP-CP-aware X-filling for using different strategies according to the positions of X-bits in test cubes. Experimental results on large benchmark circuits demonstrate the advantages of LP-CP-aware ATPG, which can more accurately achieve capture-power-safety without significant test vector count inflation and test quality loss.

    Download PDF (2012K)
  • Cheng-Yu HAN, Yu-Ching LI, Hao-Tien KAN, James Chien-Mo LI
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2320-2327
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper proposes a power-supply-noise-aware timing analysis and test pattern regeneration framework suitable for testing 3D ICs. The proposed framework analyzes timing with reasonable accuracy at much higher speed than existing tools. The technique is very scalable because it is based on analytical functions instead of solving nonlinear equations. The experimental results show that, for small circuits, the error is less than 2% compared with SPICE. For large circuits, we achieved a 272-times speedup over a commercial tool. For a large benchmark circuit (638K gates), we identified 88 risky patterns out of 31K test patterns. We propose a test pattern regeneration flow to replace those risky patterns with very little (or even no) penalty in fault coverage. Our test sets are shorter than those of a commercial power-aware ATPG, while the fault coverage is almost the same as power-unaware ATPG.

    Download PDF (1171K)
  • Takashi KISHIMOTO, Wataru TAKAHASHI, Kazutoshi WAKABAYASHI, Hiroyuki O ...
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2328-2334
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we propose a novel placement algorithm for mixed-grained reconfigurable architectures (MGRAs). An MGRA consists of coarse-grained and fine-grained clusters, in order to implement combined digital systems of high-speed data paths with multi-bit operands and random logic circuits for state machines and bit-wise operations. To accelerate simulated-annealing-based FPGA placement, a range limiter has been proposed that controls the distance between two blocks to be interchanged; however, it is not applicable to MGRAs due to their heterogeneous structure. The proposed range limiter, which uses a connection bounding box, effectively keeps the range limiter size so as to encourage moves across fine-grained blocks in non-adjacent clusters. Experimental results show that the proposed method achieved a 47.8% cost reduction in the best case compared with conventional methods.

    Download PDF (1704K)
  • Masaru OYA, Noritaka YAMASHITA, Toshihiko OKAMURA, Yukiyasu TSUNOO, Ma ...
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2335-2347
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Since digital ICs are now often designed and fabricated by third parties at various phases, we must eliminate the risk that malicious attackers implement Hardware Trojans (HTs) on them. In particular, attackers can easily insert HTs during the design phase. This paper proposes the HT rank, a new quantitative analysis criterion against HTs at the gate-level netlist. We have carefully analyzed all the gate-level netlists in the Trust-HUB benchmark suite and found several Trojan net features in them. We then design three types of Trojan points: feature points, count points, and location points. By assigning these points to every net and summing them up, we obtain the maximum Trojan point in a gate-level netlist, which gives our HT rank. The HT rank is calculated just from net features; no logic simulation or random test is performed. When all the gate-level netlists in the Trust-HUB, ISCAS85, ISCAS89 and ITC99 benchmark suites, as well as several OpenCores designs and HT-free and HT-inserted AES netlists, are ranked, our HT rank completely distinguishes HT-inserted netlists (whose HT rank is ten or more) from HT-free ones (whose HT rank is nine or less). The HT rank is the world's first quantitative criterion that distinguishes HT-inserted netlists from HT-free ones across all the gate-level netlists in Trust-HUB, ISCAS85, ISCAS89, and ITC99.

    Download PDF (1932K)
  • Ryosuke KITAYAMA, Takashi TAKENAKA, Masao YANAGISAWA, Nozomu TOGAWA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2348-2362
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Power analysis for IoT devices is strongly required to protect them from attacks by malicious attackers. It is also very important to reduce the power consumption of IoT devices themselves. In this paper, we propose a highly adaptable and small-sized in-field power analyzer for low-power IoT devices. The proposed power analyzer has the following advantages: (A) it realizes signal-averaging noise reduction with synchronization signal lines and thus can reduce noise over a wide frequency range; (B) it partitions a long-term power analysis process into several analysis segments and measures the voltages and currents of each segment using a small amount of data memory, and by combining these analysis segments we can obtain long-term analysis results; (C) it has two amplifiers that amplify current signals adaptively depending on their magnitude, so the maximum readable current can be increased while keeping the minimum readable current small enough. Since none of (A), (B) and (C) requires complicated mechanisms or circuits, the proposed power analyzer is implemented on just a 2.5cm×3.3cm board, the smallest size among existing power analyzers for IoT devices. We have measured the power and energy consumption of an AES encryption process on an IoT device and demonstrated that the proposed power analyzer has measurement errors of at most 1.17% compared to a high-precision oscilloscope.

    Download PDF (2722K)
  • Ahmed AWAD, Atsushi TAKAHASHI, Chikaaki KODAMA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2363-2374
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    As designs are pushed into the sub-16nm regime, printing advanced technology nodes in optical micro-lithography will rely heavily on aggressive Optical Proximity Correction (OPC) in the foreseeable future. Even when acceptable pattern fidelity is attained under process variations, mask design time and mask manufacturability remain crucial parameters whose handling in the OPC recipe is highly demanded by industry. In this paper, we propose an intensity-based OPC algorithm to find a highly manufacturable mask solution for a target pattern with acceptable pattern fidelity under process variations within a short computation time. This is achieved by utilizing a fast intensity estimation model in which intensity is numerically correlated with local mask density and kernel type, so that intensity can be estimated quickly and with acceptable accuracy. The estimated intensity is used to guide feature shifting, alignment, and concatenation, following a linearly interpolated variational intensity error model, to achieve high mask manufacturability while preserving acceptable pattern fidelity under process variations. Experimental results show the effectiveness of our proposed algorithm on public benchmarks.

    Download PDF (1772K)
  • Heming SUN, Dajiang ZHOU, Shuping ZHANG, Shinji KIMURA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2375-2387
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we present a low-power system for the de-quantization and inverse transform of HEVC. First, we present a low-delay circuit to process the coded results of the syntax elements, and reduce the number of multipliers from 16 to 4 for the de-quantization process of each 4x4 block. Second, we give two efficient data mapping schemes for the memory between de-quantization and inverse transform and for the transpose memory. Third, zero information is utilized throughout the whole system: for the two memory parts, the write and read operations for zero blocks/rows/coefficients can all be skipped to save power. The results show that up to 86% of the memory power consumption can be saved under the “Random-access” configuration and common QPs. For the logic part, the proposed de-quantization architecture reduces area consumption by 77%. Overall, our system supports real-time coding of 8K×4K 120fps video sequences, and the normalized area consumption is reduced by 68% compared with the latest work.

    Download PDF (3173K)
  • Wei-Kai CHENG, Jui-Hung HUNG, Yi-Hsuan CHIU
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2388-2397
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    With the increasing complexity of chip design, reducing both power consumption and clock skew has become a crucial research topic in clock network synthesis. Among the various clock network synthesis approaches, a clock tree consumes less power than a clock mesh structure; in contrast, a clock mesh has a higher tolerance to process variation and hence satisfies the clock skew constraint more easily. An effective way to reduce the power consumption of a clock mesh network is to minimize the wire capacitance of the stub wires. In addition, integrating clock gating and register clustering techniques into the clock mesh network can further reduce dynamic power consumption. In this paper, under both the enable timing constraint and the clock skew constraint, we propose a methodology to reduce the switching capacitance by non-uniform clock mesh synthesis, clock gate insertion, and register clustering. Compared with clock mesh synthesis and clock gating applied individually, experimental results show that our methodology improves both clock skew and switching capacitance efficiently.

    Download PDF (1587K)
  • Tatsuro KOJO, Masashi TAWADA, Masao YANAGISAWA, Nozomu TOGAWA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2398-2411
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Non-volatile memories are attracting attention as a promising alternative in memory design, but the data stored in them may still be destroyed by crosstalk and radiation. The data can be restored using error-correcting codes, which require extra bits to correct bit errors. Furthermore, non-volatile memories consume ten to a hundred times more energy than normal memories in bit-writing, so when they are configured with error-correcting codes it is essential to reduce the number of written bits. In this paper, we propose a method to generate a bit-write-reducing code with error-correcting ability. We first pick an error-correcting code that can correct t-bit errors. We cluster its codewords and generate a cluster graph satisfying the S-bit flip conditions. We assign a data value to each cluster; in other words, we generate a one-to-many mapping from each data value to the codewords in its cluster. We prove that, if the cluster graph is a complete graph, every data value in a memory cell can be rewritten into any other data value by flipping at most S bits while keeping the t-bit error-correcting ability. We further propose an efficient method to cluster error-correcting codewords. Experimental results show that the bit-write-reducing and error-correcting codes generated by our method efficiently reduce energy consumption. This paper proposes the world's first theoretically near-optimal bit-write-reducing code with error-correcting ability based on efficient coding theories.
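
    A tiny instance of the cluster idea (not the authors' construction): the [7,4] Hamming code contains the all-ones word, so each codeword pairs with its complement, giving 8 clusters of 2 codewords that carry 3 data bits. Writing picks whichever codeword of the target cluster needs fewer flips from the current cell state, while single-bit error correction (t = 1) is retained:

        # [7,4] Hamming generator rows (4 data bits followed by 3 parity bits)
        G = [0b1000110, 0b0100101, 0b0010011, 0b0001111]

        def encode4(d):                    # 4-bit data -> 7-bit codeword
            c = 0
            for i in range(4):
                if (d >> (3 - i)) & 1:
                    c ^= G[i]
            return c

        CODEWORDS = [encode4(d) for d in range(16)]
        ALL_ONES = 0b1111111               # a codeword, so c and ~c pair up
        CLUSTER = {c: min(c, c ^ ALL_ONES) for c in CODEWORDS}
        DATA_OF = {rep: i for i, rep in enumerate(sorted(set(CLUSTER.values())))}

        def write(cell_state, data3):
            """Pick the target cluster's codeword closest to the current state."""
            rep = [r for r, d in DATA_OF.items() if d == data3][0]
            return min((rep, rep ^ ALL_ONES),
                       key=lambda c: bin(c ^ cell_state).count('1'))

        def read(word):                    # nearest-codeword decode, then cluster id
            best = min(CODEWORDS, key=lambda c: bin(c ^ word).count('1'))
            return DATA_OF[CLUSTER[best]]

        state = write(0, 5)
        assert read(state) == 5
        noisy = state ^ 0b0000100          # one bit error: still correctable
        assert read(noisy) == 5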

    Download PDF (1663K)
  • Tieyuan PAN, Lian ZENG, Yasuhiro TAKASHIMA, Takahiro WATANABE
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2412-2424
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, we propose a fast Maximal Empty Rectangle (MER) enumeration algorithm for online task placement on reconfigurable Field-Programmable Gate Arrays (FPGAs). On the assumption that each task utilizes rectangle-shaped resources, the proposed algorithm manages the free space on the FPGA with an MER list. When assigning or removing a task, a series of MERs is selected and cut into segments according to the task and its assignment location. By processing these segments, the MER list can be updated quickly with low memory consumption. Having proved an upper limit on the number of MERs on the FPGA, we analyze both the time and space complexity of the proposed algorithm. The efficiency of the proposed algorithm is verified by experiments.

    Download PDF (1781K)
  • Naoya YOKOYAMA, Daiki AZUMA, Shuji TSUKIYAMA, Masahiro FUKUI
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2425-2434
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In statistical methods, such as statistical static timing analysis, the Gaussian mixture model (GMM) is a useful tool for representing a non-Gaussian distribution and handling correlation easily. In order to repeat various statistical operations such as summation and maximum on GMMs efficiently, the number of components should be restricted to around two. In this paper, we propose a method for reducing the number of components of a given GMM to two (2-GMM). Moreover, since the distribution of each component is often represented by a linear combination of some explanatory variables, we propose a method to compute the covariance between each explanatory variable and the obtained 2-GMM, that is, the sensitivity of the 2-GMM to each explanatory variable. We show some experimental results to evaluate the performance of the proposed methods. The proposed methods minimize the normalized integral square error of the probability density function of the 2-GMM at the sacrifice of some accuracy in the sensitivities of the 2-GMM.
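
    For intuition, a crude way to collapse a 1-D GMM to two components (not the paper's ISE-minimizing method) is to split the components about the overall mean and moment-match each group, which preserves the weight, mean, and variance of each half:

        import numpy as np

        def reduce_to_2gmm(w, mu, var):
            """Collapse a 1-D K-component GMM to two components by splitting
            about the overall mean and moment-matching each group.
            Assumes components fall on both sides of the overall mean."""
            w, mu, var = map(np.asarray, (w, mu, var))
            overall = np.dot(w, mu)
            out_w, out_mu, out_var = [], [], []
            for side in (mu <= overall, mu > overall):
                ws = w[side].sum()
                m = np.dot(w[side], mu[side]) / ws                    # matched mean
                v = np.dot(w[side], var[side] + mu[side]**2) / ws - m**2  # matched var
                out_w.append(ws); out_mu.append(m); out_var.append(v)
            return out_w, out_mu, out_var

        print(reduce_to_2gmm([0.3, 0.4, 0.3], [-2.0, 0.0, 3.0], [1.0, 0.5, 2.0]))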

    Download PDF (1160K)
  • Mitsutoshi SUGAWARA, Kenji MORI, Zule XU, Masaya MIYAHARA, Kenichi OKA ...
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2435-2443
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    We propose a synthesis and automatic layout method for mixed-signal circuits with high regularity. As the first step of this research, a resistive digital-to-analog converter (RDAC) is presented. With a size calculation routine, the area of the RDAC is minimized while satisfying the required matching precision without any optimization loops. We propose to partition the design into slices comprising both analog and digital cells. These cells are programmed to be synthesized similarly to custom P-Cells based on the above calculation, and automatically laid out to form one slice cell. To synthesize the digital circuits without using a digital standard cell library, we propose a versatile unit digital block consisting of 8 transistors. With one or several blocks, the transistors' interconnections are programmed within the units to realize various logic gates. By using this block, the slice shapes are aligned so that the layout space between the slices is minimized. The proposed mixed-signal slice-based partitioning facilitates the place-and-route of the whole RDAC. Post-layout simulation shows that the generated 9-bit RDAC achieves a 1GHz sampling frequency, -0.11/0.09 DNL and -0.30/0.75 INL, 3.57mW power consumption, and 0.0038mm2 active area.

    Download PDF (2315K)
  • Masato TAMURA, Makoto IKEDA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2444-2452
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper presents optimal implementation methods for 256-bit elliptic curve digital signature algorithm (ECDSA) signature generation processors with high-speed Montgomery multipliers. We explore the radix of the Montgomery multiplier data path from 2-bit to 256-bit operation and propose the use of pipelined Montgomery multipliers to optimize signature generation speed, area, and energy. The key factor in the design optimization is how modular multiplication is performed; the high-radix Montgomery multiplier is known to be an efficient implementation for high-speed modular multiplication. We have implemented ECDSA signature generation processors with high-radix Montgomery multipliers in 65-nm SOTB CMOS technology. Post-layout results show the fastest ECDSA signature generation time of 63.5µs with a radix-256-bit, two-module, four-stream pipeline architecture; the smallest area of 0.365mm2 with a radix-16-bit zero-pipeline architecture; and the smallest signature generation energy of 9.51µJ with a radix-256-bit zero-pipeline architecture.
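
    The core primitive being scaled here is Montgomery reduction, which replaces division by the modulus with shifts by the radix. A minimal radix-2^32 software sketch of the underlying algorithm (the paper's contribution is the high-radix pipelined hardware, not this code):

        def montgomery_setup(n, k):
            R = 1 << k                      # R = 2^k > n, gcd(R, n) = 1 (n odd)
            n_inv = pow(-n, -1, R)          # n' = -n^{-1} mod R
            return R, n_inv

        def redc(T, n, k, R, n_inv):
            """Montgomery reduction: T * R^{-1} mod n without dividing by n."""
            m = ((T & (R - 1)) * n_inv) & (R - 1)   # m = (T mod R) * n' mod R
            t = (T + m * n) >> k                    # exact division by R
            return t - n if t >= n else t

        n, k = 0xFFFFFFFB, 32               # odd modulus, radix 2^32
        R, n_inv = montgomery_setup(n, k)
        a, b = 123456789, 987654321
        aR, bR = (a * R) % n, (b * R) % n   # map into the Montgomery domain
        abR = redc(aR * bR, n, k, R, n_inv) # Montgomery product = a*b*R mod n
        assert redc(abR, n, k, R, n_inv) == (a * b) % n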

    Download PDF (1716K)
  • Kazuhito ITO
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2453-2462
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    A Reed-Solomon (RS) decoder is designed based on the pipelined recursive Euclidean algorithm for the key equation solution. While the Euclidean algorithm uses fewer Galois multipliers than the modified Euclidean (ME) and reformulated inversionless Berlekamp-Massey (RiBM) algorithms, it requires division between two elements of the Galois field. By implementing the division with a multi-cycle Galois inverter and a serial Galois multiplier, the proposed key equation solver architecture achieves lower complexity than the conventional ME- and RiBM-based architectures. The proposed RS (255,239) decoder reduces the hardware complexity by 25.9% with a 6.5% increase in decoding latency.

    Download PDF (1524K)
  • Tatsuya KAMAKARI, Jun SHIOMI, Tohru ISHIHARA, Hidetoshi ONODERA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2463-2472
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In synchronous LSI circuits, memory subsystems such as flip-flops and SRAMs are essential components, and latches are the base elements of common memory logic. In this paper, a stability analysis method for latches operating in the low-voltage region is proposed. The butterfly curve of a latch is key to analyzing its retention failure. This paper discusses a modeling method for retention stability and derives an analytical stability model for latches. The minimum supply voltage at which the latches can operate with a certain yield can be accurately derived by a simple calculation using the proposed model. Monte-Carlo simulation targeting 65nm and 28nm process technology models demonstrates the accuracy and validity of the proposed method, as do measurement results obtained from a test chip fabricated in a 65nm process technology. Based on the model, this paper shows some strategies for variation-tolerant design of latches.

    Download PDF (3086K)
  • Yu HOU, Zhijie CHEN, Masaya MIYAHARA, Akira MATSUZAWA
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2473-2482
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper proposes a SAR-VCO hybrid 1-1 MASH ADC architecture, in which a fully passive 1st-order noise-shaping SAR ADC is implemented in the first stage to eliminate the Op-amp. A VCO-based ADC quantizes the residue of the SAR ADC with one additional order of noise shaping in the second stage. The inter-stage gain error can be suppressed by a foreground calibration technique. The proposed ADC architecture is expected to accomplish 2nd-order noise shaping without an Op-amp, which makes both high SNDR and low power possible. A prototype ADC is designed in a 65nm CMOS technology to verify the feasibility of the proposed architecture. Transistor-level simulation results show that 75.7dB SNDR is achieved in a 5MHz bandwidth at 60MS/s. The power consumption is 748.9µW under a 1.0V supply, which results in a FoM of 14.9fJ/conversion-step.

    Download PDF (2132K)
  • Hiroyuki NAKAMOTO, Hong GAO, Hiroshi YAMAZAKI
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2483-2490
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper presents a wide-input-voltage-range, high-efficiency boost converter assisted by a transformer-based oscillator. The oscillator provides a sufficient amount of power to drive the following switched-inductor boost converter at low voltages. Moreover, it adopts a novel amplitude-regulation circuit (ARC), without high-power-consuming protective devices, to suppress the expansion of the oscillation amplitude at high input voltages; it can therefore avoid over-voltage problems without sacrificing power efficiency. Additionally, a power-down circuit (PDC) is implemented to turn off the oscillator when the boost converter can be driven by its own output power, eliminating the oscillator's power consumption and improving the power efficiency. We implemented the ARC and the PDC with discrete components rather than one-chip integration as a proof of concept. The experimental results show that the proposed circuit operates from input voltages of 60mV to 3V while maintaining a high peak efficiency of up to 92%. To the best of our knowledge, this converter provides a wider input range than previously published converters. We believe the proposed approach of inserting an appropriate start-up circuit into a commercial converter will be effective for rapid design proposals that respond promptly to customer needs for Internet of Things (IoT) devices with energy harvesters.

    Download PDF (2999K)
  • Toshihiro OZAKI, Tetsuya HIROSE, Takahiro NAGAI, Keishi TSUBAKI, Nobut ...
    Article type: PAPER
    2016 Volume E99.A Issue 12 Pages 2491-2499
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    This paper presents a fully integrated voltage boost converter consisting of a charge pump (CP) and maximum power point tracking (MPPT) controller for ultra-low power energy harvesting. The converter is based on a conventional CP circuit and can deliver a wide range of load current by using nMOS and pMOS driver circuits for highly efficient charge transfer operation. The MPPT controller we propose dissipates nano-watt power to extract maximum power regardless of the harvester's power generation conditions and load current. The measurement results demonstrated that the circuit converted a 0.49-V input to a 1.46-V output with 73% power conversion efficiency when the output power was 348µW. The circuit can operate at an extremely low input voltage of 0.21V.

    Download PDF (4068K)
  • Motoki AMAGASAKI, Ryo ARAKI, Masahiro IIDA, Toshinori SUEYOSHI
    Article type: LETTER
    2016 Volume E99.A Issue 12 Pages 2500-2506
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Most modern field-programmable gate arrays (FPGAs) use a lookup table (LUT) as their basic logic cell. LUT resource requirements increase as O(2^k) with the number of inputs k, so LUTs with more than six inputs negatively affect overall FPGA performance. To address this problem, we propose a scalable logic module (SLM), a logic cell with less configuration memory, which uses partial functions of the Shannon expansion for logics that appear frequently. In addition, we develop a technology mapping tool for the SLM. The key feature of our tool is that it combines a function decomposition process with traditional cut-based mapping. Experimental results show that an SLM-based FPGA with our mapping method uses far fewer configuration memory bits and has a smaller area than conventional LUT-based FPGAs.
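
    The expansion the SLM exploits is the standard Shannon decomposition f = (~x0 & f0) | (x0 & f1): a k-input function splits into two (k-1)-input cofactors plus a 2-to-1 mux. A quick illustrative check of the identity (not the SLM cell itself):

        import random

        def shannon_split(tt, k):
            """Split the truth table of a k-input function about its first input:
            f = (~x0 & f0) | (x0 & f1), so one k-LUT becomes two (k-1)-LUTs + mux."""
            half = 1 << (k - 1)
            return tt[:half], tt[half:]    # cofactors f0 (x0 = 0) and f1 (x0 = 1)

        k = 7
        tt = [random.randint(0, 1) for _ in range(1 << k)]  # random 7-input function
        f0, f1 = shannon_split(tt, k)
        for idx in range(1 << k):
            x0, rest = idx >> (k - 1), idx & ((1 << (k - 1)) - 1)
            mux = f1[rest] if x0 else f0[rest]              # 2-to-1 mux on x0
            assert mux == tt[idx]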

    Download PDF (773K)
  • Kazuhito ITO, Hiroki HAYASHI
    Article type: LETTER
    2016 Volume E99.A Issue 12 Pages 2507-2510
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In this paper, a hardware-efficient local extrema detection (LED) method for scale-space extrema detection in the SIFT algorithm is proposed. By reformulating how intermediate results are reused when taking the local maximum and minimum, the number of operations in LED is reduced without degrading the detection accuracy. The proposed method requires 25% to 35% less logic resources than the conventional method when implemented in an FPGA, with a slight increase in latency.
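
    For reference, the computation being optimized checks each DoG sample against its 26 neighbors in a 3x3x3 scale-space window. The paper's point is to reuse intermediate max/min results in hardware, which the naive sketch below deliberately does not do:

        import numpy as np

        def local_extrema(dog):
            """Flag scale-space extrema: samples strictly above or below all 26
            neighbors in a 3x3x3 window of the DoG stack (scale, row, col)."""
            s, h, w = dog.shape
            out = np.zeros(dog.shape, dtype=bool)
            for z in range(1, s - 1):
                for y in range(1, h - 1):
                    for x in range(1, w - 1):
                        win = dog[z-1:z+2, y-1:y+2, x-1:x+2].ravel()
                        nbr = np.delete(win, 13)     # drop the center sample
                        c = dog[z, y, x]
                        out[z, y, x] = c > nbr.max() or c < nbr.min()
            return out

        dog = np.random.randn(3, 8, 8)               # toy 3-scale DoG stack
        print(np.argwhere(local_extrema(dog)))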

    Download PDF (289K)
Regular Section
  • Masaki KOBAYASHI
    Article type: PAPER
    Subject area: Nonlinear Problems
    2016 Volume E99.A Issue 12 Pages 2511-2516
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    In recent years, applications of neural networks with Clifford algebras have become widespread. Hyperbolic numbers are a Clifford algebra useful for dealing with hyperbolic geometry. Although several models have been proposed, it is difficult to extend the Hopfield neural network to hyperbolic versions. Multistate and continuous hyperbolic Hopfield neural networks are promising models, but their connection weights and the domain of the activation function have been limited to the right quadrant of the hyperbolic plane, and the learning algorithms are restricted. In this work, the connection weights and activation function are extended to the entire hyperbolic plane. In addition, the energy is defined and it is proven that the energy does not increase.
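
    Background on the algebra involved: hyperbolic (split-complex) numbers a + bu satisfy u^2 = +1, and their indefinite modulus a^2 - b^2 vanishes on the lines a = ±b, which is what makes extending networks beyond one quadrant delicate. A minimal arithmetic sketch, not the paper's network model:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Hyperbolic:
            """Hyperbolic (split-complex) number a + b*u with u*u = +1."""
            a: float
            b: float

            def __add__(self, o):
                return Hyperbolic(self.a + o.a, self.b + o.b)

            def __mul__(self, o):    # (a+bu)(c+du) = (ac+bd) + (ad+bc)u
                return Hyperbolic(self.a * o.a + self.b * o.b,
                                  self.a * o.b + self.b * o.a)

            def modulus2(self):      # indefinite "norm" a^2 - b^2, can be <= 0
                return self.a * self.a - self.b * self.b

        u = Hyperbolic(0.0, 1.0)
        print(u * u)                            # Hyperbolic(a=1.0, b=0.0): u^2 = +1
        print(Hyperbolic(1.0, 1.0).modulus2())  # 0.0: nonzero zero divisors exist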

    Download PDF (303K)
  • Guangbo WANG, Jianhua WANG, Zhencheng GUO
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2016 Volume E99.A Issue 12 Pages 2517-2526
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Self-updating encryption (SUE) is a cryptographic scheme introduced in recent work by Lee, Choi, Lee, Park and Yung (Asiacrypt 2013) to achieve a time-updating mechanism for revocation. In SUE, a ciphertext and a private key are associated with times, and a user can decrypt a ciphertext only if its time is earlier than that of his private key. One drawback is that the encryption computational overhead scales with the size of the time, which makes it a possible bottleneck for some applications. To address this problem, we provide a new technique for SUE that splits the encryption algorithm into two phases: an offline phase and an online phase. In the offline phase, an intermediate ciphertext header is generated before the concrete encryption time is known. The online phase then rapidly generates an SUE ciphertext header once the time becomes known, by making use of the intermediate ciphertext header. In addition, two different online encryption constructions are proposed for different time levels, taking 50% as the boundary. Finally, we prove the security of our scheme and provide a performance analysis showing that the vast majority of the computational overhead can be moved to the offline phase. One motivating application for this technique is resource-constrained mobile devices: the preparation work can be done while the mobile devices are plugged into a power source, and they can later rapidly perform SUE operations on the move without significantly draining the battery.

    Download PDF (1078K)
  • Routo TERADA, Ewerton R. ANDRADE
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2016 Volume E99.A Issue 12 Pages 2527-2538
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Patarin proposed a cryptographic trapdoor called the Hidden Field Equation (HFE), based on the Multivariate Quadratic (MQ) and the Isomorphism of Polynomials (IP) problems. The MQ problem was proved by Patarin et al. to be NP-complete. Although the basic HFE has been proved vulnerable to attacks, variants obtained by some modifications have been shown to be stronger. The Quartz digital signature scheme, based on the HFEv- trapdoor (a variant of HFE) with particular parameter choices, has been shown to be stronger against algebraic attacks that recover the private key, and it generates reasonably short signatures. However, Joux et al. proved (based on the birthday paradox attack) that Quartz is malleable in the sense that, if an adversary obtains a valid message-signature pair, a valid signature for another related message is obtainable with 2^50 computations and 2^50 queries to the signing oracle. Currently, the recommended minimum security level is 2^112. Our signature scheme is also based on Quartz, but we achieve a 2^112 security level against Joux et al.'s attack. It is also more efficient in signature verification and vector initializations. Furthermore, we implemented both the original and our improved Quartz signatures and ran empirical comparisons.

    Download PDF (719K)
  • Jun ZHANG, Jinglu HU
    Article type: PAPER
    Subject area: Image
    2016 Volume E99.A Issue 12 Pages 2539-2546
    Published: December 01, 2016
    Released on J-STAGE: December 01, 2016
    JOURNAL RESTRICTED ACCESS

    Three-dimensional (3D) reconstruction of a medical image sequence can provide intuitive morphologies of a target and help doctors make a more reliable diagnosis and a proper treatment plan. This paper aims to reconstruct the surface of the renal corpuscle from a microscope renal biopsy image sequence. First, the contours of the renal corpuscle in all slices are extracted automatically using a context-based segmentation method with coarse registration. Then, a new coevolution-based strategy is proposed to realize fine registration. Finally, a Gauss-Seidel iteration method is introduced to achieve non-rigid registration. Benefiting from these registrations, a smooth surface of the target can be reconstructed easily. Experimental results show that the proposed method can effectively register the contours and produce a surface acceptable to medical doctors.

    Download PDF (3348K)