IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Volume E97.A, Issue 6
Displaying 1-35 of 35 articles from this issue
Special Section on Discrete Mathematics and Its Applications
  • Kazuyuki AMANO
    2014 Volume E97.A Issue 6 Pages 1162
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Download PDF (252K)
  • Katsuhisa YAMANAKA, Shin-ichi NAKANO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1163-1170
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    A ladder lottery, known as the “Amidakuji” in Japan, is a network with n vertical lines and many horizontal lines each of which connects two consecutive vertical lines. Each ladder lottery corresponds to a permutation. Ladder lotteries are frequently used as natural models in many areas. Given a permutation π, an algorithm to enumerate all ladder lotteries of π with the minimum number of horizontal lines is known. In this paper, given a permutation π and an integer k, we design an algorithm to enumerate all ladder lotteries of π with exactly k horizontal lines.
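    As a minimal illustration of the correspondence between a ladder lottery and a permutation (an editorial Python sketch; the encoding of horizontal lines as top-to-bottom swaps of adjacent vertical lines is an assumption of this example, not the paper's enumeration algorithm):

      def ladder_to_permutation(n, horizontal_lines):
          """Apply the horizontal lines from top to bottom; line i swaps the
          items currently on vertical lines i and i+1 (1-indexed)."""
          items = list(range(1, n + 1))          # item j starts on vertical line j
          for i in horizontal_lines:
              items[i - 1], items[i] = items[i], items[i - 1]
          return items                           # items[k-1] is the item ending on line k

      # 3 vertical lines with horizontal lines between (1,2) and then (2,3):
      print(ladder_to_permutation(3, [1, 2]))    # [2, 3, 1]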
    Download PDF (735K)
  • Yuma INOUE, Takahisa TODA, Shin-ichi MINATO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1171-1179
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Pattern-avoiding permutations are permutations in which no subsequence matches the relative order of a given pattern. Pattern-avoiding permutations are related to practical and abstract mathematical problems and can provide simple representations for such problems. For example, some floorplans, which are used for optimizing very-large-scale integration (VLSI) circuit design, can be encoded into pattern-avoiding permutations. The generation of pattern-avoiding permutations is an important topic in efficient VLSI design and in the mathematical analysis of pattern-avoiding permutations. In this paper, we present an algorithm for generating pattern-avoiding permutations, and extend this algorithm beyond classical patterns to generalized patterns with more restrictions. Our approach is based on the data structure πDD, which can represent a permutation set compactly and supports useful set operations. We demonstrate the efficiency of our algorithm by computational experiments.
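    For concreteness, the following brute-force check of classical pattern avoidance (a hedged Python sketch; the paper's πDD-based generation algorithm is far more efficient) recovers the Catalan count of 231-avoiding permutations of length 4:

      from itertools import combinations, permutations

      def avoids(perm, pattern):
          """Return True if perm contains no subsequence whose relative order
          matches pattern (classical pattern avoidance, checked by brute force)."""
          k = len(pattern)
          rank = lambda seq: [sorted(seq).index(x) for x in seq]
          return all(rank(list(sub)) != rank(list(pattern))
                     for sub in combinations(perm, k))

      # 231-avoiding permutations of length 4 are counted by the Catalan number C_4 = 14.
      count = sum(avoids(p, (2, 3, 1)) for p in permutations(range(1, 5)))
      print(count)   # 14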
    Download PDF (1357K)
  • Kung-Jui PAI, Jou-Ming CHANG, Yue-Li WANG, Ro-Yu WU
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1180-1186
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    A queue layout of a graph G consists of a linear order of its vertices and a partition of its edges into queues such that no two edges in the same queue are nested. The queuenumber qn(G) is the minimum number of queues required in a queue layout of G. The Cartesian product of two graphs G1 = (V1,E1) and G2 = (V2,E2), denoted by G1 × G2, is the graph with {⟨v1,v2⟩ : v1 ∈ V1 and v2 ∈ V2} as its vertex set, where an edge (⟨u1,u2⟩,⟨v1,v2⟩) belongs to G1 × G2 if and only if either (u1,v1) ∈ E1 and u2 = v2, or (u2,v2) ∈ E2 and u1 = v1. Let T_{k1,k2,...,kn} denote the n-dimensional toroidal grid defined by the Cartesian product of n cycles of possibly different lengths, i.e., T_{k1,k2,...,kn} = C_{k1} × C_{k2} × … × C_{kn}, where C_{ki} is a cycle of length ki ≥ 3. If k1 = k2 = … = kn = k, the graph is also called the k-ary n-cube and is denoted by Q_n^k. In this paper, we deal with queue layouts of toroidal grids and show the following bound: qn(T_{k1,k2,...,kn}) ≤ 2n-2 if n ≥ 2 and ki ≥ 3 for all i = 1,2,...,n. In particular, for n = 2 and k1,k2 ≥ 3, we obtain qn(T_{k1,k2}) = 2. Recently, Pai et al. (Inform. Process. Lett. 110 (2009) pp.50-56) showed that qn(Q_n^k) ≤ 2n-1 if n ≥ 1 and k ≥ 9. Thus, our result improves the bound on qn(Q_n^k) when n ≥ 2 and k ≥ 9.
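    The nesting condition in the definition can be checked directly; the following Python sketch (illustrative only, not the proof technique of the paper) verifies a candidate queue assignment against a vertex order:

      from itertools import combinations

      def is_queue_layout(order, queues):
          """Check that no two edges assigned to the same queue are nested
          under the given linear order of the vertices."""
          pos = {v: i for i, v in enumerate(order)}
          ends = lambda e: tuple(sorted((pos[e[0]], pos[e[1]])))
          for queue in queues:
              for (a, b), (c, d) in combinations(map(ends, queue), 2):
                  if a < c and d < b or c < a and b < d:   # one edge strictly inside the other
                      return False
          return True

      # A 4-cycle under the order 0,1,2,3: edges (0,3) and (1,2) nest in a single queue.
      print(is_queue_layout([0, 1, 2, 3], [[(0, 1), (1, 2), (2, 3), (0, 3)]]))      # False
      print(is_queue_layout([0, 1, 2, 3], [[(0, 1), (1, 2), (2, 3)], [(0, 3)]]))    # True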
    Download PDF (1065K)
  • Wen-Yin HUANG, Jia-Jie LIU, Jou-Ming CHANG, Ro-Yu WU
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1187-1191
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    An n-dimensional folded hypercube, denoted by FQn, is an enhanced n-dimensional hypercube with one extra link between nodes that have the furthest Hamming distance. Let FFv (respectively, FFe) denote the set of faulty nodes (respectively, faulty links) in FQn. Under the assumption that every fault-free node in FQn is incident to at least two fault-free links, Hsieh et al. (Inform. Process. Lett. 110 (2009) pp.41-53) showed that if |FFv|+|FFe| ≤ 2n-4 for n ≥ 3, then FQn-FFv-FFe contains a fault-free cycle of length at least 2^n-2|FFv|. In this paper, we show that, under the same conditional fault model, FQn with n ≥ 5 can tolerate more faulty elements while providing the same lower bound on the length of a longest fault-free cycle, i.e., FQn-FFv-FFe contains a fault-free cycle of length at least 2^n-2|FFv| if |FFv|+|FFe| ≤ 2n-3 for n ≥ 5.
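    A folded hypercube is easy to generate programmatically; the sketch below (an editorial illustration, with node labels taken as n-bit integers) lists the neighbors of a node in FQn, namely the n hypercube neighbors plus the complementary node:

      def folded_hypercube_neighbors(v, n):
          """Neighbors of node v (an n-bit label) in the folded hypercube FQ_n:
          the n hypercube neighbors plus the node at Hamming distance n."""
          mask = (1 << n) - 1
          return [v ^ (1 << i) for i in range(n)] + [v ^ mask]

      # FQ_3: node 000 is adjacent to 001, 010, 100 and, via the extra link, 111.
      print([format(u, '03b') for u in folded_hypercube_neighbors(0b000, 3)])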
    Download PDF (647K)
  • Tamaki NAKAJIMA, Yuuki TANAKA, Toru ARAKI
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1192-1199
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    A twin dominating set of a digraph D is a subset S of vertices such that, for every vertex u ∉ S, there are vertices x,y ∈ S such that ux and yu are arcs of D. A digraph D is round if its vertices can be labeled v0,v1,...,vn-1 so that, for each vertex vi, the out-neighbors of vi appear consecutively following vi and the in-neighbors of vi appear consecutively preceding vi. In this paper, we give polynomial-time algorithms for finding a minimum weight twin dominating set and a minimum weight total twin dominating set of a weighted round digraph. We then show that there is a polynomial-time algorithm for deciding whether a locally semicomplete digraph has an independent twin dominating set. The class of locally semicomplete digraphs contains round digraphs as a special case.
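    The defining condition of a twin dominating set can be verified directly; the following brute-force Python check (an editorial sketch, not the polynomial-time algorithm of the paper) illustrates the definition on a directed 4-cycle:

      def is_twin_dominating(arcs, vertices, S):
          """Check that S is a twin dominating set of the digraph (vertices, arcs):
          every vertex u outside S has an out-neighbor in S and an in-neighbor in S."""
          arcset = set(arcs)
          return all(any((u, x) in arcset for x in S) and
                     any((y, u) in arcset for y in S)
                     for u in vertices if u not in S)

      # Directed 4-cycle 0→1→2→3→0: {0, 2} is a twin dominating set, {0} is not.
      cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
      print(is_twin_dominating(cycle, range(4), {0, 2}))   # True
      print(is_twin_dominating(cycle, range(4), {0}))      # False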
    Download PDF (781K)
  • Hiroshi FUJIWARA, Yasuhiro KONNO, Toshihiro FUJITO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1200-1205
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    The multislope ski-rental problem is an extension of the classical ski-rental problem in which, in addition to the pure renting and buying options, the player has several options each consisting of an initial fee and a per-time fee. Damaschke gave a lower bound of 3.62 on the competitive ratio for the case where an arbitrary number of options can be offered. In this paper we propose a scheme that, given the number of options as an input, provides a lower bound on the competitive ratio, by extending the method of Damaschke. This is the first result to establish a lower bound for each of the cases with five or more options; for example, we obtain a lower bound of 2.95 for the 5-option case, 3.08 for the 6-option case, and 3.18 for the 7-option case. Moreover, it turns out that our lower bounds for the 3- and 4-option cases coincide with the known upper bounds. We therefore conjecture that our scheme in general derives matching lower and upper bounds.
    Download PDF (573K)
  • Zachary ABEL, Erik D. DEMAINE, Martin L. DEMAINE, Takashi HORIYAMA, Ry ...
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1206-1212
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    We prove NP-completeness of deciding whether a given loop of colored right isosceles triangles, hinged together at edges, can be folded into a specified rectangular three-color pattern. By contrast, the same problem becomes polynomially solvable with one color or when the target shape is a tree-shaped polyomino.
    Download PDF (2381K)
  • Erik D. DEMAINE, Yoshio OKAMOTO, Ryuhei UEHARA, Yushi UNO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1213-1219
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Shakashaka is a pencil-and-paper puzzle proposed by Guten and popularized by the Japanese publisher Nikoli (as with Sudoku). We determine the computational complexity of this puzzle by proving that Shakashaka is NP-complete, and furthermore that counting the number of solutions is #P-complete. Next, we formulate Shakashaka as an integer-programming (IP) problem and show that an IP solver can solve every instance from Nikoli's website within a second.
    Download PDF (1573K)
  • Jinhee CHUN, Akiyoshi SHIOURA, Truong MINH TIEN, Takeshi TOKUYAMA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1220-1230
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    We give a unified view of greedy geometric routing algorithms in ad hoc networks. To this end, we first present a general form of greedy routing algorithm that uses a class of objective functions which are invariant under congruent transformations of a point set. We show that several known greedy routing algorithms, such as Greedy Routing, Compass Routing, and Midpoint Routing, can be regarded as special cases of the generalized greedy routing algorithm. In addition, inspired by the unified view of greedy routing, we propose three new greedy routing algorithms. We then derive a sufficient condition for our generalized greedy routing algorithm to guarantee packet delivery on every Delaunay graph. This condition makes it easier to check whether a given routing algorithm guarantees packet delivery, and it is closed under convex linear combinations of objective functions. It is shown that Greedy Routing, Midpoint Routing, and the three new greedy routing algorithms proposed in this paper satisfy the sufficient condition, i.e., they guarantee packet delivery on Delaunay graphs. We also discuss the merits and demerits of these methods.
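    The unified view can be pictured as one routing loop parameterized by an objective function; the Python sketch below is an editorial illustration (the function names and the simple loop guard are assumptions, not the paper's formulation), with Greedy Routing and Midpoint Routing expressed as two choices of the objective:

      import math

      def greedy_route(neighbors, pos, src, dst, score):
          """Generic greedy routing: at each step, forward the packet to the
          neighbor minimizing score(current, candidate, destination) on positions."""
          path, cur = [src], src
          while cur != dst:
              nxt = min(neighbors[cur], key=lambda v: score(pos[cur], pos[v], pos[dst]))
              if nxt in path:                    # crude loop guard for this sketch
                  return path, False
              path.append(nxt)
              cur = nxt
          return path, True

      dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
      greedy_score   = lambda c, v, d: dist(v, d)            # Greedy Routing: distance to destination
      midpoint_score = lambda c, v, d: dist(v, ((c[0] + d[0]) / 2, (c[1] + d[1]) / 2))  # Midpoint Routing

      # Toy example: unit square without diagonals; route from node 0 to node 3.
      pts = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
      print(greedy_route(adj, pts, 0, 3, greedy_score))      # ([0, 1, 3], True)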
    Download PDF (1232K)
  • Seth PHILLIPS, Ivan FAIR
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1231-1239
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In holographic data storage, information is recorded within the volume of a holographic medium. Typically, the data is presented as an array of pixels with modulation in amplitude and/or phase. In the 4-f orientation, the Fourier domain representation of the data array is produced optically, and this image is recorded. If the Fourier image contains large peaks, the recording material can saturate, which leads to errors in the read-out data array. In this paper, we present a coding process that produces sparse ternary data arrays. Ternary modulation is used because it inherently provides Fourier domain smoothing and allows more data to be stored per array in comparison to binary modulation. Sparse arrays contain fewer on-pixels than dense arrays, and thus contain less power overall, which reduces the severity of peaks in the Fourier domain. The coding process first converts binary data to a sequence of ternary symbols via a high-rate block code, and then uses guided scrambling to produce a set of candidate codewords, from which the most sparse is selected to complete the encoding process. Our analysis of the guided scrambling division and selection processes demonstrates that, with primitive scrambling polynomials, a sparsity greater than 1/3 is guaranteed for all encoded arrays, and that the probability of this worst-case sparsity decreases with increasing block size.
    Download PDF (1065K)
  • Kazuto OGAWA, Go OHTAKE, Arisa FUJII, Goichiro HANAOKA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1240-1258
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    For the sake of privacy preservation, services that are offered with reference to individual user preferences should be provided with a sufficient degree of anonymity. We surveyed various tools that meet the requirements of such services and decided that group signature schemes with weakened anonymity (without unlinkability) are adequate. We then investigated the theoretical gap between the unlinkability of group signature schemes and their other requirements, and show that this gap is significantly large. Specifically, we clarify that if unlinkability can be achieved from any other property of group signature schemes, it becomes possible to construct a chosen-ciphertext secure cryptosystem from any one-way function. This result implies that the efficiency of group signature schemes can be drastically improved if unlinkability is not taken into account. We also demonstrate a way to construct a scheme without unlinkability that is significantly more efficient than the best known full-fledged scheme.
    Download PDF (1294K)
  • Atsushi TAKAYASU, Noboru KUNIHIRO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1259-1272
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    At CaLC 2001, Howgrave-Graham proposed a polynomial-time algorithm for solving univariate linear equations modulo an unknown divisor of a known composite integer, the so-called partially approximate common divisor problem. So far, two multivariate generalizations of the problem have been considered in the context of cryptanalysis. The first is simultaneous modular univariate linear equations, for which a polynomial-time algorithm was proposed at ANTS 2012 by Cohn and Heninger. The second is modular multivariate linear equations, for which a polynomial-time algorithm was proposed at Asiacrypt 2008 by Herrmann and May. Both algorithms cover Howgrave-Graham's algorithm in the univariate case. Moreover, both multivariate problems become identical to Howgrave-Graham's problem in the asymptotic cases of the root bounds; however, the former algorithms do not cover Howgrave-Graham's algorithm in such cases. In this paper, we introduce a strategy for natural algorithm constructions that takes the sizes of the root bounds into account. We work out the selection of polynomials used to construct the lattices. Our algorithms are superior to all known attacks that solve the multivariate equations and generalize to the case of an arbitrary number of variables. Our algorithms achieve better cryptanalytic bounds for some applications related to RSA cryptosystems.
    Download PDF (682K)
  • Noboru KUNIHIRO, Naoyuki SHINOHARA, Tetsuya IZU
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1273-1284
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. There are two known algorithms that recover the original secret key from a noisy one. At Crypto 2009, Heninger and Shacham proposed a method for the case where an erroneous version of the secret key contains only erasures. Subsequently, Henecka et al. proposed a method for an erroneous version containing only errors at Crypto 2010. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover a secret key from such an erroneous version and analyze the condition on the error and erasure rates under which our algorithm succeeds in finding the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
    Download PDF (586K)
  • Noboru KUNIHIRO, Naoyuki SHINOHARA, Tetsuya IZU
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1285-1295
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In this paper, we present a lattice-based method for the small secret exponent attack on the RSA scheme. Boneh and Durfee reduced the attack to finding small roots of the bivariate modular equation x(N+1+y)+1 ≡ 0 (mod e), where N is an RSA modulus and e is the RSA public exponent, and proposed a lattice-based algorithm for solving the problem. When the secret exponent d is less than N^0.292, their method breaks the RSA scheme. Since the lattice used in the analysis is not full-rank, the analysis is not easy. Blömer and May proposed an alternative algorithm that uses a full-rank lattice, even though it gives a bound (d ≤ N^0.290) that is worse than that of Boneh and Durfee. However, the proof of their bound is still complicated. Herrmann and May later gave an elementary proof of the Boneh-Durfee bound d ≤ N^0.292. In this paper, we first give an elementary proof achieving the Blömer-May bound d ≤ N^0.290. Our proof employs the unravelled linearization technique introduced by Herrmann and May and is rather simpler than Blömer-May's proof. We then provide a unified framework, which subsumes the two previous methods, the Herrmann-May and the Blömer-May methods, as special cases, for constructing a lattice that can be used to solve the problem. In addition, we prove that the Boneh-Durfee bound d ≤ N^0.292 is still optimal in our unified framework.
    Download PDF (536K)
  • Shingo HASEGAWA, Shuji ISOBE
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1296-1306
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Lossy identification schemes are used to construct tightly secure signature schemes via the Fiat-Shamir heuristic in the random oracle model. Several lossy identification schemes have been instantiated under the short discrete logarithm assumption, the ring-LWE assumption, and the subset sum assumption, respectively. For assumptions related to integer factoring, Abdalla, Ben Hamouda and Pointcheval [3] recently presented lossy identification schemes based on the φ-hiding assumption, the QR assumption, and the DCR assumption, respectively. In this paper, we propose new instantiations of lossy identification schemes. We first construct a variant of Schnorr's identification scheme and show its lossiness under the subgroup decision assumption. We also construct a lossy identification scheme based on the DCR assumption. Our DCR-based scheme has an advantage over the ABP DCR-based scheme in that it needs no modular exponentiation in the response phase. Therefore our scheme is suitable for transformation into an online/offline signature.
    Download PDF (367K)
  • Atsushi FUJIOKA, Taiichi SAITO, Keita XAGAWA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1307-1317
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    This paper proposes a generic construction of hierarchical identity-based identification (HIBI) protocols secure against impersonation under active and concurrent attacks in the standard model. The proposed construction converts a digital signature scheme that is existentially unforgeable against chosen-message attacks into an HIBI protocol, provided that the scheme has a protocol for showing possession of a signing key rather than of a signature. Our construction is based on the so-called certificate-based construction of hierarchical identity-based cryptosystems, and utilizes a variant of the well-known OR-proof technique to ensure security against impersonation under active and concurrent attacks. We also present several concrete instantiations of our construction employing the Waters signature (EUROCRYPT 2005) and other signatures. As a result, the concurrent security of each instantiation is proved under the computational Diffie-Hellman (CDH) assumption, the RSA assumption, or their variants in the standard model. Chin, Heng, and Goi proposed HIBI protocols that are passively and concurrently secure under the CDH and one-more CDH assumptions, respectively (FGIT-SecTech 2009); however, their security is proved in the random oracle model.
    Download PDF (669K)
  • Atsushi FUJIOKA, Eiichiro FUJISAKI, Keita XAGAWA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1318-1334
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    We study non-malleability of multiple public-key encryption (ME) schemes. The main difference between ME and threshold public-key encryption schemes is that in ME there is no dealer to share a secret among users; each user can independently choose their own public keys, and a sender can encrypt a message under an ad-hoc set of multiple public keys of his choice. In this paper we tackle non-malleability of ME. We note that prior works only consider confidentiality of messages and treat the case where all public keys are chosen by honest users. In the multiple public-key setting, however, some applications naturally require non-malleability of ciphertexts under multiple public keys including those of malicious users. Therefore, we study this case and obtain the following results:
    ·We present three definitions of non-malleability of ME: simulation-based, comparison-based, and indistinguishability-based ones. These definitions can be seen as analogues of those for non-malleable public-key encryption (PKE) schemes. Interestingly, our definitions are all equivalent even for “invalid-allowing” relations; we note that the PKE counterparts are not equivalent for such relations.
    ·The previous strongest security notion for ME, “indistinguishability against strong chosen-ciphertext attacks (sMCCA)” [1], does not imply our notion of non-malleability against chosen-plaintext attacks.
    ·Non-malleability of ME guarantees that the single-message indistinguishability-based notion is equivalent to the multiple-message simulation-based notion, which provides designers with a fundamental benefit.
    ·We define a new, stronger decryption robustness for ME. A non-malleable ME scheme is meaningful in practice if it also has this decryption robustness.
    ·We present a constant-ciphertext-size ME scheme (meaning that the length of a ciphertext is independent of the number of public keys) that is secure in our strongest notion of non-malleability. Indeed, the ciphertext overhead (i.e., the length of a ciphertext minus that of a plaintext) is the combined length of two group elements plus one hash value, regardless of the number of public keys. Moreover, the partial decryption produced by one user consists of only two group elements, regardless of the length of the plaintext.
    Download PDF (735K)
  • Kazuki YONEYAMA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1335-1344
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    ID-based authenticated key exchange (ID-AKE) is a cryptographic tool for establishing a common session key between parties, with authentication based on their IDs. If IDs contain some hierarchical structure, such as an e-mail address, hierarchical ID-AKE (HID-AKE) is especially suitable because of its scalability. However, most existing HID-AKE schemes do not satisfy advanced security properties such as forward secrecy, and the only known strongly secure HID-AKE scheme is inefficient. In this paper, we propose a new HID-AKE scheme which achieves both strong security and efficiency. We prove that our scheme is eCK-secure (which ensures maximal-exposure resilience, including forward secrecy) without random oracles, while existing schemes are proved secure only in the random oracle model. Moreover, the number of messages and pairing operations is independent of the hierarchy depth; that is, our scheme is truly scalable and practical for large systems.
    Download PDF (259K)
  • Koutarou SUZUKI, Kazuki YONEYAMA
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1345-1355
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    This paper studies Tripartite Key Exchange (3KE), which is a special case of Group Key Exchange (GKE). Though no general one-round GKE satisfying advanced security properties such as forward secrecy and maximal-exposure resilience (MEX-resilience) is known, such a protocol can be efficiently constructed with the help of pairings in the 3KE case. In this paper, we introduce the first one-round 3KE which is MEX-resilient in the standard model, whereas existing one-round 3KE schemes are either proved only in the random oracle model (ROM) or not MEX-resilient. Each party broadcasts 4 group elements and executes 14 pairing operations. The complexity is only three or four times larger in computation and communication than that of the most efficient existing MEX-resilient 3KE scheme in the ROM; thus, our protocol is adequately practical.
    Download PDF (357K)
  • Yichao LU, Xiao PENG, Guifen TIAN, Satoshi GOTO
    Article type: PAPER
    2014 Volume E97.A Issue 6 Pages 1356-1364
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Majority-logic algorithms are devised for decoding non-binary LDPC codes in order to reduce computational complexity. However, compared with conventional belief propagation algorithms, majority-logic algorithms suffer from severe bit error performance degradation. This paper presents a low-complexity reliability-based algorithm aimed at improving the error correcting ability of majority-logic algorithms. Reliability measures for check nodes are newly introduced to realize mutual updates between variable messages and check messages, so that more efficient reliability propagation can be achieved, similar to the belief propagation algorithm. Simulation results on NB-LDPC codes with different characteristics demonstrate that, compared with both the ISRB-MLGD and IISRB-MLGD algorithms, our algorithm can reduce the bit error ratio by more than one order of magnitude, and the coding gain over ISRB-MLGD can reach 0.2-2.0dB. Moreover, simulations on typical LDPC codes show that the computational complexity of the proposed algorithm is close to that of the ISRB-MLGD algorithm and is less than 10% of that of the Min-max algorithm. As a result, the proposed algorithm achieves a more efficient trade-off between decoding computational complexity and error performance.
    Download PDF (1503K)
  • Hirotoshi HONMA, Yoko NAKAJIMA, Yuta IGARASHI, Shigeru MASUYAMA
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1365-1369
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Consider a simple undirected graph G = (V,E) with vertex set V and edge set E. Let G-u be the subgraph induced by the vertex set V-{u}. The distance δG(x,y) is defined as the length of a shortest path between vertices x and y in G. A vertex u ∈ V is a hinge vertex if there exist two vertices x,y ∈ V-{u} such that δG-u(x,y) > δG(x,y). Let U be the set consisting of all hinge vertices of G. The neighborhood of u, denoted by N(u), is the set of all vertices adjacent to u. We define d(u) = max{δG-u(x,y) | δG-u(x,y) > δG(x,y), x,y ∈ N(u)} for u ∈ U as the detour degree of u. The maximum detour hinge vertex problem is to find a hinge vertex u with maximum d(u) in G. In this paper, we propose an algorithm that finds the maximum detour hinge vertex of an interval graph in O(n^2) time, where n is the number of vertices in the graph.
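    The definitions of hinge vertex and detour degree can be checked by brute force with breadth-first search; the Python sketch below (an editorial illustration, not the O(n^2) interval-graph algorithm of the letter) computes d(u) on a 5-cycle, where every vertex is a hinge vertex with detour degree 3:

      from collections import deque
      from itertools import combinations

      def bfs_dist(adj, src, skip=None):
          """Shortest-path lengths from src, optionally ignoring vertex `skip`."""
          d, q = {src: 0}, deque([src])
          while q:
              x = q.popleft()
              for y in adj[x]:
                  if y != skip and y not in d:
                      d[y] = d[x] + 1
                      q.append(y)
          return d

      def detour_degree(adj, u):
          """d(u): the largest distance in G-u between neighbors x,y of u whose
          distance increases when u is removed; None if u is not a hinge vertex."""
          best = None
          for x, y in combinations(adj[u], 2):
              base = bfs_dist(adj, x)[y]
              detour = bfs_dist(adj, x, skip=u).get(y, float('inf'))
              if detour > base:
                  best = detour if best is None else max(best, detour)
          return best

      c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
      print(detour_degree(c5, 0))   # 3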
    Download PDF (239K)
  • Shinsuke ODAGIRI, Hiroyuki GOTO
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1370-1374
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    For a fixed number of nodes, we focus on directed acyclic graphs that contain no shortcut. We identify the case in which the number of paths is maximized and give the corresponding count of maximal paths. Considering this case is essential when solving large-scale scheduling problems using a PERT chart.
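    For reference, counting the maximal (non-extendable) paths of a given DAG is a simple dynamic program; the Python sketch below is an editorial illustration of the quantity being maximized, not the extremal construction of the letter:

      from functools import lru_cache

      def count_maximal_paths(succ):
          """Count maximal paths of a DAG (successor-list form): every maximal path
          runs from a source (no incoming arc) to a sink (no outgoing arc)."""
          nodes = set(succ) | {v for vs in succ.values() for v in vs}
          has_pred = {v for vs in succ.values() for v in vs}

          @lru_cache(maxsize=None)
          def paths_from(v):
              nexts = succ.get(v, [])
              return 1 if not nexts else sum(paths_from(w) for w in nexts)

          return sum(paths_from(v) for v in nodes if v not in has_pred)

      # Diamond DAG a→b, a→c, b→d, c→d: two maximal paths (a,b,d) and (a,c,d).
      print(count_maximal_paths({'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}))   # 2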
    Download PDF (237K)
  • Tatsuya FUJIMOTO, Tsunehiro YOSHINAGA, Makoto SAKAMOTO
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1375-1377
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    A cooperating system of finite automata (CS-FA) consists of more than one finite automaton (FA) and an input tape. The FA's operate independently on the input tape and can communicate with each other on the same cell of the input tape. For each k ≥ 1, let L[CS-1DFA(k)] (resp. L[CS-1UFA(k)]) be the class of sets accepted by CS-FA's with k one-way deterministic finite automata (resp. k one-way alternating finite automata with only universal states). We show that L[CS-1DFA(k+1)] - L[CS-1UFA(k)] ≠ ∅ and L[CS-1UFA(2)] - ∪_{1≤k<∞} L[CS-1DFA(k)] ≠ ∅.
    Download PDF (83K)
  • Ryuichi HARASAWA, Yutaka SUEYOSHI, Aichi KUDO
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1378-1381
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In the paper [4], the authors generalized the Cipolla-Lehmer method [2][5] for computing square roots in finite fields to the case of r-th roots with r prime, and compared it with the Adleman-Manders-Miller method [1] from the experimental point of view. In this paper, we compare these two methods from the theoretical point of view.
    Download PDF (221K)
  • Ei ANDO
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1382-1384
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In this paper, we show a connection between #P and computing the (real) value of a high-order derivative at the origin. Consider, as a problem instance, an integer b and a sufficiently often differentiable function F(x) that is given as a string. Then we consider computing the value F^(b)(0) of the b-th derivative of F(x) at the origin. By showing a polynomial as an example, we show that FP = #P if we can compute log_2 F^(b)(0) up to a certain precision. The previous statement holds even if F(x) is restricted to functions that are analytic at every x ∈ R. It implies the hardness of computing the b-th value of a number sequence from the closed form of its generating function.
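    The link exploited here rests on the elementary identity F^(b)(0) = b!·a_b for a power series F(x) = Σ a_k x^k; the Python sketch below (an editorial illustration using the Fibonacci generating function truncated to a polynomial) shows how the b-th coefficient of a number sequence is read off from the b-th derivative at the origin:

      import math

      def derivative_at_zero(coeffs, b):
          """b-th derivative at 0 of the polynomial sum_k coeffs[k] * x**k."""
          return math.factorial(b) * coeffs[b]

      # Coefficients of x/(1 - x - x^2), i.e., the Fibonacci numbers, truncated.
      fib = [0, 1]
      for _ in range(10):
          fib.append(fib[-1] + fib[-2])

      b = 7
      print(derivative_at_zero(fib, b) // math.factorial(b), fib[b])   # 13 13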
    Download PDF (82K)
  • Keehang KWON
    Article type: LETTER
    2014 Volume E97.A Issue 6 Pages 1385-1387
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    This paper proposes a new approach to defining and expressing algorithms: the notion of task logical algorithms. This notion allows the user to define an algorithm for a task T as a set of agents who can collectively perform T. It considerably simplifies the algorithm development process and can be seen as an integration of sequential pseudocode and logical algorithms. This observation requires some changes to the algorithm development process. We propose a two-step approach: the first step is to define an algorithm for a task T via a set of agents that can collectively perform T; the second step is to translate these agents into (higher-order) computability logic.
    Download PDF (74K)
Regular Section
  • Rongchun LI, Yong DOU, Jie ZHOU, Chen CHEN
    Article type: PAPER
    Subject area: Digital Signal Processing
    2014 Volume E97.A Issue 6 Pages 1388-1395
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    The parallel interference cancellation (PIC) multiple-input multiple-output (MIMO) detection algorithm has bit error ratio (BER) performance comparable to that of the maximum likelihood (ML) algorithm, but with complexity close to that of simple linear detection algorithms such as zero forcing (ZF), minimum mean squared error (MMSE), and successive interference cancellation (SIC). However, the throughput of a PIC MIMO detector on a central processing unit (CPU) cannot meet the requirements of wireless protocols. In order to reach the throughput required by the standards, the graphics processing unit (GPU) is exploited in this paper as the modem processor to accelerate the PIC MIMO detector. The parallelism of the PIC algorithm is analyzed, and a two-stage PIC detection is carefully developed to efficiently match the multi-core architecture. Several optimization methods are employed to enhance the throughput, such as memory optimization and asynchronous data transfer. The experiments show that our MIMO detector has excellent BER performance and a peak throughput of 337.84 megabits per second (Mbps), about 7x to 16x faster than a CPU implementation with SSE2 optimization. The implemented MIMO detector also achieves better computing throughput than recent GPU-based implementations.
    Download PDF (1564K)
  • Daisaburo YOSHIOKA, Akio TSUNEDA
    Article type: PAPER
    Subject area: Nonlinear Problems
    2014 Volume E97.A Issue 6 Pages 1396-1404
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    Since substitution boxes (S-boxes) are the only nonlinear portion of most block ciphers, the design of cryptographically strong and low-complexity S-boxes is of great importance in cryptosystems. In this paper, a new kind of S-box obtained by iterating a discretized piecewise linear map is proposed. The S-box can be implemented efficiently in both software and hardware. Moreover, the results of performance tests show that the proposed S-box has good cryptographic properties.
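    As a rough illustration of the idea (the specific map, its parameters, and the number of iterations below are editorial assumptions, not the construction of the paper), one can iterate a discretized piecewise linear map over {0,...,255} to fill a lookup table; a real design must additionally guarantee that the table is a bijection and passes the cryptographic tests mentioned in the abstract:

      def skew_tent(x, a, m=256):
          """One step of a naively discretized skew tent map on {0, ..., m-1}."""
          return (m * x) // a if x < a else (m * (m - 1 - x)) // (m - a)

      def build_sbox(a=97, rounds=7, m=256):
          """Iterate the map `rounds` times from every input value to form a table."""
          sbox = []
          for x in range(m):
              y = x
              for _ in range(rounds):
                  y = skew_tent(y, a, m)
              sbox.append(y)
          return sbox

      sbox = build_sbox()
      # This naive discretization is not guaranteed to be invertible, so the number
      # of distinct outputs may fall short of 256; a usable S-box must be a bijection.
      print(len(set(sbox)))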
    Download PDF (829K)
  • Jian LIU, Lusheng CHEN, Xuan GUANG
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2014 Volume E97.A Issue 6 Pages 1405-1417
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In this paper, we provide several methods to construct nonlinear resilient functions with multiple good cryptographic properties, including high nonlinearity, high algebraic degree, and non-existence of linear structures. Firstly, we present an improvement on a known construction of resilient S-boxes such that the nonlinearity and the algebraic degree will become higher in some cases. Then a construction of highly nonlinear t-resilient Boolean functions without linear structures is given, whose algebraic degree achieves n-t-1, which is optimal for n-variable t-resilient Boolean functions. Furthermore, we construct a class of resilient S-boxes without linear structures, which possesses the highest nonlinearity and algebraic degree among all currently known constructions.
    Download PDF (859K)
  • Hiroyuki MIURA, Yasufumi HASHIMOTO, Tsuyoshi TAKAGI
    Article type: PAPER
    Subject area: Cryptography and Information Security
    2014 Volume E97.A Issue 6 Pages 1418-1425
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    It is well known that solving randomly chosen multivariate quadratic equations over a finite field (the MQ-Problem) is NP-hard, and the security of Multivariate Public Key Cryptosystems (MPKCs) is based on the MQ-Problem. However, this problem can be solved efficiently when the number of unknowns n is sufficiently larger than the number of equations m (such systems are called “underdefined”). Indeed, the algorithm by Kipnis et al. (Eurocrypt'99) can solve the MQ-Problem over a finite field of even characteristic in time polynomial in n when n ≥ m(m+1). It is therefore important to estimate the hardness of the MQ-Problem in order to evaluate the security of MPKCs. In this paper, we propose an algorithm that solves the MQ-Problem in time polynomial in n when n ≥ m(m+3)/2, which has a wider applicable range than that of Kipnis et al. We also compare our proposed algorithm with other known algorithms. Moreover, we implemented the algorithm in Magma and solved an MQ-Problem instance with m=28 and n=504 in 78.7 seconds on a common PC.
    Download PDF (903K)
  • Wentao LV, Junfeng WANG, Wenxian YU, Zhen TAN
    Article type: LETTER
    Subject area: Digital Signal Processing
    2014 Volume E97.A Issue 6 Pages 1426-1429
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    In compressed sensing, the design of the measurement matrix is a key task. In order to achieve a more precise reconstruction result, the columns of the measurement matrix should have better orthogonality or linear incoherence. A random matrix, such as a Gaussian random matrix (GRM), is commonly adopted as the measurement matrix. However, the columns of a random matrix are only statistically orthogonal. By substituting an orthogonal basis into the random matrix to construct a semi-random measurement matrix, and by optimizing the mutual coherence between dictionary columns to approach a theoretical lower bound, the linear incoherence of the measurement matrix can be greatly improved. With this optimized measurement matrix, the signal can be reconstructed from its measurements more precisely.
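    The quantity being optimized is the mutual coherence of the measurement matrix, i.e., the largest normalized inner product between distinct columns; the Python sketch below (an editorial illustration, not the construction of the letter) compares the coherence of a Gaussian random matrix with the Welch lower bound it is optimized toward:

      import numpy as np

      def mutual_coherence(A):
          """Largest absolute inner product between distinct normalized columns of A."""
          A = A / np.linalg.norm(A, axis=0, keepdims=True)
          G = np.abs(A.T @ A)
          np.fill_diagonal(G, 0.0)
          return G.max()

      m, n = 32, 128
      rng = np.random.default_rng(0)
      grm = rng.standard_normal((m, n))                 # Gaussian random matrix
      print(mutual_coherence(grm))                      # typically well above the bound
      print(np.sqrt((n - m) / (m * (n - 1))))           # Welch bound, about 0.154 here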
    Download PDF (243K)
  • Hongyu HAN, Daiyuan PENG, Xing LIU
    Article type: LETTER
    Subject area: Coding Theory
    2014 Volume E97.A Issue 6 Pages 1430-1433
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    For frequency hopping spread spectrum communication systems, the average Hamming correlation (AHC) among frequency hopping sequences (FHSs) is an important performance indicator. In this letter, a sufficient and necessary condition for a set of FHSs to have optimal AHC is given. Based on the interleaving technique, a new construction of optimal-AHC FHS sets is also proposed, which generalizes the construction of Chung and Yang. Several optimal-AHC FHS sets with more flexible parameters, not covered in the literature, are obtained by the new construction and are summarized in Table 1 of the letter.
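    The underlying quantity is the periodic Hamming correlation between frequency hopping sequences (the number of hits between one sequence and a cyclic shift of another); the short Python sketch below is an editorial illustration of that measure on a toy sequence, not the construction of the letter:

      def hamming_corr(u, v, tau):
          """Periodic Hamming correlation at shift tau: the number of time slots in
          which u and the cyclically shifted v occupy the same frequency."""
          L = len(u)
          return sum(u[t] == v[(t + tau) % L] for t in range(L))

      u = [0, 1, 2, 0, 2, 1]                                   # toy FHS over 3 frequencies
      print([hamming_corr(u, u, tau) for tau in range(1, 6)])  # [0, 2, 2, 2, 0]
      print(sum(hamming_corr(u, u, tau) for tau in range(1, 6)) / 5)  # average: 1.2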
    Download PDF (91K)
  • Honggyu JUNG, Kwang-Yul KIM, Yoan SHIN
    Article type: LETTER
    Subject area: Communication Theory and Signals
    2014 Volume E97.A Issue 6 Pages 1434-1438
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    We propose a cooperative compressed spectrum sensing scheme for correlated signals in wideband cognitive radio networks. In order to design a reconstruction algorithm that accurately recovers the wideband signals from compressed samples in low SNR (signal-to-noise ratio) environments, we consider the multiple measurement vector model, which exploits a sequence of input signals, and propose a cooperative sparse Bayesian learning algorithm that models the temporal correlation of the input signals. Simulation results show that the proposed scheme outperforms existing compressed sensing algorithms at low SNRs.
    Download PDF (651K)
  • Yunpyo HONG, Juwon BYUN, Youngjo KIM, Jaeseok KIM
    Article type: LETTER
    Subject area: Image
    2014 Volume E97.A Issue 6 Pages 1439-1442
    Published: June 01, 2014
    Released on J-STAGE: June 01, 2014
    JOURNAL RESTRICTED ACCESS
    This letter proposes a pipelined architecture with prediction mode scheduling for high efficiency video coding (HEVC). The increased number of intra prediction modes in HEVC has introduced a new technique named rough mode decision (RMD). This development, however, means that pipeline architectures for H.264 cannot be used directly in HEVC. The proposed scheme executes the RMD and rate-distortion optimization (RDO) processes simultaneously by grouping the intra prediction modes and changing the candidate selection method of the RMD algorithm. The proposed scheme reduces the execution cycles by up to 26% with negligible coding loss.
    Download PDF (707K)