IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Volume E96.A , Issue 6
Special Section on Discrete Mathematics and Its Applications
• Hisashi KOGA
2013 Volume E96.A Issue 6 Pages 1023
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1024-1031
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Periodic-finite-type shifts (PFTs) are sofic shifts which forbid the appearance of finitely many pre-specified words in a periodic manner. The class of PFTs strictly includes the class of shifts of finite type (SFTs). The zeta function of a PFT is a generating function for the number of periodic sequences in the shift. For a general sofic shift, there exists a formula, attributed to Manning and Bowen, which computes the zeta function of the shift from certain auxiliary graphs constructed from a presentation of the shift. In this paper, we derive an alternative formula computable from certain “word-based graphs” constructed from the periodically-forbidden word description of the PFT. The advantages of our formula over the Manning-Bowen formula are discussed.
• Shin-ichi NAKANO, Katsuhisa YAMANAKA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1032-1035
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
A rectangular drawing is a plane drawing of a graph in which every face is a rectangle. Rectangular drawings have applications in floorplans, which may have a huge number of faces, so a compact code to store the drawings is desired. The most compact code for rectangular drawings needs at most 4f-4 bits, where f is the number of inner faces of the drawing. The code stores only the graph structure of rectangular drawings, so the length of each edge is not encoded. A grid rectangular drawing is a rectangular drawing in which each vertex has integer coordinates. To store grid rectangular drawings, we need to store some information on lengths or coordinates. One can store a grid rectangular drawing by the code for rectangular drawings together with the width and height of each inner face. Such a code needs 4f-4+f⌈log W⌉+f⌈log H⌉+o(f)+o(W)+o(H) bits, where W and H are the maximum width and the maximum height of inner faces, respectively. In this paper we design a simple and compact code for grid rectangular drawings. The code needs 4f-4+(f+1)⌈log L⌉+o(f)+o(L) bits for each grid rectangular drawing, where L is the maximum length of edges in the drawing. Note that L ≤ max{W,H} holds. Our encoding and decoding algorithms run in O(f) time.
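As a quick arithmetic illustration of the two code lengths above (ignoring the lower-order o(·) terms), the following sketch compares the face-based code with the proposed edge-length-based code; the concrete values of f, W, H, and L are hypothetical.

```python
import math

def face_based_code_bits(f, W, H):
    # 4f-4 bits for the graph structure plus the width and height of each inner face
    return 4 * f - 4 + f * math.ceil(math.log2(W)) + f * math.ceil(math.log2(H))

def edge_length_code_bits(f, L):
    # 4f-4 bits for the structure plus (f+1) edge lengths of ceil(log L) bits each
    return 4 * f - 4 + (f + 1) * math.ceil(math.log2(L))

# Hypothetical drawing: 100 inner faces, all widths/heights/lengths at most 16.
# Since L <= max{W, H}, the edge-length code is the shorter of the two here.
print(face_based_code_bits(100, 16, 16))  # 1196 bits
print(edge_length_code_bits(100, 16))     # 800 bits
```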
• Masaki KAWABATA, Takao NISHIZEKI
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1036-1043
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Let T be a given tree. Each vertex of T is either a supply vertex or a demand vertex, and is assigned a positive number, called the supply or the demand. Each demand vertex v must be supplied an amount of “power,” equal to the demand of v, from exactly one supply vertex through edges in T. Each edge is assigned a positive number called the capacity. One wishes to partition T into subtrees by deleting edges from T so that each subtree contains exactly one supply vertex whose supply is no less than the sum of all demands in the subtree, and the power flow through each edge is no more than the capacity of the edge. The “partition problem” is a decision problem asking whether T has such a partition. The “maximum partition problem” is an optimization version of the partition problem. In this paper, we give three algorithms for these problems: a linear-time algorithm for the partition problem, a pseudo-polynomial-time algorithm for the maximum partition problem, and a fully polynomial-time approximation scheme (FPTAS) for the maximum partition problem.
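The partition condition above can be expressed as a small checker: given the tree, the supplies, demands, capacities, and a set of deleted edges, verify that every resulting subtree contains exactly one supply vertex covering its total demand and that no remaining edge carries more flow than its capacity. This is only a verifier sketch, not the paper's linear-time partitioning algorithm; the dictionary-based graph encoding is a hypothetical convenience.

```python
from collections import defaultdict

def valid_partition(edges, supply, demand, capacity, deleted):
    """Check the partition condition: after removing `deleted` edges, each
    component must contain exactly one supply vertex whose supply covers the
    component's total demand, and the flow on each remaining edge (the demand
    routed through it) must not exceed that edge's capacity.
    `capacity` may be keyed by either orientation of an edge."""
    kept = [e for e in edges if e not in deleted and (e[1], e[0]) not in deleted]
    adj = defaultdict(list)
    for u, v in kept:
        adj[u].append(v)
        adj[v].append(u)
    vertices = set(supply) | set(demand)
    seen = set()
    for s in vertices:
        if s in seen:
            continue
        comp, stack = [], [s]          # collect the component containing s
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        supplies = [u for u in comp if u in supply]
        if len(supplies) != 1:
            return False
        root = supplies[0]
        if supply[root] < sum(demand.get(u, 0) for u in comp):
            return False

        def subtree_demand(u, parent):
            # flow through edge (parent, u) equals the demand below u
            total = demand.get(u, 0)
            for w in adj[u]:
                if w != parent:
                    flow = subtree_demand(w, u)
                    cap = capacity.get((u, w), capacity.get((w, u)))
                    if flow > cap:
                        raise ValueError("capacity exceeded")
                    total += flow
            return total

        try:
            subtree_demand(root, None)
        except ValueError:
            return False
    return True
```

For example, deleting one edge of a five-vertex path with two supply vertices can yield a valid partition, while deleting nothing leaves two supply vertices in one subtree and fails the check.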
• Tetsuo ASANO, Revant KUMAR
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1044-1050
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Given a binary image I and a threshold t, the size-thresholded binary image I(t) defined by I and t is the binary image obtained after removing all connected components consisting of at most t pixels. This paper presents space-efficient algorithms for computing the size-thresholded binary image of a binary image of n pixels, assuming that the image is stored in a read-only array with random access. There are two cases depending on how large the threshold t is: a relatively large threshold, where t=Ω(√n), and a relatively small threshold, where t=O(√n). In this paper, a new algorithmic framework for the problem is presented. A straightforward algorithm solves the problem in O(n) time and O(n) work space. We propose new algorithms for both of the above cases which compute the size-thresholded binary image of any binary image of n pixels in O(n log n) time using only O(√n) work space.
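The straightforward O(n)-time, O(n)-work-space baseline mentioned above can be sketched as follows: label each 4-connected component of 1-pixels and erase it when its size is at most t. This is the simple baseline, not the paper's O(√n)-work-space algorithm.

```python
from collections import deque

def remove_small_components(image, t):
    """Return the size-thresholded image: every 4-connected component of
    1-pixels with at most t pixels is erased.  Uses O(n) work space."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if image[i][j] == 1 and not seen[i][j]:
                comp, q = [], deque([(i, j)])   # BFS over one component
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) <= t:              # erase small components
                    for y, x in comp:
                        out[y][x] = 0
    return out
```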
• Hirotoshi HONMA, Yoko NAKAJIMA, Haruka AOSHIMA, Shigeru MASUYAMA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1051-1058
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Given a simple connected graph G with n vertices, the spanning tree problem involves finding a tree that connects all the vertices of G. Solutions to this problem have applications in electrical power provision, computer network design, and circuit analysis, among others. It is known that highly efficient sequential or parallel algorithms can be developed by restricting the class of graphs. Circular trapezoid graphs are a proper superclass of trapezoid graphs. In this paper, we propose an O(n) time algorithm for the spanning tree problem on circular trapezoid graphs. Moreover, this algorithm can be implemented in O(log n) time with O(n/log n) processors on the EREW PRAM computation model.
• Ro-Yu WU, Jou-Ming CHANG, An-Hang CHEN, Ming-Tat KO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1059-1065
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
A non-regular tree T with a prescribed branching sequence (s_1, s_2, ..., s_n) is a rooted and ordered tree such that its internal nodes are numbered from 1 to n in preorder and every internal node i in T has s_i children. Recently, Wu et al. (2010) introduced a concise representation called RD-sequences to represent all non-regular trees and proposed a loopless algorithm for generating all non-regular trees in a Gray-code order. In this paper, based on such a Gray-code order, we present efficient ranking and unranking algorithms for non-regular trees with n internal nodes. Moreover, we show that the ranking algorithm and the unranking algorithm run in O(n^2) time and O(n^2 + n·S_{n-1}) time, respectively, provided that a preprocessing taking O(n^2·S_{n-1}) time and space is carried out in advance, where $S_{n-1}=\sum_{i=1}^{n-1}(s_i-1)$.
• Matsuo KONAGAYA, Tetsuo ASANO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1066-1071
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper presents an efficient algorithm for reporting all intersections among n given segments in the plane using a work space of arbitrarily given size. More exactly, given a parameter s between Ω(1) and O(n) specifying the size of the work space, the algorithm reports all segment intersections in roughly O(n^2/√s + K) time using O(s) words of O(log n) bits, where K is the total number of intersecting pairs. The time complexity can be improved to O((n^2/s) log s + K) when the input segments have only a small number of distinct slopes.
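For contrast, the extreme low-memory end of this tradeoff is the classical O(1)-extra-space, O(n^2)-time pairwise test. A minimal sketch using standard orientation tests (a textbook baseline, not the paper's algorithm):

```python
def orient(a, b, c):
    # sign of the cross product (b-a) x (c-a)
    v = (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
    return (v > 0) - (v < 0)

def on_segment(a, b, p):
    # p is collinear with a-b; check it lies inside the bounding box
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(s, t):
    a, b = s
    c, d = t
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    if d1 != d2 and d3 != d4:        # proper crossing
        return True
    if d1 == 0 and on_segment(c, d, a): return True
    if d2 == 0 and on_segment(c, d, b): return True
    if d3 == 0 and on_segment(a, b, c): return True
    if d4 == 0 and on_segment(a, b, d): return True
    return False

def report_intersections(segments):
    # O(1) extra space, O(n^2) time: the opposite end of the tradeoff curve
    return [(i, j) for i in range(len(segments))
                   for j in range(i + 1, len(segments))
                   if segments_intersect(segments[i], segments[j])]
```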
• Tomoko IZUMI, Taisuke IZUMI, Sayaka KAMEI, Fukuhito OOSHITA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1072-1080
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
The gathering problem for anonymous and oblivious mobile robots is one of the fundamental problems in theoretical mobile robotics. We consider the gathering problem in unoriented and anonymous rings, which requires that all robots eventually occupy a common, non-predefined node. Since the gathering problem cannot be solved without giving the robots some additional capability, all previous results assume some capability of the robots, such as agreement on the local view. In this paper, we focus on the multiplicity detection capability. This paper presents a deterministic gathering algorithm with local-weak multiplicity detection, which provides a robot only with information about whether its current node hosts more than one robot or not. This assumption is strictly weaker than those in previous works. Our algorithm achieves gathering from an aperiodic and asymmetric configuration with 2<k<n/2 robots, where n is the number of nodes. We also show that our algorithm is asymptotically time-optimal, i.e., its time complexity is O(n). Interestingly, despite the weaker assumption, it achieves a significant improvement over the previous algorithm, which takes O(kn) time for k robots.
• Ryuichi HARASAWA, Yutaka SUEYOSHI, Aichi KUDO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1081-1087
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
We consider the computation of r-th roots in finite fields. For the computation of square roots (i.e., the case r=2), there are two typical methods: the Tonelli-Shanks method [7],[10] and the Cipolla-Lehmer method [3],[5]. The former method can be extended to the case of r-th roots with r prime, which is called the Adleman-Manders-Miller method [1]. In this paper, we generalize the Cipolla-Lehmer method to the case of r-th roots in F_q with r prime satisfying r|q-1, and provide an efficient computational procedure for our method. Furthermore, we implement our method and the Adleman-Manders-Miller method, and compare the results.
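For the r=2 case mentioned above, the Tonelli-Shanks method can be sketched as follows (a textbook version for prime fields, not the authors' generalized Cipolla-Lehmer procedure):

```python
def tonelli_shanks(n, p):
    """Square root of n modulo an odd prime p (the r = 2 case): returns r
    with r*r ≡ n (mod p), assuming n is a quadratic residue mod p."""
    assert pow(n, (p - 1) // 2, p) == 1, "n must be a quadratic residue"
    # write p-1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # find a quadratic non-residue z by Euler's criterion
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(n, q, p), pow(n, (q + 1) // 2, p)
    while t != 1:
        # find the least i with t^(2^i) = 1
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r
```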
• Atsushi FUJIOKA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1088-1099
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper examines two-pass authenticated key exchange (AKE) protocols that are secure without the NAXOS technique under the gap Diffie-Hellman assumption in the random oracle model: FHMQV [18], KFU1 [21], SMEN- [13], and UP [17]. We introduce two protocols, the biclique DH protocol and the multiplied biclique DH protocol, to analyze the subject protocols, and show that the subject protocols use the multiplied biclique DH protocol as an internal protocol. The biclique DH protocol is secure, whereas the multiplied biclique DH protocol is insecure. We show the relations between the subject protocols from the viewpoint of how they overcome the insecurity of the multiplied biclique DH protocol:

·FHMQV virtually executes two multiplied biclique DH protocols in sequence with the same ephemeral key on two randomized static keys.
·KFU1 executes two multiplied biclique DH protocols in parallel with the same ephemeral key.
·UP is a version of KFU1 in which one of the static public keys is generated with a random oracle.
·SMEN- can be thought of as a combined execution of two multiplied biclique DH protocols.

In addition, this paper provides ways to characterize AKE protocols and defines two parameters: one consists of the number of static keys, the number of ephemeral keys, and the number of shared secrets, and the other is defined as the total sum of these numbers. When an AKE protocol is constructed over some group, these two parameters indicate the number of group elements involved, i.e., they are related to the sizes of the storage and the communication data.
• Manh Ha NGUYEN, Kenji YASUNAGA, Keisuke TANAKA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1100-1111
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
We consider the problem of constructing public-key encryption (PKE) schemes that are resilient to a-posteriori chosen-ciphertext and key-leakage attacks (LR-CCA2). In CRYPTO'09, Naor and Segev proved that the Naor-Yung generic construction of PKE secure against chosen-ciphertext attacks (CCA2) is also secure against key-leakage attacks. They also presented a variant of the Cramer-Shoup cryptosystem, and showed that this PKE scheme is LR-CCA2-secure under the decisional Diffie-Hellman assumption. In this paper, we apply the generic construction of “Universal Hash Proofs and a Paradigm for Adaptive Chosen Ciphertext Secure Public-Key Encryption” (EUROCRYPT'02) to generalize the above work of Naor and Segev. Compared to the first construction of Naor and Segev, ours is more efficient because it does not use simulation-sound NIZK. We also extend it to stateful PKE schemes. Concretely, we present the notion of the LR-CCA2 attack in the case of stateful PKE, and a generic construction of stateful PKE that is secure against this attack.
• Kazuki YONEYAMA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1112-1123
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, we propose a generic construction of one-round attribute-based (implicitly) authenticated key exchange (ABAKE). The construction is based on a chosen-ciphertext (CCA) secure attribute-based KEM and the decisional Diffie-Hellman (DDH) assumption. If an underlying attribute-based KEM scheme allows expressive access controls and is secure in the standard model (StdM), an instantiated ABAKE scheme also achieves them. Our scheme enjoys the best of both worlds: efficiency and security. The number of rounds is one (optimal), whereas the known secure scheme in the StdM is not a one-round protocol. Our scheme is comparable in communication complexity to the most efficient known scheme, which is not proved in the StdM. Also, our scheme is proved to satisfy security against advanced attacks such as key compromise impersonation.
• Kazuki YONEYAMA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1124-1138
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Forward secrecy (FS) is a central security requirement of authenticated key exchange (AKE). In particular, strong FS (sFS) is desirable because it guarantees security against a very realistic attack scenario in which an adversary is allowed to be active in the target session. However, most AKE schemes cannot achieve sFS, and the currently known schemes with sFS are only proved in the random oracle model. In this paper, we propose a generic construction of an AKE protocol with sFS in the standard model against a constrained adversary. The constraint is that session-specific intermediate computation results (i.e., the session state) cannot be revealed to the adversary; this constraint is shown to be unavoidable for achieving sFS by Boyd and González Nieto. However, our scheme maintains weak FS (wFS) if the session state is available to the adversary. Thus, our scheme satisfies one of the strongest security definitions, the CK+ model, which includes wFS and session-state reveal. The main idea for achieving sFS is to use a signcryption KEM, whereas the previous CK+-secure construction uses an ordinary KEM. We show a possible instantiation of our construction from Diffie-Hellman problems.
• Atsushi FUJIOKA, Fumitaka HOSHINO, Tetsutaro KOBAYASHI, Koutarou SUZUK ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1139-1155
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, we propose an identity-based authenticated key exchange (ID-AKE) protocol that is secure in the identity-based extended Canetti-Krawczyk (id-eCK) model, in the random oracle model, under the gap Bilinear Diffie-Hellman assumption. The proposed ID-AKE protocol is the most efficient among the existing id-eCK-secure ID-AKE protocols, and it can be extended for use with asymmetric pairings.
• Yusuke SAKAI, Keita EMURA, Goichiro HANAOKA, Yutaka KAWAI, Kazumasa OM ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1156-1168
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper proposes methods for “restricting the message space” of public-key encryption by allowing a third party to verify whether a given ciphertext encrypts some message that has previously been specified as “bad” (or “problematic”). Public-key encryption schemes are normally designed not to leak even partial information about encrypted plaintexts, but this can be problematic in some circumstances. This high level of confidentiality could be abused: malicious parties could communicate with each other, or discuss illegal topics, using an ordinary public-key encryption scheme with the help of the public-key infrastructure. This is undesirable considering the public nature of PKI. The primitive of restrictive public-key encryption addresses this situation by allowing a trusted authority to specify a set of “bad” plaintexts, and allowing every third party to detect ciphertexts that encrypt any of the specified “bad” plaintexts. The primitive also provides strong confidentiality (of the indistinguishability type) for any plaintext that is not specified as “bad.” In this way, a third party (possibly a gateway node of the network) can examine whether a ciphertext coming from the network contains allowable content, and only when the ciphertext does not contain a forbidden message does the gateway transfer it to the next node. In this paper, we formalize the above requirements and provide two constructions that satisfy the formalization. The first construction is based on the techniques of Teranishi et al. (IEICE Trans. Fundamentals E92-A, 2009), Boudot (EUROCRYPT 2000), and Nakanishi et al. (IEICE Trans. Fundamentals E93-A, 2010), which were developed in the context of (revocation in) group signatures. The other construction is based on the OR-proof technique.
The first construction performs better when very few messages are specified as bad, while the other does when almost all messages are specified as bad (and only very few messages are allowed to be encrypted).
• Bennian DOU
Type: LETTER
2013 Volume E96.A Issue 6 Pages 1169-1170
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
At Eurocrypt'03, Boneh, Gentry, Lynn and Shacham proposed a pairing-based verifiably encrypted signature scheme (the BGLS-VES scheme). In 2004, Hess mounted an efficient rogue-key attack on the BGLS-VES scheme in the plain public-key model. In this letter, we show that the BGLS-VES scheme is not secure in the proof of possession (POP) model.
• Bennian DOU
Type: LETTER
2013 Volume E96.A Issue 6 Pages 1171-1172
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In 2004, Menezes and Smart posed the open problem of whether there exists a realistic scenario in which message and key substitution (MKS) attacks can have damaging consequences. In this letter, we show that MKS attacks can have damaging consequences in practice, by pointing out that a verifiably encrypted signature (VES) scheme is not opaque if MKS attacks are possible.
Special Section on Circuit, System, and Computer Technologies
• Qi-Wei GE
2013 Volume E96.A Issue 6 Pages 1173
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
• Wei ZHONG, Song CHEN, Bo HUANG, Takeshi YOSHIMURA, Satoshi GOTO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1174-1184
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Application-Specific Networks-on-Chip (ASNoCs) have been proposed as a more promising solution than regular NoCs to the global communication challenges of particular applications in nanoscale System-on-Chip (SoC) designs. In ASNoC design, one of the key challenges is to generate the most suitable and power-efficient NoC topology under the constraints of the application specification. In this work, we present a two-step floorplanning (TSF) algorithm that integrates topology synthesis into the floorplanning phase to automate the synthesis of such ASNoC topologies. In the first-step floorplanning, during simulated annealing, we explore the optimal positions and clustering of cores and implement an incremental path allocation algorithm to predictively evaluate the power consumption of the generated NoC topology. In the second-step floorplanning, we explore the optimal positions of switches and network interfaces on the floorplan. A power- and timing-aware path allocation algorithm is also integrated into this step to determine the connectivity across different switches. Experimental results on a variety of benchmarks show that our algorithm produces greatly improved solutions over the latest works.
• Yongqing HUO, Fan YANG, Vincent BROST, Bo GU
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1185-1194
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Due to the growing popularity of High Dynamic Range (HDR) images and HDR displays, a large number of existing Low Dynamic Range (LDR) images need to be converted to HDR format to benefit from HDR's advantages, which has given rise to several LDR-to-HDR algorithms. Most of these algorithms apply special treatment to overexposed areas during expansion, which can make the image quality worse than before processing and introduce artifacts. To avoid these problems, we present a new LDR-to-HDR approach that, unlike existing techniques, avoids sophisticated treatment of overexposed areas in the dynamic range expansion step. Based on a separation principle, firstly, according to the common types of overexposure, the overexposed areas are classified into two categories, which are removed and corrected, respectively, by two kinds of techniques. Secondly, to maintain color consistency, color recovery is applied to the preprocessed images. Finally, the LDR image is expanded to HDR. Experiments show that the proposed approach performs well and the produced images are more favorable and suitable for applications. The image quality metric also shows that we can reveal more details without introducing the artifacts caused by other algorithms.
• Xinwei XUE, Xin JIN, Chenyuan ZHANG, Satoshi GOTO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1195-1203
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Adverse weather, such as rain or snow, can cause difficulties in the processing of video streams. Because the appearance of raindrops can affect the performance of human tracking and reduce the efficiency of video compression, the detection and removal of rain is a challenging problem in outdoor surveillance systems. In this paper, we propose a new algorithm for rain detection and removal based on both spatial and wavelet domain features. Our system involves fewer frames during detection and removal, and is robust to moving objects in the rain. Experimental results demonstrate that the proposed algorithm outperforms existing approaches in terms of subjective and objective quality.
• Jiu XU, Ning JIANG, Satoshi GOTO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1204-1213
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, a novel feature named bidirectional local template patterns (B-LTP) is proposed for pedestrian detection in still images. B-LTP is a combination and modification of two features: histograms of templates (HOT) and center-symmetric local binary patterns (CS-LBP). For each pixel, B-LTP defines four templates, each of which contains the pixel itself and two neighboring center-symmetric pixels. For each template, it then extracts information from the relationships among these three pixels and from the two directional transitions across them. Moreover, because the feature length of B-LTP is small, it consumes less memory and computational power. Experimental results on the INRIA dataset show that the speed and detection rate of our proposed B-LTP feature outperform those of other features such as histograms of oriented gradients (HOG), HOT, and covariance matrices (COV).
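The center-symmetric comparison that CS-LBP contributes to B-LTP can be illustrated on a single 3x3 patch: each of the four center-symmetric neighbor pairs yields one bit. This is a sketch of CS-LBP itself, not of the full B-LTP feature, and the clockwise neighbor ordering is an assumption.

```python
def cs_lbp(patch, threshold=0):
    """CS-LBP code at the center of a 3x3 patch: compare the four
    center-symmetric neighbor pairs, one bit per pair."""
    # eight neighbors, clockwise from the top-left corner
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > threshold:   # neighbor vs. its opposite
            code |= 1 << i
    return code  # a 4-bit code: 16 possible values per pixel
```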
• Mitsuji MUNEYASU, Hiroshi KUDO, Takafumi SHONO, Yoshiko HANADA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1214-1221
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, we propose an improved data embedding and extraction method for information retrieval that considers the use of mobile devices. Although the conventional method has demonstrated good results for images captured by cellular phones, some problems remain. One problem is that the conventional code grouping method does not consider how the code groups are constructed. In this paper, a new construction method for code grouping is proposed, and it is shown that a suitable grouping of the codes can be found. Another problem is the lens distortion correction method, which is time-consuming. Therefore, to improve the processing speed, the golden section search method is adopted to estimate the distortion coefficients. In addition, a new tuning algorithm for the gain coefficient in the embedding process is also proposed. Experimental results show an increase in the detection rate for embedded data and a reduction in the processing time.
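Golden section search, as adopted above for estimating the distortion coefficients, can be sketched for a generic unimodal 1-D objective; the quadratic objective in the test is a hypothetical stand-in for the actual distortion-fitting error.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]:
    a derivative-free 1-D search needing one new evaluation per step."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ≈ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                           # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                 # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2
```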
• Hsuan-Chun LIAO, Mochamad ASRI, Tsuyoshi ISSHIKI, Dongju LI, Hiroaki K ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1222-1235
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
The image processing engine is crucial for generating high-quality images in video systems. Since an Application-Specific Integrated Circuit (ASIC) is dedicated to specific standards, an Application-Specific Instruction-set Processor (ASIP), which provides high flexibility together with high performance, has advantages in supporting nonstandard pre/post image processing in video systems. In our previous work, we designed ASIPs that can perform several image processing algorithms with a reconfigurable datapath. Such an ASIP is as efficient as a DSP, but its area is considerably smaller. As the resolution of images and the complexity of processing increase, the performance requirements also increase accordingly. In this paper, we present a novel multi-ASIP-based image processing unit (IPU) which can provide sufficient performance for emerging very-high-resolution applications. To provide a high-performance image processing engine, we propose several new techniques and architectural features: a multi block-pipes architecture, pixel direct transmission, and boundary pixel write-through. The multi block-pipes architecture has flexible scalability, supporting a wide range of resolutions from low to high. The boundary pixel write-through technique provides highly efficient parallel processing, and the pixel direct transmission technique is implemented in each ASIP to further reduce the data transmission time. Cycle-accurate SystemC simulations are performed, and the experimental results show that the maximum bandwidth of the proposed communication approach reaches up to 1580Mbyte/s at 400MHz. Moreover, the communication overhead can be reduced by up to 88% compared to our previous works.
• Qian ZHAO, Yukikazu NAKAMOTO, Shimpei YAMADA, Koutaro YAMAMURA, Makoto ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1236-1244
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Wireless sensor nodes are becoming more and more common in various settings and require a long battery life for better maintainability. Since most sensor nodes are powered by batteries, energy efficiency is a critical problem. In an experiment, we observed that when peak power consumption is high, the battery voltage drops quickly, and the sensor stops working even though some useful charge remains in the battery. We propose three off-line algorithms that extend battery life by scheduling the sensors' execution times so as to reduce peak power consumption as much as possible under a deadline constraint. We also developed a simulator to evaluate the effectiveness of these algorithms. The simulation results showed that one of the three algorithms can dramatically extend battery life, to approximately three times as long as with simultaneous sensor activation.
• Shenchuan LIU, Masaaki FUJIYOSHI, Hitoshi KIYA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1245-1252
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper introduces amplitude-only images to image trading systems in which not only the copyright of images but also the privacy of consumers is protected. In the latest framework for image trading systems, an image is divided into an unrecognizable piece and a recognizable but distorted piece to simultaneously protect the privacy of the consumer and the copyright of the image. The proposed scheme uses amplitude-only images, which are completely unrecognizable, as the former piece, whereas the conventional schemes leave recognizable parts in this piece, which degrades privacy protection performance. Moreover, the proposed scheme improves robustness against copyright violation regardless of the digital fingerprinting technique used, because an amplitude-only image is larger than the corresponding piece in the conventional scheme. In addition, since a phase-only image is used as the second piece in the proposed scheme, the consumer can confirm what he/she bought. Experimental results show the effectiveness of the proposed scheme.
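The idea of an amplitude-only piece can be illustrated in one dimension: keep only the magnitude spectrum of a signal and discard the phase, which scrambles the signal while preserving its amplitude spectrum exactly. This is a 1-D analogue built on a naive DFT, not the paper's 2-D image pipeline.

```python
import cmath

def dft(xs):
    n = len(xs)
    return [sum(xs[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(Xs):
    n = len(Xs)
    return [sum(Xs[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)) / n for k in range(n)]

def amplitude_only(signal):
    # keep only the magnitude spectrum (all phases set to zero)
    return [x.real for x in idft([abs(X) for X in dft(signal)])]

def phase_only(signal):
    # keep only the phase spectrum (all magnitudes set to one)
    spec = dft(signal)
    return [x.real for x in idft([X / abs(X) if abs(X) > 1e-12 else 0
                                  for X in spec])]
```

Since the DFT and inverse DFT are exact inverses, the amplitude-only signal has the same magnitude spectrum as the original, yet its sample values no longer resemble the original signal.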
• Lei SUN, Zhenyu LIU, Takeshi IKENAGA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1253-1263
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Scalable Video Coding (SVC) is an extension of H.264/AVC that aims to provide the ability to adapt to heterogeneous networks or requirements. It offers great flexibility for bitstream adaptation in multi-point applications such as videoconferencing. However, transcoding between SVC and AVC is necessary due to the existence of legacy AVC-based systems. The straightforward re-encoding method incurs great computational cost, and delay-sensitive applications like videoconferencing require a much faster transcoding scheme. This paper proposes an ultra-low-delay SVC-to-AVC MGS (Medium-Grain quality Scalability) transcoder for videoconferencing applications. Transcoding is performed purely in the frequency domain with partial decoding/encoding in order to achieve a significant speed-up. Three fast frequency-domain transcoding methods are proposed for macroblocks with different coding modes in non-KEY pictures. KEY pictures are transcoded by reusing the base-layer motion data, and error propagation is confined between KEY pictures. Simulation results show that the proposed transcoder achieves a 38.5× speed-up on average compared with the re-encoding method, while introducing merely 0.71dB BDPSNR coding quality loss for videoconferencing sequences.
• Naoya OKADA, Yuichi NAKAMURA, Shinji KIMURA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1264-1272
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Nonvolatile flip-flops enable leakage power reduction in logic circuits and a quick return from standby mode. However, they have limited write endurance, and their power consumption for writing is larger than that of a conventional D flip-flop (DFF). For this reason, it is important to reduce the number of write operations. Write operations can be reduced by stopping the clock signal to synchronous flip-flops, because write operations are executed only when the clock is applied to the flip-flops. In such clock gating, a method using the Exclusive OR (XOR) of the current value and the new value as the control signal is well known. The XOR-based method is effective, but there are several cases where write operations can be avoided even when the current value and the new value differ. This paper proposes a method to detect such unnecessary write operations based on state transition analysis, and a write control method to save the power consumption of nonvolatile flip-flops. In the method, redundant bits are detected to reduce the number of write operations: if the next state and the outputs do not depend on some current bit, the bit is redundant and need not be written. The method is based on Binary Decision Diagram (BDD) calculation. We construct write control circuits that stop the clock signal by converting BDDs representing the set of states in which write operations are unnecessary. The proposed method can be combined with the XOR-based method to reduce the total number of write operations. We apply the combined method to some benchmark circuits and estimate the power consumption with Synopsys NanoSim. On average, power consumption is reduced by 15.0% compared with the XOR-based method alone.
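The well-known XOR-based gating baseline mentioned above can be sketched as a write counter over sampled flip-flop traces: a write is issued only when the current and next values differ. This is illustrative of the XOR baseline only; the paper's BDD-based method additionally suppresses some of these writes.

```python
def writes_with_xor_gating(traces):
    """Count flip-flop writes when the clock is gated by
    XOR(current value, next value): a write occurs only on a change.
    `traces` maps each flip-flop name to its sequence of sampled values."""
    writes = 0
    for values in traces.values():
        cur = values[0]
        for nxt in values[1:]:
            if nxt != cur:       # XOR of current and next value is 1
                writes += 1
                cur = nxt
    return writes
```

With ungated clocks, every flip-flop would be written on every clock edge; the XOR gate reduces this to the number of actual value changes.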
• Sanchuan GUO, Zhenyu LIU, Guohong LI, Takeshi IKENAGA, Dongsheng WANG
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1273-1282
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
An H.264 video codec system requires a large-capacity, high-bandwidth Frame Store (FS) for buffering reference frames. Up-to-date three-dimensional (3D) stacked Phase-change Random Access Memory (PRAM) is a promising approach for caching the reference signals on chip, since 3D stacking offers high memory bandwidth while PRAM offers high density and low leakage power. However, the write endurance problem, namely that a PRAM cell can tolerate only a limited number of write operations, is the main barrier in practical applications. This paper studies wear-reduction techniques for a PRAM-based FS in an H.264 codec system. On the basis of rate-distortion theory, content-oriented selective-writing mechanisms are proposed to reduce bit updates in the reference frame buffers. With the proposed control parameter a, our methods make a quantitative trade-off between quality degradation and PRAM lifetime prolongation. Specifically, for a in the range [0.2, 2], experimental results demonstrate that our methods save 29.9-35.5% of the bit-wise write operations on average and reduce power by 52-57%, at the cost of a 12.95-20.57% BDBR bit-rate increase.
• Masashi TAWADA, Masao YANAGISAWA, Nozomu TOGAWA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1283-1292
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Recently, multi-core processors have come into frequent use in embedded systems. Since the application programs running on an embedded system are quite limited, there must exist an optimal cache memory configuration in terms of power and area. Simulating the application programs on various cache configurations is one of the best ways to determine the optimal one. Multi-core cache configuration simulation, however, is much more complicated and time-consuming than single-core simulation. In this paper, we propose a very fast dual-core L1 cache configuration simulation algorithm. We first propose a new data structure in which a single structure represents two or more multi-core cache configurations with different cache associativities. We then propose a new multi-core cache configuration simulation algorithm that uses this data structure together with new theorems. Experimental results demonstrate that our algorithm obtains exact simulation results while running 20 times faster than a conventional approach.
• Guohong LI, Zhenyu LIU, Sanchuan GUO, Dongsheng WANG
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1293-1305
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
As the number of cores and the working sets of parallel workloads increase, shared L2 caches exhibit fewer misses than private L2 caches by making better use of the total available cache capacity, but they induce higher overall L1 miss latencies because of the longer average distance between the requestor and the home node and the potential congestion at certain nodes. We observed that there is a high probability that the target data of an L1 miss resides in the L1 cache of a neighbor node; in such cases, the long-distance access to the home node can be avoided. To leverage this property, we propose Bayesian-theory-based Adaptive Proximity Data Accessing (APDA). In our proposal, we organize the multi-core into clusters of 2×2 nodes and introduce the Proximity Data Prober (PDP) to detect whether an L1 miss can be served by one of the cluster's L1 caches. Furthermore, we devise the Bayesian Decision Classifier (BDC) to adaptively select the remote L2 cache or a neighboring L1 node as the server according to the minimum miss cost. We evaluate this approach on a 64-node multi-core using the SPLASH-2 and PARSEC benchmarks and find that APDA reduces execution time by 20% and energy by 14% compared with a standard multi-core with a shared L2. The experimental results demonstrate that our proposal outperforms state-of-the-art mechanisms such as ASR, DCC and R-NUCA.
• Wenxin YU, Weichen WANG, Minghui WANG, Satoshi GOTO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1306-1314
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Multi-view video can provide users with three-dimensional (3-D) and virtual-reality perception through multiple viewing angles. In recent years, depth image-based rendering (DIBR) has been widely used to synthesize virtual-view images in free-viewpoint television (FTV) and 3-D video. To conceal the zero-region more accurately and improve the quality of a synthesized virtual-view frame, an integrated hole-filling algorithm for view synthesis is proposed in this paper. The proposed algorithm has five parts: an algorithm for distinguishing different regions, foreground and background boundary detection, texture-image isophote detection, a textural and structural isophote prediction algorithm, and an in-painting algorithm with a gradient priority order. Based on the texture isophote prediction with a geometrical principle and the in-painting algorithm with a gradient priority order, the boundary information of the foreground becomes considerably clearer, and the texture information in the zero-region can be concealed much more accurately than in previous works. Perceived quality depends mainly on the distortion of structural information; experimental results indicate that the proposed algorithm considerably improves not only the objective quality of the virtual image but also its subjective quality, as confirmed by subjective viewing results. In particular, the algorithm preserves the boundary contours of the foreground objects and the textural and structural information.
• Hyunduk KIM, Sang-Heon LEE, Myoung-Kyu SOHN, Dong-Ju KIM, Byungmin KIM
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1315-1322
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Super resolution (SR) reconstruction is the process of fusing a sequence of low-resolution images into one high-resolution image. Many researchers have introduced various SR reconstruction methods. However, these traditional methods are limited in the extent to which they can recover high-frequency information. Moreover, owing to the self-similarity of face images, most facial SR algorithms are machine-learning based. In this paper, we introduce a facial SR algorithm that combines learning-based and regularized SR image reconstruction. Our approach involves two main ideas. First, we employ separated frequency components to reconstruct high-resolution images. In addition, we separate the regions of the training face images. These approaches help to recover high-frequency information. Our experiments demonstrate the effectiveness of these ideas.
• Wannida SAE-TANG, Masaaki FUJIYOSHI, Hitoshi KIYA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1323-1330
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, 1) it is shown that amplitude-only images (AOIs) have quite wide intensity ranges (IRs), and 2) an IR reduction method for AOIs is proposed. An AOI is the inverse transform of the amplitude spectrum of an image, and it is used in privacy- and copyright-protected image trading systems because of its invisibility. Since an AOI is the coherent summation of cosine waves with the same phase, its IR is too large to be stored and/or transmitted conveniently. In the proposed method, random signs are applied to the discrete-Fourier-transformed amplitude coefficients to obtain AOIs with significantly lower IRs, without distortion and while keeping the images invisible. With reasonable processing time, the proposed method with a linear quantizer obtains high correct watermark-extraction rates, inversely quantized AOIs with low mean squared errors, and reconstructed images with high peak signal-to-noise ratios.
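A minimal numerical sketch of the random-sign idea (illustrative parameters, not the paper's exact scheme): inverse-transforming the raw amplitude spectrum yields a huge coherent peak at the origin, while randomizing the signs of the amplitude coefficients breaks that summation and shrinks the intensity range.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Plain AOI: inverse DFT of the amplitude spectrum (phase discarded).
# All cosine components add in phase at the origin, so the range is huge.
amp = np.abs(np.fft.fft2(img))
aoi_plain = np.real(np.fft.ifft2(amp))

# Random signs applied to the amplitude coefficients before the inverse
# DFT break the coherent summation. (A faithful scheme would use a
# Hermitian-symmetric sign pattern so the result stays exactly real;
# here we simply take the real part for illustration.)
signs = rng.choice([-1.0, 1.0], size=amp.shape)
aoi_signed = np.real(np.fft.ifft2(amp * signs))

print(np.ptp(aoi_plain), np.ptp(aoi_signed))  # the signed AOI has a far smaller range
```

Since the signs have magnitude one, the amplitude information itself is preserved; only the intensity range of the spatial-domain AOI changes.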
• Satoshi TAOKA, Daisuke TAKAFUJI, Toshimasa WATANABE
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1331-1339
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
A vertex cover of a given graph G=(V,E) is a subset N of V such that N contains u or v for every edge (u,v) of E. The minimum weight vertex cover problem (MWVC for short) is the problem of finding, for a graph G=(V,E) with a weight w(v) on each vertex v of V, a vertex cover N such that the sum w(N) of w(v) over all v of N is minimum. In this paper, we consider MWVC in which w(v) is a positive integer for every v of V. We propose simple procedures to be used as postprocessing for MWVC algorithms. Furthermore, five existing approximation algorithms, with and without the proposed procedures incorporated, are implemented and evaluated through computational experiments.
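As a concrete reference point for the kind of approximation algorithms being compared, here is the classic pricing-based (local-ratio) 2-approximation for weighted vertex cover; this is a textbook illustration, not one of the paper's five implemented algorithms or its postprocessing procedures:

```python
def weighted_vertex_cover(edges, weight):
    """Pricing-based 2-approximation for minimum weight vertex cover.

    edges: iterable of (u, v) pairs; weight: dict vertex -> positive int.
    Each uncovered edge pays a price charged to both endpoints; a vertex
    whose weight is fully paid ("tight") enters the cover.
    """
    residual = dict(weight)          # remaining unpaid weight per vertex
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue                 # edge already covered
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover
```

Every edge ends up with a covered endpoint, and the total weight of the returned cover is at most twice the optimum.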
• Shogo FUJITA, Leonardo LANANTE Jr., Yuhei NAGAO, Masayuki KUROSAKI, Hi ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1340-1347
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper, we propose a modified Tomlinson-Harashima precoding (THP) method with only a small increase in computational complexity for the multi-user MIMO downlink system. The proposed THP scheme minimizes the influence of noise enhancement at the receivers by placing square-root diagonal weighting filters at both the transmitter and the receiver sides. Compared with previously proposed non-linear precoding methods, including vector perturbation (VP), the proposed THP achieves good BER performance. Furthermore, we show that the proposed THP method can be implemented with lower computational complexity than the existing modified THP and VP schemes in the literature.
• Syota KUWABARA, Yukihide KOHIRA, Yasuhiro TAKASHIMA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1348-1356
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In recent LSI design, it is difficult to obtain a placement that satisfies both design constraints and specifications because of the increase in circuit size, the progress of manufacturing technology, and the speed-up of circuit performance. Analytical placement methods are promising for obtaining such a placement. Although existing analytical placement methods obtain placements with short wire length, the obtained placements contain overlaps. In this paper, we propose the Overlap Removable Area as an overlap evaluation measure for analytical placement. Experiments show that the proposed evaluation measure is effective for removing overlaps in analytical placement.
• Jienan ZHANG, Shouyi YIN, Peng OUYANG, Leibo LIU, Shaojun WEI
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1357-1365
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In this paper we propose a method that uses the features of an individual object to locate and recognize that object in a static image, based on multi-feature fusion over a multiple-object sample library. The method is motivated by the observation that much previous work focuses on category recognition and exploits characteristics common to a category to detect its presence. However, such algorithms cease to be effective when we search for individual objects, rather than categories, against a complex background. To solve this problem, we abandon the concept of category and use the features of an individual object directly as cues for detection and recognition. In our system, we introduce a multi-feature fusion method based on a colour histogram and the prominent SIFT (p-SIFT) feature to improve the detection and recognition accuracy. The p-SIFT feature, which we propose, is an improved SIFT feature obtained by further extracting correlation information based on a Feature Matrix, aiming at low computational complexity with a good matching rate. In the object detection process, we depart from conventional methods and make full use of multiple features, starting with a simple but effective step: using the colour feature to reduce the number of patches of interest (POIs). Our method is evaluated on several publicly available datasets, including the Pascal VOC 2005 dataset, Objects101 and the datasets provided by Achanta et al.
• Muchen LI, Jinjia ZHOU, Dajiang ZHOU, Xiao PENG, Satoshi GOTO
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1366-1375
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
As the successor to the H.264/AVC video compression standard, High Efficiency Video Coding (HEVC) will play an important role in the video coding field. In its deblocking filter, HEVC inherits the basic structure of H.264/AVC and adds some new features. Based on this variation, this paper introduces a novel dual-mode deblocking filter architecture that supports both the HEVC and H.264/AVC standards. For HEVC, the proposed symmetric unified-cross unit (SUCU) based filtering scheme greatly reduces the design complexity; as a result, processing a 16×16 block takes 24 clock cycles. For H.264/AVC, a 16×16 macroblock (MB) takes 48 clock cycles. In synthesis, the proposed architecture occupies a 41.6k equivalent gate count at a frequency of 200 MHz in the SMIC 65 nm library, which satisfies the throughput requirement of super hi-vision (SHV) at 60 fps. With the filter-reusing scheme, the universal design for the two standards saves 30% of the gate count in the filter part compared with dedicated designs. In addition, the total power consumption can be reduced by 57.2% with a skipping mode when edges need not be filtered.
• Takahiro SUZUKI, Takeshi IKENAGA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1376-1383
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
Scale-Invariant Feature Transform (SIFT) has lately attracted attention in computer vision as a robust keypoint detection algorithm that is invariant to scale, rotation and illumination changes. However, its computational complexity is too high for practical real-time applications. This paper proposes a low-complexity keypoint extraction algorithm based on the SIFT descriptor and the utilization of a database, together with its real-time hardware implementation for Full-HD video. The proposed algorithm computes the SIFT descriptor on keypoints obtained by corner detection and selects a scale from the database. The keypoint detection and descriptor computation modules can be parallelized in hardware because, in contrast with SIFT, which must compute a scale, these modules do not depend on each other in the proposed algorithm. The processing time of descriptor computation in this hardware is independent of the number of keypoints because descriptor generation is pipelined at the pixel level. Evaluation results show that the proposed algorithm in software is 12 times faster than SIFT. Moreover, the proposed hardware on an FPGA is 427 times faster than SIFT and 61 times faster than the proposed algorithm in software. The proposed hardware performs keypoint extraction and matching at 60 fps for Full-HD video.
• Tatsuya SAKANUSHI, Jie HU, Kou YAMADA
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1384-1392
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
The simple repetitive control system proposed by Yamada et al. is a type of servomechanism for periodic reference inputs. This system follows a periodic reference input with a small steady-state error, even if there is periodic disturbance or uncertainty in the plant. In addition, simple repetitive control systems ensure that transfer functions from the periodic reference input to the output and from the disturbance to the output have finite numbers of poles. Yamada et al. clarified the parameterization of all stabilizing simple repetitive controllers. Recently, Yamada et al. proposed the parameterization of all stabilizing two-degrees-of-freedom (TDOF) simple repetitive controllers that can specify the input-output characteristic and the disturbance attenuation characteristic separately. However, when using the method of Yamada et al., it is complex to specify the low-pass filter in the internal model for the periodic reference input that specifies the frequency characteristics. This paper extends the results of Yamada et al. and proposes the parameterization of all stabilizing TDOF simple repetitive controllers with specified frequency characteristics in which the low-pass filter can be specified beforehand.
• Peng OUYANG, Shouyi YIN, Hui GAO, Leibo LIU, Shaojun WEI
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1393-1402
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
The Scale Invariant Feature Transform (SIFT) algorithm is an excellent approach to feature detection, characterized by data-intensive computation. Current work on accelerating SIFT falls into three categories: optimizing the parallel parts of the algorithm on general-purpose multi-core processors, designing customized multi-core processors dedicated to SIFT, and implementing it on FPGA platforms. The real-time performance of SIFT has been greatly improved. However, some solutions restrict factors such as the input image size and the numbers of octaves and scale factors, so the flexibility that ensures high execution performance under variable factors should be improved. This paper proposes a reconfigurable solution to this problem. We fully exploit the algorithm and adopt several techniques, such as fully parallel execution, block computation and the CORDIC transformation, to improve execution efficiency on a REconfigurable MUltimedia System called REMUS. Experimental results show that the execution performance of SIFT is improved by 33%, by 50% and by 8 times compared with execution on a multi-core platform, an FPGA and an ASIC, respectively. The dynamic reconfiguration scheme in this work can configure the circuits to meet the computation requirements under different input image sizes and different numbers of octaves and scale factors during computation.
• Jirabhorn CHAIWONGSAI, Werapon CHIRACHARIT, Kosin CHAMNONGTHAI, Yoshik ...
Type: PAPER
2013 Volume E96.A Issue 6 Pages 1403-1411
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper proposes a low-power tone recognition scheme suitable for an automatic tonal speech recognizer (ATSR). The tone recognition estimates the fundamental frequency (F0) only from vowels by using a new magnitude difference function (MDF), called the vowel-MDF; accordingly, the number of operations is considerably reduced. To make the tone recognition applicable to portable electronic equipment, it is designed with a parallel and pipelined architecture, which achieves high throughput and consumes low power. In addition, the architecture can reduce the number of input frames depending on the vowels, making it adaptable to the maximum number of frames. The proposed architecture is evaluated with words selected from voice activation for GPS systems, phone dialing options, and words having the same phonemes but different tones. Compared with the autocorrelation method, the experimental results show a 35.7% reduction in power consumption and a 27.1% improvement in tone recognition accuracy (110 words comprising 187 syllables). Compared with an ATSR without tone recognition, the speech recognition accuracy of the ATSR with tone recognition improves by 25.0% (2,250 training data and 45 testing words).
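The core idea of estimating F0 from a magnitude difference function can be sketched with a plain AMDF; the paper's vowel-MDF additionally restricts the computation to vowel frames, and the parameters below are illustrative only:

```python
import math

def amdf_f0(frame, fs, f0_min=150.0, f0_max=400.0):
    """Estimate F0 via an average magnitude difference function (AMDF).

    The pitch period is the lag at which the frame best matches a
    shifted copy of itself, i.e. where the mean magnitude difference
    is smallest; F0 is the sampling rate divided by that lag.
    """
    n = len(frame)
    lag_lo = int(fs / f0_max)
    lag_hi = min(int(fs / f0_min), n - 1)
    best_lag, best_d = lag_lo, float("inf")
    for lag in range(lag_lo, lag_hi + 1):
        d = sum(abs(frame[i] - frame[i + lag]) for i in range(n - lag)) / (n - lag)
        if d < best_d:
            best_d, best_lag = d, lag
    return fs / best_lag

# Demo: a 200 Hz sine sampled at 8 kHz has an exact 40-sample period.
fs = 8000
frame = [math.sin(2 * math.pi * 200 * i / fs) for i in range(400)]
f0 = amdf_f0(frame, fs)
```

Unlike autocorrelation, the AMDF needs only subtractions and absolute values, which is part of why MDF-style pitch estimators suit low-power hardware.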
Regular Section
• Sinuk KANG, Kil Hyun KWON, Dae Gwan LEE
Type: PAPER
Subject area: Digital Signal Processing
2013 Volume E96.A Issue 6 Pages 1412-1420
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
We present a multi-channel sampling expansion for signals with selectively tiled band-region. From this we derive an oversampling expansion for any bandpass signal, and show that any finitely many missing samples from two-channel oversampling expansion can always be uniquely recovered. In addition, we find a sufficient condition under which some infinitely many missing samples can be recovered. Numerical stability of the recovery process is also discussed in terms of the oversampling rate and distribution of the missing samples.
• Yoshihiro AKEBOSHI, Seiichi SAITO, Hideyuki OHASHI
Type: PAPER
Subject area: Analog Signal Processing
2013 Volume E96.A Issue 6 Pages 1421-1428
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In the fields of Factory Automation (FA), process control, and Supervisory Control and Data Acquisition (SCADA), analog data acquisition systems using isolation transformers are commonly used to measure and record analog signals through isolated inputs. To improve the input precision of such a system, this paper proposes circuit techniques and a design method for the analog front-end circuit with signal transformers. A circuit technique is employed to compensate for the droop of the pulse signal caused by the characteristics of the signal transformer. In addition, a numerical analysis of a non-linear circuit equation, which represents the core-saturation behavior of the signal transformer, is performed to determine the circuit parameters. Using a small signal transformer developed specifically for this acquisition system, the linearity error is experimentally confirmed to be within +0.0204%/-0.0215%.
• Sangsu YEH, Sangchul WON
Type: PAPER
Subject area: Systems and Control
2013 Volume E96.A Issue 6 Pages 1429-1436
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
This paper presents the stability analysis for continuous-time Takagi-Sugeno fuzzy systems using a fuzzy Lyapunov function. The proposed fuzzy Lyapunov function involves the time derivatives of states to include new free matrices in the LMI stability conditions. These free matrices extend the solution space for Linear Matrix Inequalities (LMIs) problems. Numerical examples illustrate the effectiveness of the proposed methods.
• Hui WANG, Martin HELL, Thomas JOHANSSON, Martin ÅGREN
Type: PAPER
Subject area: Cryptography and Information Security
2013 Volume E96.A Issue 6 Pages 1437-1444
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
BEAN is a newly proposed lightweight stream cipher adopting Fibonacci FCSRs. It is designed for very constrained environments and aims to provide a balance between security, efficiency and cost. A weakness in BEAN was first found by Ågren and Hell in 2011, resulting in a key recovery attack slightly better than brute force. In this paper, we present new correlations between state and keystream with a large statistical advantage, leading to a much more efficient key recovery attack. The time and data complexities of this attack are 2^57.53 and 2^59.94, respectively. Moreover, two new output functions are provided as alternatives; they are more efficient than the function used in BEAN and are immune to all attacks proposed on the cipher. Suggestions for improving the FCSRs are also given.
• Xing LIU, Daiyuan PENG, Xianhua NIU, Fang LIU
Type: PAPER
Subject area: Spread Spectrum Technologies and Applications
2013 Volume E96.A Issue 6 Pages 1445-1450
Published: June 01, 2013
Released: June 01, 2013
JOURNALS RESTRICTED ACCESS
In order to evaluate the goodness of a frequency hopping (FH) sequence design, the periodic Hamming correlation function is used as an important measure. However, it is the aperiodic Hamming correlation of FH sequences that matters in real applications, yet it has received little attention in the literature compared with the periodic Hamming correlation. In this paper, new aperiodic Hamming correlation lower bounds for FH sequences are established with respect to the size of the frequency slot set, the sequence length, the family size, the maximum aperiodic Hamming autocorrelation and the maximum aperiodic Hamming crosscorrelation. The new aperiodic bounds are tighter than the Peng-Fan bounds. In addition, unlike the Peng-Fan bounds, the new bounds include the second powers of the maximum aperiodic Hamming autocorrelation and crosscorrelation. For a given sequence length, family size and frequency slot set size, the values of the maximum aperiodic Hamming autocorrelation and crosscorrelation lie inside an ellipse given by the new aperiodic bounds.
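For readers unfamiliar with the distinction, the aperiodic Hamming correlation counts frequency-slot coincidences only over the overlapping part of the shifted sequences, without the wrap-around used in the periodic version. A minimal sketch over a toy FH sequence (illustrative, not from the paper):

```python
def aperiodic_hamming(x, y, tau):
    """Aperiodic Hamming correlation H_{x,y}(tau) for 0 <= tau < len(x):
    count positions where the overlap hits the same frequency slot."""
    return sum(1 for i in range(len(x) - tau) if x[i] == y[i + tau])

def periodic_hamming(x, y, tau):
    """Periodic counterpart: the shift wraps around modulo the length."""
    n = len(x)
    return sum(1 for i in range(n) if x[i] == y[(i + tau) % n])

x = [0, 1, 2, 0]  # toy FH sequence over 3 frequency slots
```

At shift 1 the periodic autocorrelation counts the wrapped pair (x3, x0) while the aperiodic one does not, which is why the two measures obey different bounds.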