-
Hiroyuki SEKI
2010 Volume E93.D Issue 5 Pages 941
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL FREE ACCESS
-
Haruhiko SATO, Masahito KURIHARA
Article type: PAPER
Subject area: Term Rewriting Systems
2010 Volume E93.D Issue 5 Pages 942-952
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Inductive theorem proving plays an important role in the formal verification of systems. Rewriting induction (RI) is a method for inductive theorem proving proposed by Reddy. To obtain successful proofs, it is very important to choose appropriate contexts (such as the direction in which each equation should be oriented) when applying RI inference rules. If the choice is inappropriate, the procedure may diverge, or the user may have to come up with several lemmas to prove together with the main theorem. This motivates the parallel execution of several instances of the rewriting induction procedure, each in charge of a single distinct context, in search of a successful proof. In this paper, we propose a new procedure, called multi-context rewriting induction, which efficiently simulates the parallel execution of rewriting induction procedures in a single process, based on the idea of the multi-completion procedure. Through experiments with a well-known problem set, we discuss the effectiveness of the proposed procedure when searching various contexts for a successful inductive proof.
-
Keita UCHIYAMA, Masahiko SAKAI, Toshiki SAKABE
Article type: PAPER
Subject area: Term Rewriting Systems
2010 Volume E93.D Issue 5 Pages 953-962
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
In this paper, we show that the termination and innermost termination properties are decidable for the class of term rewriting systems (TRSs, for short) all of whose dependency pairs are right-linear and right-shallow. We also show that innermost termination is decidable for the class of TRSs all of whose dependency pairs are shallow. The key observation common to these two classes is as follows: for every TRS in the class, we can construct, using the dependency-pair information, a finite set of terms such that if the TRS is non-terminating then there is a looping sequence beginning with a term in the finite set. This fact is obtained by modifying the analysis of argument propagation in shallow dependency pairs proposed by Wang and Sakai in 2006. A notable benefit is that the resulting procedures do not require any decision procedure for the reachability problem, which is used in Wang's procedure for the shallow case, because the known decidable classes of the reachability problem are no larger than the classes discussed in this paper.
-
Yuki CHIBA, Takahito AOTO, Yoshihito TOYAMA
Article type: PAPER
Subject area: Program Transformation
2010 Volume E93.D Issue 5 Pages 963-973
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Chiba et al. (2006) proposed a framework for program transformation of term rewriting systems by developed templates. In contrast to previous template-based program transformation frameworks built on the lambda calculus, this framework provides a method to verify the correctness of transformations automatically. Tupling (Bird, 1980) is a well-known technique that eliminates redundant recursive calls to improve the efficiency of programs. In Chiba et al.'s framework, however, one cannot use tuple symbols to construct developed templates, so the framework cannot express tupling transformations. In this paper, we propose a more flexible notion of templates so that a wider variety of transformations, including tupling transformations, can be handled.
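As a hedged illustration (not taken from the paper), the classic tupling example eliminates the redundant recursion in the naive Fibonacci definition by returning a pair of adjacent results in a single recursive pass:

```python
def fib_naive(n):
    # Redundant recursion: fib_naive(n - 1) recomputes fib_naive(n - 2),
    # giving exponential running time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_pair(n):
    # Tupling: return (fib(n), fib(n + 1)) in one linear-time recursion.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n - 1)
    return (b, a + b)

def fib(n):
    return fib_pair(n)[0]
```

The tupled version performs one recursive call per level instead of two, which is exactly the kind of redundancy elimination the transformation framework aims to express.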
-
Weiqiang KONG, Kazuhiro OGATA, Kokichi FUTATSUGI
Article type: PAPER
Subject area: Formal Specification
2010 Volume E93.D Issue 5 Pages 974-984
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
System implementations for e-Government initiatives should be reliable. An unreliable implementation could, on the one hand, fail to fulfill basic system requirements and, more seriously on the other, break citizens' trust in their government. The objective of this paper is to advocate the use of formal methods in general, and the OTS/CafeOBJ method in particular, to help develop reliable system implementations for e-Government initiatives. To support this position, we describe an experiment with the OTS/CafeOBJ method on an e-Government messaging framework proposed for providing citizens with seamless public services. We report two previously unclarified problems of the framework and their potential harm, both uncovered in this experiment, and suggest possible revisions to the framework. The revisions are proved to be sufficient to make the framework satisfy certain desired properties.
-
Chul-Joo KIM, Jeong-Han YUN, Seonggun KIM, Kwang-Moo CHOE, Taisook HAN
Article type: PAPER
Subject area: Program Analysis
2010 Volume E93.D Issue 5 Pages 985-993
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Esterel is an imperative synchronous language for control-dominated reactive systems. Despite the imperative features of Esterel, the combination of parallel execution and preemption makes it difficult to build control flow graphs (CFGs) of Esterel programs. Simple and convenient CFGs can help to analyze Esterel programs; however, previous approaches are not suitable for flow analyses of imperative languages. In this work, we present a method for constructing over-approximated CFGs for Pure Esterel. The generated CFGs expose invisible interferences among threads and show program structures explicitly, so they are useful for program analyses based on graph theory or control/data flows.
-
Takeshi NAGAOKA, Kozo OKANO, Shinji KUSUMOTO
Article type: PAPER
Subject area: Model Checking
2010 Volume E93.D Issue 5 Pages 994-1005
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Model checking techniques are useful for the design of highly reliable information systems. The well-known problem of state explosion, however, may occur in model checking of large systems, severely limiting its scalability. To avoid it, several abstraction techniques have been proposed, some of which are based on the CounterExample-Guided Abstraction Refinement (CEGAR) loop technique proposed by E. Clarke et al. This paper proposes a concrete abstraction technique for timed automata used in model checking of real-time systems. Our technique is based on CEGAR, in which a counterexample is used as a guide to refine the abstract model. Although the refinement operation is generally applied to abstract models, our method modifies the original timed automaton and then generates refined abstract models from the modified automaton. This paper gives a formal description of the algorithm and a proof of its correctness.
-
Satoshi HATTORI
Article type: PAPER
Subject area: Software Testing
2010 Volume E93.D Issue 5 Pages 1006-1017
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
We use the computer algebra system Mathematica as a test case generation system. Test case generation generally requires solving equations and inequalities, and the main reason we chose Mathematica is its built-in function for doing so. In this paper, we deal with both black-box and white-box testing. First, we show two black-box test case generation procedures described in Mathematica. The first is based on equivalence partitioning; an advantage of using Mathematica is that it explicitly reports the case in which test cases do not exist. The second procedure is a modification of the first that adopts boundary value analysis, for which we give a formalization. Next, we show a white-box test case generation procedure. For this purpose, we also give a model for source programs, similar to a control flow graph model, and the proposed procedure analyzes a model description of a program.
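As an illustrative sketch in Python rather than Mathematica (the function name and the integer-range setting are our own assumptions, not the paper's formalization), boundary value analysis for a valid input range [low, high] typically yields the values on, just inside, and just outside each boundary:

```python
def boundary_values(low, high):
    # Just-outside, on-boundary, and just-inside values for the
    # integer range [low, high].
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(1, 100))
```

Equivalence partitioning would add one representative value from the interior of each partition; the point of a solver-backed system is that it can also report when a partition's constraints are unsatisfiable.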
-
Hiroaki KIKUCHI
2010 Volume E93.D Issue 5 Pages 1018-1019
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
-
Yang OU, Kyung Hyune RHEE
Article type: INVITED PAPER
2010 Volume E93.D Issue 5 Pages 1020-1030
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Traditional cryptographic hash functions are sensitive to even a one-bit difference in the input message, whereas multimedia data routinely undergo compression or other signal processing operations; this makes cryptographic hashes unsuitable for multimedia authentication. Image hashing, which captures visual essentials for robust image authentication, has emerged recently. In this paper, we give a comprehensive survey of image hashing. We present an overview of various image hashing schemes and discuss their advantages and limitations in terms of security, robustness, and discrimination under different types of operations on the image.
-
Fangming ZHAO, Yoshiaki HORI, Kouichi SAKURAI
Article type: INVITED PAPER
2010 Volume E93.D Issue 5 Pages 1031-1043
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
In a society preoccupied with the gradual erosion of electronic privacy, the loss of privacy in the current Domain Name System is an important issue worth considering. In this paper, we first review the DNS and some of its security and privacy threats, so that average users can appreciate the significance of privacy preservation in DNS protocols. Then, through a careful survey of four existing privacy protection approaches based on noise query generation, we analyze the benefits and limitations of these proposals in terms of both their reported performance evaluation results and theoretical proofs. Finally, we point out some problems that remain open for the research community's continuing efforts.
-
Wen Tao ZHU, Robert H. DENG, Jianying ZHOU, Feng BAO
Article type: INVITED PAPER
2010 Volume E93.D Issue 5 Pages 1044-1052
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
The access privileges in distributed systems can be effectively organized as a partial-order hierarchy of distinct security classes, and access rights are often designated with certain temporal restrictions. The time-bound hierarchical key assignment problem is to assign distinct cryptographic keys to distinct security classes according to their privileges, so that users in a higher class can use their class key to derive the keys of lower classes, with the keys being time-variant with respect to sequentially allocated temporal units called time slots. In this paper, we present the underlying principle, survey the state of the art, and, in particular, examine two representative approaches to time-bound hierarchical key assignment as in-depth case studies.
-
Sunghyun KIM, Heejo LEE
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1053-1061
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Policy in security devices such as firewalls and Network Intrusion Prevention Systems (NIPS) is usually implemented as a sequence of rules, which allows network packets to proceed or be discarded based on each rule's decision. Since attack methods are increasing rapidly, a huge number of security rules are generated and maintained in security devices. Under attack or heavy traffic, a wrongly configured policy creates security holes and prevents the system from quickly deciding whether to allow or deny a packet. Anomalies between rules occur when rules overlap. In this paper, we propose a new method to detect anomalies among rules and to generate new rules free of configuration errors, in multiple security devices as well as in a single device. The proposed method cuts the overlap regions among rules into minimal overlap regions and finds the abnormal domain regions of the rules' predicates. By classifying rules according to network traffic flow, the proposed method not only reduces computation overhead but also blocks unnecessary traffic among distributed devices.
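As a hedged sketch (the rule representation here is our own simplification, not the paper's), two rules can exhibit an anomaly only if they overlap, and they overlap exactly when their ranges intersect in every field:

```python
def overlaps(r1, r2):
    # Rules are lists of per-field (low, high) ranges; two rules
    # overlap iff the ranges intersect in every field.
    return all(max(a1, a2) <= min(b1, b2)
               for (a1, b1), (a2, b2) in zip(r1, r2))

# Hypothetical rules over (src-port, dst-port) ranges.
rule_a = [(1024, 65535), (80, 80)]
rule_b = [(2000, 3000), (80, 1023)]   # intersects rule_a in both fields
rule_c = [(0, 1023), (80, 80)]        # disjoint src-port range
```

An anomaly detector would then compare the actions of each overlapping pair and split the overlap into minimal regions, as the paper describes.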
-
Masana MURASE
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1062-1069
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
This paper presents effective and efficient implementation techniques for eliminating DMA buffer overflows on the Cell Broadband Engine™ (Cell/B.E.) processor. In the Cell/B.E. programming model, application developers manually issue DMA commands to transfer data from the system memory to the local memories of the Cell/B.E. cores. Although this eliminates cache misses and cache invalidation overhead, it requires careful management of the DMA buffer arrays in application programs to prevent DMA buffer overflows. To guard against such overflows, we introduced safe DMA handling functions for applications to use. To minimize the performance overhead of buffer overflow prevention, we used three different optimization techniques that take advantage of SIMD operations: branch-hint-based, jump-table-based, and self-modifying-code-based optimizations. Our optimized implementation prevents all DMA buffer overflows with minimal performance overhead: only a 2.93% average slowdown compared to code without buffer overflow protection.
-
Chao HUANG, Jianling SUN, Xinyu WANG, Di WU
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1070-1079
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
In this paper, we propose an inconsistency resolution method based on a new concept, insecure backtracking role mapping. By analyzing the role graph, we prove that the root cause of security inconsistency in distributed interoperation is the existence of insecure backtracking role mappings. We propose a novel and efficient algorithm that detects the inconsistency by finding all of the insecure backtracking role mappings; it not only reports the existence of inconsistency but also generates the inconsistency information needed for resolution. We reduce the inconsistency resolution problem to the well-known Minimum-Cut problem and, based on the results generated by our detection algorithm, propose an inconsistency resolution algorithm that guarantees the security of distributed interoperation. We demonstrate the effectiveness of our approach through simulated tests and a case study.
-
Kazumasa OMOTE, Kazuhiko KATO
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1080-1086
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
In key-recovery methods using smart cards, a user can recover the disk encryption key in cooperation with the system administrator even if the user has lost the smart card containing the key. In most such methods, however, the disk encryption key is known to the system administrator in advance, so the user's disk data may be read by the administrator. Conversely, if the key is not known to the administrator in advance, key authentication is difficult to achieve.
In this paper, we propose a scheme that makes it possible to recover the disk encryption key when the user's smart card is lost. In our scheme, the disk encryption key is not stored anywhere, so the system administrator cannot know the key before the key-recovery phase. Only someone who has the user's smart card and knows the user's password can decrypt that user's disk data. Furthermore, we measured the processing time required for user authentication in an experimental environment using a virtual machine monitor and found it short enough to be practical.
-
Lei WANG, Kazuo OHTA, Yu SASAKI, Kazuo SAKIYAMA, Noboru KUNIHIRO
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1087-1095
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Many hash-based authentication protocols have been proposed and proven secure under the assumption that the underlying hash functions are secure. On the other hand, if a hash function is compromised, the security of authentication protocols based on it becomes unclear. It is therefore important to verify the security of hash-based protocols when a hash function is broken.
In this paper, we re-evaluate the security of two MD5-based authentication protocols, based on the fact that MD5 cannot satisfy a required fundamental property named collision resistance. The target protocols are APOP (Authenticated Post Office Protocol) and NMAC (Nested Message Authentication Code), since they or their variants are widely used in the real world. For the security evaluation of APOP, we propose a modified password recovery attack procedure that is twice as fast as previous attacks. Moreover, our attack is more realistic, as its probability of being detected is lower than that of previous attacks. For the security evaluation of MD5-based NMAC, we propose a new key-recovery attack procedure with a complexity of 2^76, lower than the 2^100 of the previous attack.
Moreover, our attack has another interesting point: NMAC has two keys, the inner key and the outer key, and our attack can partially recover the outer key without knowledge of the inner key.
-
Ilsun YOU, Jong-Hyouk LEE, Kouichi SAKURAI, Yoshiaki HORI
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1096-1105
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Fast Handover for Hierarchical Mobile IPv6 (F-HMIPv6), which combines the advantages of Fast Handover for Mobile IPv6 (FMIPv6) and Hierarchical Mobile IPv6 (HMIPv6), achieves superior performance in terms of handover latency and signaling overhead compared with previously developed mobility protocols. However, without being secured, F-HMIPv6 is vulnerable to various security threats. In 2007, Kang and Park proposed a security scheme that is seamlessly integrated into F-HMIPv6. In this paper, we reveal that Kang and Park's scheme cannot defend against Denial of Service (DoS) and redirect attacks while relying largely on a group key. We then propose an Enhanced Security Scheme for F-HMIPv6 (ESS-FH) that achieves strong key exchange and key independence, and addresses the weaknesses of Kang and Park's scheme. More importantly, it enables fast handover between different MAP domains. The proposed scheme is formally verified using BAN logic, and its handover latency is analyzed and compared with that of Kang and Park's scheme.
-
Masashi ETO, Kotaro SONODA, Daisuke INOUE, Katsunari YOSHIOKA, Koji NA ...
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1106-1116
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring, we can observe that malware exhibits various kinds of scan patterns in choosing destination IP addresses. Since many of these oscillations appear to have a natural periodicity, as if they were signal waveforms, we applied a spectrum analysis methodology to extract malware features. Focusing on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has a significant advantage in recognizing the similarities and dissimilarities between the same and different types of malware.
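A hedged sketch of the general idea (our own construction, not SPADE itself): binning a scanner's packet arrival times into a time series and taking its discrete Fourier transform reveals the dominant scan period as the strongest spectral peak.

```python
import numpy as np

def dominant_period(timestamps, bin_width=1.0):
    # Bin packet arrival times into a time series, then locate the
    # strongest periodic component via the discrete Fourier transform.
    t = np.asarray(timestamps, dtype=float)
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    series, _ = np.histogram(t, bins=edges)
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    freqs = np.fft.rfftfreq(len(series), d=bin_width)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 1.0 / peak

# A synthetic scanner probing roughly every 5 seconds with jitter.
rng = np.random.default_rng(0)
scans = np.arange(200) * 5.0 + rng.normal(0.0, 0.4, 200)
```

Here `dominant_period(scans)` recovers a period close to 5 seconds; a feature vector for classification could collect the locations and strengths of several such peaks.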
-
Chun-Liang LEE, Chia-Tai CHAN, Pi-Chung WANG
Article type: PAPER
2010 Volume E93.D Issue 5 Pages 1117-1126
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Packet classification has become one of the most important techniques in network security over the last decade. It uses a traffic descriptor or user-defined criteria to categorize packets into specific forwarding classes for subsequent security handling. To achieve fast packet classification, we propose a new scheme, Hierarchical Cross-Producting. This approach simplifies the classification procedure and decreases the number of distinct combinations of fields by hierarchically decomposing the multi-dimensional space, based on the concept of telescopic search: analogous to using telescopes of different powers, a multiple-step process is used to search for targets. In our scheme, the multi-dimensional space is endowed with a hierarchical property that divides it into several smaller subspaces, and the procedure of packet classification is translated into a recursive search for matching subspaces. The required storage of our scheme can be significantly reduced since the distinct field specifications of subspaces are manageable. The performance is evaluated on both real and synthetic filter databases, and the experimental results demonstrate the effectiveness and scalability of the proposed scheme.
-
Md. Rakib HASSAN, Md. Monirul ISLAM, Kazuyuki MURASE
Article type: PAPER
Subject area: Fundamentals of Information Systems
2010 Volume E93.D Issue 5 Pages 1127-1136
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Ant Colony Optimization (ACO) algorithms are a branch of swarm intelligence that has been applied successfully to various combinatorial optimization problems. Their performance is very promising on small problem instances; for large instances, however, the algorithms' running time increases and solution quality decreases. It is therefore crucial to reduce the time requirement and at the same time increase the solution quality of ACO algorithms on large combinatorial optimization problems. This paper introduces a Local Search based ACO algorithm (LSACO), a new algorithm for solving large combinatorial optimization problems. The basis of LSACO is an adaptive local search method that improves solution quality by automatically determining the number of edges to exchange during the execution of the algorithm. LSACO also applies a new pheromone updating rule and constructs solutions in a new way so as to decrease the convergence time. The performance of LSACO has been evaluated on a number of benchmark combinatorial optimization problems, and the results are compared with several existing ACO algorithms. Experimental results show that LSACO produces good-quality solutions with a higher rate of convergence for most of the problems.
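The paper's local search adaptively varies the number of exchanged edges; as a hedged, simpler stand-in, a fixed 2-opt edge exchange for a traveling-salesman tour illustrates the edge-exchange idea:

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    # Reverse a segment whenever that shortens the tour; repeat until
    # no improving edge exchange remains (a 2-opt local optimum).
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# Four cities on a unit square; the optimal tour has length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
```

In an ACO hybrid, each ant's constructed tour would be polished by such a local search before the pheromone update.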
-
In Hwan DOH, Hyo J. LEE, Young Je MOON, Eunsam KIM, Jongmoo CHOI, Dong ...
Article type: PAPER
Subject area: Software System
2010 Volume E93.D Issue 5 Pages 1137-1146
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
File systems make use of the buffer cache to enhance their performance. Traditionally, part of DRAM, which is volatile memory, is used as the buffer cache. In this paper, we consider the use of Non-Volatile RAM (NVRAM) as a write cache for file system metadata in embedded systems. NVRAM is a state-of-the-art memory that provides both non-volatility and random byte addressability. By employing NVRAM as a write cache for dirty metadata, we retain the same integrity as a file system that always writes its metadata to storage synchronously, while at the same time improving file system performance to the level of one that always writes asynchronously. To obtain quantitative results, we developed an embedded board with NVRAM and modified the VFAT file system provided in Linux 2.6.11 to accommodate the NVRAM write cache. We performed a wide range of experiments on this platform with various synthetic and realistic workloads. The results show that substantial reductions in execution time are possible from the application viewpoint. The write cache also benefits the FTL layer, leading to improved wear leveling of Flash memory and increased energy savings, which are important measures in embedded systems. From the numbers obtained in our experiments, we show that wear leveling is improved considerably, and we quantify the improvements in terms of energy.
-
Yong-Hee KIM, Myoung-Jo JUNG, Cheol-Hoon LEE
Article type: PAPER
Subject area: Software System
2010 Volume E93.D Issue 5 Pages 1147-1153
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
We propose a dynamic voltage scaling algorithm called TLDVS (Temporal Locality DVS) that exploits temporal locality to achieve significant energy savings while preserving the timeliness guarantees made by real-time scheduling. Traditional hard real-time scheduling algorithms assume that the actual computation requirements of tasks vary continuously over time, but most real-time tasks have a limited number of operational modes that change with temporal locality. Such temporal locality can be exploited for energy savings by scaling down the operating frequency and supply voltage accordingly. The proposed algorithm does not assume task periodicity and requires only the previous execution time as a priori information about the task set to be scheduled. Simulation results show that TLDVS achieves up to 25% energy savings compared with OLDVS, and up to 42% compared with non-DVS scheduling.
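An illustrative sketch of the locality intuition (our own construction, not the paper's algorithm): predict that the next job needs about as many cycles as the previous one, then pick the lowest available frequency that still meets the deadline.

```python
def next_frequency(prev_cycles, deadline_s, freq_levels_hz):
    # Temporal-locality heuristic: assume the next job runs for about
    # prev_cycles cycles and choose the slowest (lowest-power) frequency
    # that still finishes within the deadline; otherwise run flat out.
    for f in sorted(freq_levels_hz):
        if prev_cycles / f <= deadline_s:
            return f
    return max(freq_levels_hz)
```

A real DVS scheduler must additionally guard the worst case so that a mispredicted mode change cannot cause a deadline miss.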
-
Ing-Xiang CHEN, Chien-Hung LI, Cheng-Zen YANG
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2010 Volume E93.D Issue 5 Pages 1154-1161
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Automated bug localization is an important issue in software engineering. In the last few decades, various proactive and reactive localization approaches have been proposed to predict fault-prone software modules. However, most such approaches need source code information or software complexity metrics to perform localization. In this paper, we propose a reactive approach that considers only bug report information and historical revision logs. In our approach, the co-location relationships among bug reports are explored to improve the prediction accuracy of a state-of-the-art learning method. Studies on three open source projects reveal that the proposed scheme consistently improves the prediction accuracy in all three projects, by nearly 11.6% on average.
-
Yu ZHANG, Binxing FANG, Hao LUO
Article type: PAPER
Subject area: Information Network
2010 Volume E93.D Issue 5 Pages 1162-1174
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
We consider the problem of fast identification of high-rate flows in backbone links carrying possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security, such as the detection of distributed denial of service attacks. Directly identifying high-rate flows in backbone links is difficult because tracking the possibly millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, we adopt the deterministic 1-out-of-k sampling technique, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have a short identification time, low memory cost, and low processing cost; most importantly, it should allow the identification accuracy to be specified. We develop two such methods. The first is based on a fixed sample size test (FSST) and identifies high-rate flows with user-specified accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. We therefore propose a second, novel method based on a truncated sequential probability ratio test (TSPRT). Through sequential sampling, TSPRT removes low-rate flows and identifies high-rate flows at an early stage, which reduces the memory cost and the identification time, respectively. Depending on how the parameters in TSPRT are determined, we propose two versions: TSPRT-M, suitable when low memory cost is preferred, and TSPRT-T, suitable when short identification time is preferred. Experimental results show that, compared to previously proposed methods, TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement.
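A hedged sketch of the underlying sequential idea (classic Wald SPRT on Bernoulli sampling outcomes; the truncation and parameter choices that distinguish TSPRT-M and TSPRT-T are not reproduced here):

```python
import math

def sprt(samples, p0, p1, alpha=0.01, beta=0.01):
    # Classify a flow as high-rate (hit probability >= p1) or low-rate
    # (<= p0) as soon as the log-likelihood ratio crosses a threshold;
    # alpha and beta bound the false-positive and false-negative rates.
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, hit in enumerate(samples, 1):
        llr += math.log(p1 / p0) if hit else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "high-rate", n
        if llr <= lower:
            return "low-rate", n
    return "undecided", len(samples)
```

The memory saving comes from the early "low-rate" exits: the vast majority of flows can be discarded after a handful of samples instead of being tracked for the whole measurement period.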
-
Jianliang GAO, Yinhe HAN, Xiaowei LI
Article type: PAPER
Subject area: Information Network
2010 Volume E93.D Issue 5 Pages 1175-1185
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Bugs are becoming unavoidable in complex integrated circuit designs, and it is imperative to identify them as soon as possible through post-silicon debug, where observability is one of the biggest challenges. The scan-based debug mechanism provides high observability by reusing scan chains; however, cycle-by-cycle scan dumps during program execution are not feasible due to the excessive time required. In fact, it is not necessary to scan out the error-free states. In this paper, we introduce the Suspect Window, which covers the clock cycle in which a bug is triggered, and present an efficient approach to determine it. Based on the Suspect Window, we propose a novel debug mechanism to locate the bug both temporally and spatially. Since scan dumps are only taken within the suspect window, the time required to locate the bug is greatly reduced. The approaches are evaluated using the ISCAS'89 and ITC'99 benchmark circuits. The experimental results show that the proposed mechanism significantly reduces the overall debug time compared to the scan-based debug mechanism while maintaining high observability.
-
Hirotoshi IWASAKI, Nobuhiro MIZUNO, Kousuke HARA, Yoichi MOTOMURA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2010 Volume E93.D Issue 5 Pages 1186-1196
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
Mobile devices such as cellular phones and car navigation systems are essential to daily life, allowing people to acquire necessary information and preferred content over communication networks anywhere, anytime. However, usability issues arise from the simplicity of the user interfaces themselves, so recommending content adapted to a user's preferences and situation helps the user select content. In this paper, we describe a method to realize such a system using Bayesian networks. This user-adapted mobile system is based on a user model that recommends content (i.e., restaurants, shops, and music suitable to the user and situation) and that learns incrementally from accumulated usage history data. However, sufficient samples are not always available, since the user model involves combined dependencies among users, situations, and contents. We therefore propose the LK method for modeling, which complements incomplete and insufficient samples using knowledge data, and CPT incremental learning for adaptation based on a small number of samples. To evaluate the proposed methods, we applied them to restaurant recommendation on car navigation systems. The evaluation results confirmed that our model based on the LK method can be expected to provide better generalization performance than the conventional method. Furthermore, our system requires much less operation than current car navigation systems from the beginning of use. Our evaluation results also indicate that learning a user's individual preferences through CPT incremental learning would benefit many users, even with only a few samples. As a result, we have developed a system that becomes better adapted to a user the more it is used.
-
Keita IMADA, Katsuhiko NAKAMURA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2010 Volume E93.D Issue 5 Pages
1197-1204
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called "bridging," based on bottom-up parsing of positive samples and a search for rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search for synthesizing minimal rule sets and serial search, another method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
-
Yizhong XIN, Xiangshi REN
Article type: PAPER
Subject area: Human-computer Interaction
2010 Volume E93.D Issue 5 Pages
1205-1213
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Continuous pen pressure can be used to operate multi-state widgets such as menus in pen-based user interfaces. The number of levels into which the pen pressure space is divided determines the number of states in the multi-state widgets. To increase the optimal number of divisions of the pen pressure space and achieve greater pen pressure usability, we propose a new discretization method which divides the pen pressure space according to a personal pen pressure use profile. We present here four variations of the method, discretization according to a personal/aggregate pen pressure use profile with/without visual feedback of uniform level widths, plus the traditional even discretization method. Two experiments were conducted, respectively, to investigate pen pressure use profiles and to comparatively evaluate the performance of these methods. Results indicate that subjects performed fastest and with the fewest errors when the pen pressure space was divided according to a personal profile with visual feedback of uniform level widths (PU), and performed worst when the pen pressure space was divided evenly. With the PU method, the optimal number of divisions of the pen pressure space was 8. Visual feedback of uniform level widths enhanced the performance of uneven discretization. The findings of this study have implications for human-oriented pen pressure use in pen-pressure-based user interface designs.
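The difference between even discretization and a personal-profile discretization can be sketched as follows; the quantile rule and all names here are illustrative assumptions, not the paper's exact procedure:

```python
import bisect

def even_levels(p_max: float, n: int) -> list:
    """Traditional even discretization: n equal-width levels over [0, p_max],
    returned as the n - 1 interior boundaries."""
    return [p_max * k / n for k in range(1, n)]

def profile_levels(samples: list, n: int) -> list:
    """Profile-based discretization (assumed quantile rule): boundaries at
    quantiles of the user's own pressure samples, so each level is hit
    about equally often by that user."""
    s = sorted(samples)
    return [s[len(s) * k // n] for k in range(1, n)]

def level_of(p: float, boundaries: list) -> int:
    """Map a pressure value to its 0-based discrete level."""
    return bisect.bisect_right(boundaries, p)

# A user who mostly presses lightly gets finer levels at low pressures
samples = [0.1] * 50 + [0.2] * 30 + [0.8] * 20
personal = profile_levels(samples, 4)   # boundaries cluster near low values
even = even_levels(1.0, 4)              # equal-width boundaries
```

Under this sketch, the hypothetical light-pressing user gets several levels packed into the low-pressure range they actually use, which is the intuition behind dividing by personal profile.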
-
Jinyong JO, JongWon KIM
Article type: PAPER
Subject area: Educational Technology
2010 Volume E93.D Issue 5 Pages
1214-1224
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
The recent growth in available network bandwidth envisions the wide-spread use of broadband applications such as uncompressed HD-SDI (high-definition serial digital interface) over IP. These cutting-edge applications are also driving the development of a media-oriented infrastructure for networked collaboration. This paper introduces imCast, a high-quality digital media platform dealing with uncompressed HD-SDI over IP, and discusses its internal architecture in depth. imCast mainly provides cost-effective hardware-based approaches for high-quality media acquisition and presentation, flexible software-based approaches for presentation, and economical network transmission. Experimental results (taken over best-effort IP networks) demonstrate the functional feasibility and performance of imCast.
-
Bing-Fei WU, Hao-Yu HUANG, Yen-Lin CHEN, Hsin-Yuan PENG, Jia-Hsiung HU ...
Article type: PAPER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 5 Pages
1225-1237
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
This study presents several optimization approaches for the MPEG-2/4 Advanced Audio Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important issue in implementing an AAC codec on embedded and mobile devices is reducing computational complexity and memory consumption. Because of power-saving concerns, most embedded and mobile systems can provide only very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of the other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM core for peripheral control. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.
-
Fu-Kun CHEN, Kuo-Bao KUO
Article type: PAPER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 5 Pages
1238-1243
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Unlike the long-term prediction used in modern speech codecs, the internet low bit rate codec (iLBC) standard independently encodes the residual of the linear predictive coding (LPC) frame by frame. In this paper, a complexity-scalable design is proposed for the dynamic codebook search in the iLBC speech codec. In addition, a trade-off between computational complexity and speech quality can be achieved by dynamically setting the parameter of the proposed approach. Simulation results show that the computational complexity can be effectively reduced with imperceptible degradation of the speech quality.
-
Makoto SAKAI, Norihide KITAOKA, Kazuya TAKEDA
Article type: PAPER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 5 Pages
1244-1252
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
To improve speech recognition performance, feature transformation based on discriminant analysis has been widely used to reduce the redundant dimensions of acoustic features. Linear discriminant analysis (LDA) and heteroscedastic discriminant analysis (HDA) are often used for this purpose, and a generalization method for LDA and HDA, called power LDA (PLDA), has been proposed. However, these methods may result in an unexpected dimensionality reduction for multimodal data. It is important to preserve the local structure of the data when reducing the dimensionality of multimodal data. In this paper we introduce two methods, locality-preserving HDA and locality-preserving PLDA, to reduce dimensionality of multimodal data appropriately. We also propose an approximate calculation scheme to calculate sub-optimal projections rapidly. Experimental results show that the locality-preserving methods yield better performance than the traditional ones in speech recognition.
-
Keita HIRAI, Jambal TUMURTOGOO, Ayano KIKUCHI, Norimichi TSUMURA, Tosh ...
Article type: PAPER
Subject area: Image Processing and Video Processing
2010 Volume E93.D Issue 5 Pages
1253-1262
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
With the development and popularization of high-definition televisions, digital video cameras, Blu-ray discs, digital broadcasting, IP television and so on, identifying and quantifying video quality degradations plays an important role. In this paper, we propose SV-CIELAB, an objective video quality assessment (VQA) method using a spatio-velocity contrast sensitivity function (SV-CSF). In SV-CIELAB, motion information in videos is effectively utilized to filter out unnecessary information in the spatial frequency domain. As the filter applied to videos, we use the SV-CSF. It is a modulation transfer function of the human visual system, and describes the relationship among contrast sensitivities, spatial frequencies and velocities of perceived stimuli. In the filtering process, the SV-CSF cannot be applied directly in the spatial frequency domain, because spatial coordinate information is required when using velocity information. For filtering by the SV-CSF, we obtain video frames separated by spatial frequency band. Using velocity information, the separated frames with limited spatial frequencies are weighted by the contrast sensitivities in the SV-CSF model. In SV-CIELAB, the criteria are obtained by calculating image differences between the filtered original and distorted videos. To validate SV-CIELAB, subjective evaluation experiments were conducted. The subjective experimental results were compared against SV-CIELAB and conventional VQA methods such as the CIELAB color difference, Spatial-CIELAB, signal-to-noise ratio and so on. The experimental results showed that SV-CIELAB is a more efficient VQA method than the conventional methods.
-
Zhu LI, Kenichi YABUTA, Hitoshi KITAZAWA
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2010 Volume E93.D Issue 5 Pages
1263-1271
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Robust object tracking is required by many vision applications, and it is useful for the motion analysis of a moving object if we can not only track the object but also clarify the correspondence of each of its parts between consecutive frames. For this purpose, we propose a new method for moving object extraction and tracking based on exclusive block matching. We build a cost matrix consisting of the similarities between the current frame's and the previous frame's blocks, and obtain the correspondence by solving the one-to-one matching as a linear assignment problem. In addition, we can track the trajectory of occluded blocks by dealing with multiple frames simultaneously.
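The core idea, building a similarity matrix between blocks of consecutive frames and solving a one-to-one assignment, can be sketched as below. The brute-force permutation search and the SAD-based similarity are stand-ins for illustration; the paper poses the same linear assignment problem, which in practice is solved with an efficient algorithm such as the Hungarian method:

```python
from itertools import permutations

def block_similarity(a: list, b: list) -> float:
    """Similarity between two blocks: negative sum of absolute differences
    (an assumed measure; any similarity would fit the same framework)."""
    return -sum(abs(x - y) for x, y in zip(a, b))

def exclusive_match(prev_blocks: list, curr_blocks: list) -> list:
    """One-to-one block correspondence maximizing total similarity.
    Brute force over permutations, viable only for tiny examples."""
    n = len(prev_blocks)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(block_similarity(prev_blocks[i], curr_blocks[perm[i]])
                    for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return list(best)  # best[i] = index of the current-frame block matched to prev block i

prev = [[10, 10], [200, 200], [90, 90]]
curr = [[91, 89], [11, 12], [198, 201]]  # same blocks, shuffled and noisy
matching = exclusive_match(prev, curr)
```

The exclusivity (each current block used at most once) is what distinguishes this from independent nearest-neighbor block matching, where two previous blocks could both claim the same current block.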
-
Takanobu OBA, Takaaki HORI, Atsushi NAKAMURA
Article type: PAPER
Subject area: Natural Language Processing
2010 Volume E93.D Issue 5 Pages
1272-1281
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
A dependency structure interprets modification relationships between words or phrases and is recognized as an important element in semantic information analysis. The conventional approaches for extracting this dependency structure assume that the complete sentence is known before the analysis starts. For spontaneous speech data, however, this assumption is not necessarily correct, since sentence boundaries are not marked in the data. Although sentence boundaries can be detected before dependency analysis, such a cascaded implementation is not suitable for online processing since it delays the responses of the application. To solve these problems, we previously proposed a sequential dependency analysis (SDA) method for online spontaneous speech processing, which enabled us to analyze incomplete sentences sequentially and detect sentence boundaries simultaneously. In this paper, we propose an improved SDA integrating a labeling-based sentence boundary detection (SntBD) technique based on Conditional Random Fields (CRFs). In the new method, we use a CRF for soft decisions on sentence boundaries and combine it with SDA to retain its online framework. Since CRF-based SntBD yields better estimates of sentence boundaries, SDA can provide better results in which the dependency structure and sentence boundaries are consistent. Experimental results using spontaneous lecture speech from the Corpus of Spontaneous Japanese show that our improved SDA outperforms the original SDA in SntBD accuracy, providing better dependency analysis results.
-
Xia MAO, Zheng LI
Article type: PAPER
Subject area: Kansei Information Processing, Affective Information Processing
2010 Volume E93.D Issue 5 Pages
1282-1290
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from an AU-coded facial expression database and real-time eye movement data (pupil size, blink rate and saccade). A rule-based approach is introduced to generate primary emotions (joyful, sad, angry, afraid, disgusted and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions), utilizing the MPEG-4 FAPs (facial animation parameters). Meanwhile, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language) that enables authors to describe and generate emotional eye movement of virtual agents is proposed.
-
Kazuto OSHIMA
Article type: LETTER
Subject area: Fundamentals of Information Systems
2010 Volume E93.D Issue 5 Pages
1291-1293
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
We exhibit a simple procedure to find how classical signals should be processed in cluster-state quantum computation. Using stabilizers characterizing a cluster state, we can easily find a precise classical signal-flow that is required in performing cluster-state computation.
-
Kai-Jie ZHENG, Ji-Gen PENG, Ke-Xue LI
Article type: LETTER
Subject area: Fundamentals of Information Systems
2010 Volume E93.D Issue 5 Pages
1294-1296
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Graph matching is an NP-hard problem. In this paper, we relax the admissible set of permutation matrices and, at the same time, incorporate a barrier function into the objective function. The resulting model is equivalent to the original one. An alternating iteration algorithm is designed to solve it, and the proposed algorithm is proven to be locally convergent. Our experimental results reveal that the proposed algorithm outperforms the algorithm in [1].
-
Zhan GAO, Siwei LUO
Article type: LETTER
Subject area: Computer System
2010 Volume E93.D Issue 5 Pages
1297-1300
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Traditional double auction protocols consider only the price information of participants, without considering their QoS requirements, which makes them unsuitable for the service grid. In this paper, we introduce QoS information into the double auction to present the QoS-enabled Double Auction Protocol (QDAP). QDAP first assigns the asks that have the fewest candidate bids, so that more participants can trade, and provides QoS guarantees at the same time. Simulation experiments have been performed to compare QDAP with two traditional double auction protocols, and the results show that QDAP is more suitable for the service grid.
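The "fewest candidate bids first" ordering can be sketched as a toy in Python. The candidate condition (a bid whose price and QoS satisfy the ask) and all names are assumptions for illustration, not the protocol's exact specification:

```python
def qdap_assign(asks: list, bids: list) -> dict:
    """Toy sketch of QDAP's ordering rule.

    asks/bids are (name, price, qos) tuples. A bid is assumed to be a
    candidate for an ask when bid price >= ask price and bid qos
    requirement <= the qos the ask offers. Asks with the fewest
    candidates are matched first, so scarce asks are not starved and
    more participants get to trade."""
    candidates = {
        a[0]: [b[0] for b in bids if b[1] >= a[1] and b[2] <= a[2]]
        for a in asks
    }
    matches, taken = {}, set()
    for ask in sorted(candidates, key=lambda a: len(candidates[a])):
        for bid in candidates[ask]:
            if bid not in taken:
                matches[ask] = bid
                taken.add(bid)
                break
    return matches

# a1 can only trade with b1; matching a1 first lets both asks trade.
result = qdap_assign([("a1", 5, 2), ("a2", 3, 3)],
                     [("b1", 6, 1), ("b2", 4, 2)])
```

In the example, a price-only greedy pass that handled a2 first could grab b1 and strand a1; the fewest-candidates-first rule avoids that, which is the intuition the abstract describes.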
-
Mohamed RASSLAN, Amr YOUSSEF
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2010 Volume E93.D Issue 5 Pages
1301-1302
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Tseng et al. proposed two efficient authenticated encryption schemes with message linkages for message flows. Hwang et al. (IEICE Trans. Inf. and Syst., Vol. E89-D, No. 4, April 2006) presented a forgery attack against these two schemes and proposed an improvement that they claim resists such attacks. In this paper, we show that the improved authenticated encryption schemes proposed by Hwang et al. are not secure by presenting another message forgery attack against these improved schemes.
-
Dae Hyun YUM, Jae Woo SEO, Kookrae CHO, Pil Joong LEE
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2010 Volume E93.D Issue 5 Pages
1303-1306
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
A hash chain H for a one-way hash function h(·) is a sequence of hash values <v0, v1, . . . , vn>, where v0 is a public value, vn a secret value, and vi = h(vi+1). A hash chain traversal algorithm T computes and outputs the hash chain H, returning vi in time period (called round) i for 1 ≤ i ≤ n. While previous hash chain traversal algorithms were designed to output all hash values vi (1 ≤ i ≤ n) in order, there are applications where only every m-th hash value (i.e., vm, v2m, v3m, . . .) is required. We introduce a hash chain traversal algorithm that selectively outputs every m-th hash value efficiently. The main technique is a transformation from a hash chain traversal algorithm that outputs every hash value into one that outputs every m-th hash value. Compared with the direct use of previous hash chain traversal algorithms, our proposed method requires less memory storage and lower computational cost.
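As a concrete illustration of the definitions above, the following Python sketch builds a chain by repeated hashing and then selects every m-th value. SHA-256 stands in for h(·), and the naive keep-everything approach shown here is only for illustration; the paper's contribution is a traversal algorithm that avoids storing or recomputing the whole chain:

```python
import hashlib

def hash_chain(seed: bytes, n: int) -> list:
    """Build <v0, ..., vn> with vn = seed and vi = h(v(i+1))."""
    h = lambda x: hashlib.sha256(x).digest()
    chain = [seed]              # starts at the secret end vn
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()             # now chain[i] == vi; chain[0] is the public v0
    return chain

chain = hash_chain(b"secret", 12)
# The defining relation vi = h(v(i+1)) holds along the whole chain
assert all(chain[i] == hashlib.sha256(chain[i + 1]).digest()
           for i in range(12))

# Selective output: every m-th value vm, v2m, v3m, ...
m = 3
selected = [chain[i] for i in range(m, len(chain), m)]
```

In this naive form the entire chain sits in memory; traversal algorithms of the kind the paper builds on instead keep a few stored intermediate values and recompute the rest on demand, which is exactly the storage/computation trade-off the abstract refers to.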
-
Kuo-Chen HUNG, Yu-Wen WOU, Peterson JULIAN
Article type: LETTER
Subject area: Pattern Recognition
2010 Volume E93.D Issue 5 Pages
1307-1308
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
This paper is in response to the report of Ahn, Mun, Kim, Oh, and Han published in IEICE Trans. Inf. & Syst., Vol. E91-D, No. 4, 2008, pp. 1215-1217. They tried to extend their previous paper, published in IEICE Trans. Inf. & Syst., Vol. E86-D, No. 12, 2003, pp. 2790-2793. However, we point out that their extension relies on detailed data, namely knowledge of the frequencies of the three types, so their new occurrence information based on intuitionistic fuzzy sets for the medical diagnosis of headache becomes redundant. We advise researchers to use the detailed data directly to decide the diagnosis of headache.
-
Jae-woong JEONG, Young-cheol PARK, Dae-hee YOUN, Seok-Pil LEE
Article type: LETTER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 5 Pages
1309-1312
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
In this paper, we propose a robust room inverse filtering algorithm for speech dereverberation based on kurtosis maximization. The proposed algorithm utilizes a new normalized kurtosis function that nonlinearly maps the input kurtosis onto a finite range from zero to one, resulting in a kurtosis warping. Due to this warping, the proposed algorithm provides more stable convergence and, in turn, better performance than the conventional algorithm. Experimental results are presented to confirm the robustness of the proposed algorithm.
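The idea of warping the kurtosis onto a bounded range can be illustrated as below. The specific mapping k/(k + c) is a hypothetical stand-in, since the abstract does not give the paper's exact normalization; it only shows why a bounded, monotone warping keeps an adaptive update well behaved:

```python
def kurtosis(x: list) -> float:
    """Sample kurtosis m4 / m2^2 (equals 3 for Gaussian data)."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (m2 ** 2)

def normalized_kurtosis(k: float, c: float = 3.0) -> float:
    """Hypothetical warping k -> k / (k + c): monotone in k and bounded
    in [0, 1), so outlier frames cannot produce unbounded gradients."""
    return k / (k + c)
```

Because the warped value saturates toward 1 for very peaky inputs, an adaptation step driven by it cannot blow up the way one driven by raw kurtosis (which is unbounded above) can, matching the abstract's claim of more stable convergence.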
-
Suk Tae SEO, In Keun LEE, Hye Cheun JEONG, Soon Hak KWON
Article type: LETTER
Subject area: Image Processing and Video Processing
2010 Volume E93.D Issue 5 Pages
1313-1316
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Histogram equalization is the most popular method for image enhancement. However, it has some drawbacks: i) it causes undesirable artifacts, and ii) it can degrade the visual quality. To overcome these drawbacks, in this letter, multi-histogram equalization on a histogram smoothed with a Gaussian kernel is proposed. To demonstrate its effectiveness, the method is tested on several images and compared with conventional methods.
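The general pipeline, smoothing the histogram with a Gaussian kernel before building the equalization mapping, might be sketched as follows. This is the single-histogram version only; the letter's multi-histogram partitioning is omitted, and the kernel width and border handling are assumptions:

```python
import math

def gaussian_smooth_hist(hist: list, sigma: float = 2.0) -> list:
    """Smooth a histogram with a truncated, normalized Gaussian kernel."""
    r = int(3 * sigma)
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    n, out = len(hist), []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)  # clamp at the borders
            acc += k * hist[idx]
        out.append(acc)
    return out

def equalize_map(hist: list, levels: int = 256) -> list:
    """Gray-level mapping from the cumulative distribution of a histogram."""
    total = sum(hist)
    cdf, acc = [], 0.0
    for count in hist:
        acc += count
        cdf.append(acc / total)
    return [round(c * (levels - 1)) for c in cdf]

smoothed = gaussian_smooth_hist([0] * 10 + [100] + [0] * 10)
mapping = equalize_map([1] * 256)
```

Smoothing first spreads isolated spikes over neighboring bins, so the cumulative distribution (and hence the mapping) changes more gently, which is one way to temper the artifacts plain equalization produces.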
-
Quan MIAO, Guijin WANG, Xinggang LIN
Article type: LETTER
Subject area: Image Processing and Video Processing
2010 Volume E93.D Issue 5 Pages
1317-1320
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Image sequence registration has attracted increasing attention due to its significance in image processing and computer vision. In this paper, we put forward a new kernel-based image registration approach, combining both feature-based and intensity-based methods. The proposed algorithm consists of two steps: the first step utilizes feature points to roughly estimate the motion parameters between successive frames; the second step applies our kernel-based idea to align all the frames to the reference frame (typically the first frame). Experimental results using both synthetic and real image sequences demonstrate that our approach can automatically register all the image frames and is robust against illumination change, occlusion and image noise.
-
Chang LIU, Guijin WANG, Fan JIANG, Xinggang LIN
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2010 Volume E93.D Issue 5 Pages
1321-1324
Published: May 01, 2010
Released on J-STAGE: May 01, 2010
JOURNAL
FREE ACCESS
Object detection and tracking is one of the most important research topics in pattern recognition and the basis of many computer vision systems. Many accomplishments in this field have been achieved recently. Some specific objects, such as human faces and vehicles, can already be detected in various applications. However, tracking objects with large variances in color, texture and local shape (such as pedestrians) is still a challenging topic in this field. To solve this problem, a pedestrian tracking scheme, including online training of the pedestrian detector, is proposed in this paper. Simulation and analysis of the results show that the proposed method can deal with illumination change, pose change, occlusion, and any combination thereof.