IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E93.D, Issue 8
Showing articles 1-39 of 39 from the selected issue
Special Section on Multiple-Valued Logic and VLSI Computing
  • Michitaka KAMEYAMA
    2010 Volume E93.D Issue 8 Pages 2025
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Download PDF (54K)
  • Tsutomu SASAO, Hiroki NAKAHARA, Munehiro MATSUURA, Yoshifumi KAWAMURA, ...
    Type: INVITED PAPER
    2010 Volume E93.D Issue 8 Pages 2026-2035
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper first reviews trends in VLSI design, focusing on power dissipation and programmability. Then, we show the advantage of Quaternary Decision Diagrams (QDDs) in representing and evaluating logic functions: we show how QDDs are used to implement QDD machines, which yield high-speed implementations. We compare QDD machines with binary decision diagram (BDD) machines, and show a speed improvement of 1.28-2.02 times when QDDs are chosen. We consider 1- and 2-address BDD machines and 3- and 4-address QDD machines, and we show a method to minimize the number of instructions.
    Download PDF (652K)
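The speed advantage of quaternary branching can be sketched with a toy interpreter. The node layout and "instruction" format below are invented for illustration, not the paper's actual BDD/QDD machine design: the point is only that a QDD node consumes two input bits per branch, so an evaluation traverses roughly half as many nodes as with a BDD.

```python
# Toy decision-diagram "machines". A BDD node branches on one input bit;
# a QDD node branches on a pair of bits (one quaternary digit), so a QDD
# evaluation takes roughly half as many branch steps.
# Negative indices encode terminals: -1 -> value 0, -2 -> value 1.

def eval_bdd(nodes, root, bits):
    """nodes[i] = (var, lo_index, hi_index)."""
    i, steps = root, 0
    while i >= 0:
        var, lo, hi = nodes[i]
        i = hi if bits[var] else lo
        steps += 1
    return -(i + 1), steps

def eval_qdd(nodes, root, bits):
    """nodes[i] = (var, (e0, e1, e2, e3)); branches on bits var, var+1."""
    i, steps = root, 0
    while i >= 0:
        var, edges = nodes[i]
        i = edges[2 * bits[var] + bits[var + 1]]
        steps += 1
    return -(i + 1), steps

# 4-input AND, once as a BDD chain and once as a 2-node QDD:
AND4_BDD = [(0, -1, 1), (1, -1, 2), (2, -1, 3), (3, -1, -2)]
AND4_QDD = [(0, (-1, -1, -1, 1)), (2, (-1, -1, -1, -2))]
```

On the 4-input AND, the BDD machine takes four branches while the QDD machine takes two, which is the effect behind the speedups reported in the abstract.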
  • Koki NISHIZAWA
    Type: INVITED PAPER
    2010 Volume E93.D Issue 8 Pages 2036-2039
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In this paper, I will show how multi-valued logics are used for model checking. Model checking is an automatic technique for analyzing the correctness of hardware and software systems. A model checker is based on a temporal logic or a modal fixed-point logic: a system to be checked is formalized as a Kripke model, a property to be satisfied by the system is formalized as a temporal or modal formula, and the model checker checks that the Kripke model satisfies the formula. Although most existing model checkers are based on 2-valued logics, new attempts have recently been made to extend the underlying logics of model checkers to multi-valued logics. I will summarize these new results.
    Download PDF (108K)
  • Noboru TAKAGI
    Type: PAPER
    Subject area: Logic Design
    2010 Volume E93.D Issue 8 Pages 2040-2047
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Delay models for binary logic circuits have been proposed, and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models for expressing the transient behavior of binary logic circuits; Goto first applied it to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many other delay models for binary logic circuits, such as Lewis's 5-valued logic. Meanwhile, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits, because, for example, they can dramatically reduce chip size. Although multiple-valued logic circuits are becoming more important, there has been little discussion of their delay models. In this paper, we therefore introduce a delay model of multiple-valued logic circuits constructed from Min, Max, and Literal operations, and show some of its mathematical properties.
    Download PDF (171K)
  • Hiroki NAKAHARA, Tsutomu SASAO, Munehiro MATSUURA, Yoshifumi KAWAMURA
    Type: PAPER
    Subject area: Logic Design
    2010 Volume E93.D Issue 8 Pages 2048-2058
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    The parallel branching program machine (PBM128) consists of 128 branching program machines (BMs) and a programmable interconnection. To represent logic functions on the BMs, we use quaternary decision diagrams; to evaluate functions, we use 3-address quaternary branch instructions. We realized many benchmark functions on the PBM128 and compared its memory size, computation time, and power consumption with those of Intel's Core2Duo microprocessor. The PBM128 requires approximately a quarter of the memory of the Core2Duo, is 21.4-96.1 times faster, and dissipates a quarter of the power. Also, we realized packet filters, such as an access controller and a firewall, and compared their performance with software on the Core2Duo. For these packet filters, the PBM128 requires approximately 17% of the memory of the Core2Duo and is 21.3-23.7 times faster.
    Download PDF (1064K)
  • Shinobu NAGAYAMA, Tsutomu SASAO, Jon T. BUTLER
    Type: PAPER
    Subject area: Logic Design
    2010 Volume E93.D Issue 8 Pages 2059-2067
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes a high-speed architecture to realize two-variable numeric functions. It represents the given function as an edge-valued multiple-valued decision diagram (EVMDD), and shows a systematic design method based on the EVMDD. To achieve a design, we characterize a numeric function ƒ by the values of l and p for which ƒ is an l-restricted Mp-monotone increasing function. Here, l is a measure of subfunctions of ƒ and p is a measure of the rate at which ƒ increases with an increase in the dependent variable. For the special case of an EVMDD, the EVBDD, we show an upper bound on the number of nodes needed to realize an l-restricted Mp-monotone increasing function. Experimental results show that all of the two-variable numeric functions considered in this paper can be converted into an l-restricted Mp-monotone increasing function with p = 1 or 3; thus, they can be compactly realized by EVBDDs. Since EVMDDs have shorter paths and smaller memory size than EVBDDs, EVMDDs can produce fast and compact numeric function generators (NFGs).
    Download PDF (392K)
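The edge-valued idea can be illustrated with a minimal EVBDD evaluator. The node layout below is hypothetical; the defining property is that every edge carries an additive weight and f(x) is the sum of the weights along the path selected by x.

```python
# Minimal edge-valued BDD (EVBDD) evaluation: every edge carries an
# additive weight, and the function value is the sum of the weights on
# the path from the root to the single terminal (reached as i == None).

def eval_evbdd(nodes, root, bits, root_weight=0):
    """nodes[i] = (var, (lo_child, lo_weight), (hi_child, hi_weight))."""
    total, i = root_weight, root
    while i is not None:
        var, (lo, w_lo), (hi, w_hi) = nodes[i]
        if bits[var]:
            total, i = total + w_hi, hi
        else:
            total, i = total + w_lo, lo
    return total

# f(x1, x0) = 2*x1 + x0, i.e. a 2-bit unsigned integer, as an EVBDD:
TWO_BIT = {
    0: (1, (1, 0), (1, 2)),        # branch on x1; taking the hi edge adds 2
    1: (0, (None, 0), (None, 1)),  # branch on x0; taking the hi edge adds 1
}
```

This additive decomposition along paths is what allows edge-valued diagrams to represent numeric functions far more compactly than enumerating their values.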
  • Kwang-Jow GAN, Dong-Shong LIANG, Yan-Wun CHEN
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2068-2072
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper demonstrates a novel multiple-valued logic (MVL) design using a three-peak negative differential resistance (NDR) circuit, composed of several Si-based metal-oxide-semiconductor field-effect transistor (MOS) and SiGe-based heterojunction bipolar transistor (HBT) devices. Specifically, this three-peak NDR circuit is biased by two switch-controlled current sources. Compared to the traditional MVL circuit made of resonant tunneling diodes (RTDs), this multiple-peak MOS-HBT-NDR circuit has two major advantages. One is that the circuit can be fully fabricated in a standard BiCMOS process, without the need for a molecular-beam epitaxy system. The other is that it provides more logic states than the RTD-based MVL design. In measurement, we obtained eight logic states at the output by switching the two current sources on and off in sequence.
    Download PDF (293K)
  • Motoi INABA, Koichi TANNO, Hiroki TAMURA, Okihiko ISHIZUKA
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2073-2079
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In this paper, optimization and verification of current-mode multiple-valued digit ORNS arithmetic circuits are presented. The multiple-valued digit ORNS is a redundant number system that uses multiple-valued digit values and realizes fully parallel calculation without any ripple carry propagation. First, the 4-bit addition and multiplication algorithms employing the multiple-valued digit ORNS are optimized through logic-level analyses. In the multiplier, rearranging the addition lines successfully reduces the maximum digit value from 49 to 29 and the number of serial modulo operations from 3 to 2. Next, circuit components such as a current mirror are verified using HSPICE. The proposed switched current mirror, which combines the functions of a current mirror and an analog switch, reduces the minimum operating voltage by about 0.13 V. Besides the ordinary strong-inversion region, the circuit components operated in the weak-inversion region show good simulation results with a unit current of 10 nA, bringing both lower power dissipation and stable operation at a lower supply voltage.
    Download PDF (679K)
  • Hirokatsu SHIRAHAMA, Takashi MATSUURA, Masanori NATSUI, Takahiro HANYU
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2080-2088
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    A multiple-valued current-mode (MVCM) circuit using current-flow control is proposed for a power-hungry sequential linear-array system. Whenever an operation is completed in the processing element (PE) at the present stage, every possible current source in the PE at the previous stage is cut off, which greatly reduces the power wasted by steady current flows during standby states. Completion of an operation is easily detected by an "operation monitor" that observes the input and output signals at the latches and generates a control signal immediately upon completion. Since data and control signals share the same wires in the proposed MVCM circuit, no additional wires are required for current-flow control. In fact, it is demonstrated that the power consumption of the MVCM circuit using the proposed method is reduced to 53 percent of that without current-source control.
    Download PDF (1422K)
  • Naoya ONIZAWA, Takahiro HANYU
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2089-2099
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper presents highly reliable multiple-valued one-phase signalling for an asynchronous on-chip communication link under process, supply-voltage, and temperature variations. A new multiple-valued dual-rail encoding, in which each code is represented by a minimum set of three values, makes it possible to perform asynchronous communication between modules with just two wires. Since an appropriate current level is individually assigned to each logic value, a sufficient dynamic range between adjacent current signals can be maintained in the proposed multiple-valued current-mode (MVCM) circuit, which improves robustness against process variation. Moreover, since supply-voltage and temperature variations in small-dimension circuit elements appear mainly as common-mode variation, a local reference voltage that tracks these variations can be adaptively generated to compensate for characteristic changes in the MVCM-circuit components. As a result, the proposed asynchronous on-chip communication link operates correctly over a supply-voltage range of 1.1 V to 1.4 V and a temperature range of −50°C to 75°C under 3σ process variation. In fact, HSPICE simulation in a 0.13-µm CMOS process demonstrates that the throughput of the proposed circuit is enhanced to 435% of that of a conventional 4-phase asynchronous communication circuit at comparable energy dissipation.
    Download PDF (1494K)
  • Liang-Bi CHEN, Jiun-Cheng JU, Chien-Chou WANG, Ing-Jer HUANG
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2100-2108
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Bus-based system-on-a-chip (SoC) design has become the major integration methodology for shortening SoC design time. The main challenge is how to verify on-chip bus protocols efficiently. Although traditional simulation-based bus protocol monitors can check whether bus signals obey the bus protocol, they still lack an efficient bus-protocol verification environment at the FPGA or chip level. To overcome this shortcoming, we propose a rule-based synthesizable AMBA AHB on-chip bus protocol checker, which contains 73 AHB on-chip bus protocol rules for checking AHB bus signal behaviors, together with two corresponding verification mechanisms, an error reference table (ERT) and a windowed trace buffer, to shorten verification time.
    Download PDF (1183K)
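The rule-based checking style can be sketched in software with one simplified AHB rule. The checker in the paper is synthesizable hardware with 73 rules; the rule encoding, signal subset, and report format below are illustrative only. The rule used here reflects AHB's transfer types: a SEQ transfer continues a burst, so it must never directly follow IDLE.

```python
# Toy sketch of rule-based bus-protocol checking: each rule is a
# predicate over two consecutive bus cycles, and violations are
# reported as (cycle, rule name) pairs, in the spirit of an error
# reference table. Only the HTRANS signal is modeled here.

IDLE, BUSY, NONSEQ, SEQ = 0, 1, 2, 3

def rule_seq_continues_burst(prev, curr):
    """A SEQ transfer continues a burst and may not follow IDLE."""
    return not (curr["HTRANS"] == SEQ and prev["HTRANS"] == IDLE)

def check(trace, rules):
    errors = []
    for cycle in range(1, len(trace)):
        for rule in rules:
            if not rule(trace[cycle - 1], trace[cycle]):
                errors.append((cycle, rule.__name__))
    return errors
```

For the trace IDLE, SEQ, NONSEQ, SEQ, the checker flags cycle 1 (a SEQ directly after IDLE) and accepts the legal SEQ at cycle 3, which follows a NONSEQ burst start.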
  • Yasushi YUMINAKA, Yasunori TAKAHASHI, Kenichi HENMI
    Type: PAPER
    Subject area: Multiple-Valued VLSI Technology
    2010 Volume E93.D Issue 8 Pages 2109-2116
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper presents a Pulse-Width Modulation (PWM) pre-emphasis technique that utilizes time-domain information processing to increase the data rate for a given interconnection bandwidth. The PWM pre-emphasis method does not change the pulse amplitude as conventional FIR pre-emphasis does, but instead exploits timing resolution. This fits well with recent CMOS technology trends toward higher switching speeds and lower supply voltages. We discuss multiple-valued data transmission based on time-domain pre-emphasis techniques, taking higher-order channel effects into consideration. We also propose a new data-dependent adaptive time-domain pre-emphasis technique to compensate for data-dependent jitter.
    Download PDF (1159K)
  • Naofumi HOMMA, Yuichi BABA, Atsushi MIYAMOTO, Takafumi AOKI
    Type: PAPER
    Subject area: Application of Multiple-Valued VLSI
    2010 Volume E93.D Issue 8 Pages 2117-2125
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes a constant-power adder based on multiple-valued logic and its application to cryptographic processors resistant to side-channel attacks. The proposed adder is implemented in Multiple-Valued Current-Mode Logic (MV-CML). The important feature of MV-CML is that power consumption can be constant regardless of input values, which makes it possible to prevent power-analysis attacks that exploit dependencies between power consumption and the intermediate values or operations of the executed cryptographic algorithm. In this paper, we focus on a multiple-valued binary carry-save adder based on the Positive-Digit (PD) number system and its application to RSA processors. The power characteristics of the proposed design are evaluated by HSPICE simulation using a 90 nm process technology. The results show that the proposed design achieves constant power consumption with lower performance overhead than the conventional binary design.
    Download PDF (978K)
  • Nobuaki OKADA, Michitaka KAMEYAMA
    Type: PAPER
    Subject area: Application of Multiple-Valued VLSI
    2010 Volume E93.D Issue 8 Pages 2126-2133
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    A fine-grain bit-serial multiple-valued reconfigurable VLSI based on a logic-in-control architecture is proposed for effective use of hardware resources. In the logic-in-control architecture, the control circuits can be merged with the arithmetic/logic circuits, where both are constructed from one or more logic blocks. To implement the control circuit, only one state of the state transition diagram is allocated to each logic block, which reduces the complexity of the interconnections between logic blocks. The fine-grain logic block is implemented using multiple-valued current-mode circuit technology. In the fine-grain logic block, an arbitrary 3-variable binary function can be programmed using one multiplexer and two universal literal circuits; three-variable binary functions are used to implement the control circuit. Moreover, the same hardware resources can be used to construct a bit-serial adder, because the full-adder sum and carry can be realized by programming the universal literal circuits. Therefore, the logic block can be effectively reconfigured for both arithmetic/logic and control circuits. It is shown that the hardware complexity of the control circuit in the proposed reconfigurable VLSI is reduced in comparison with control circuits based on typical sequential circuits in conventional FPGAs and previously reported fine-grain field-programmable VLSIs.
    Download PDF (791K)
  • Shota ISHIHARA, Noriaki IDOBATA, Masanori HARIYAMA, Michitaka KAMEYAMA
    Type: PAPER
    Subject area: Application of Multiple-Valued VLSI
    2010 Volume E93.D Issue 8 Pages 2134-2144
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Dynamically Programmable Gate Arrays (DPGAs) provide more area-efficient implementations than conventional Field-Programmable Gate Arrays (FPGAs). A typical DPGA architecture is the multi-context architecture; a DPGA based on it is the Multi-Context FPGA (MC-FPGA), which achieves fast switching between contexts. The problem with the conventional SRAM-based MC-FPGA is its large area and standby power dissipation, caused by the large number of configuration memory bits. Moreover, since SRAM is volatile, it is difficult to apply power gating to the SRAM-based MC-FPGA for standby-power reduction. This paper presents an area-efficient, nonvolatile multi-context switch block architecture for MC-FPGAs based on a ferroelectric-capacitor functional pass-gate that merges a multiple-valued threshold function with nonvolatile multiple-valued storage. A test chip for four contexts was fabricated in a 0.35 µm CMOS/0.60 µm ferroelectric-capacitor process. The transistor count of the proposed multi-context switch block is reduced to 63% of that of the SRAM-based one.
    Download PDF (2129K)
Regular Section
  • Chammika MANNAKKARA, Tomohiro YONEDA
    Type: PAPER
    Subject area: Computer System
    2010 Volume E93.D Issue 8 Pages 2145-2161
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    A new pipeline controller based on the Early Acknowledgement (EA) protocol is proposed for bundled-data asynchronous circuits. The EA protocol indicates acknowledgement on the falling edge of the acknowledgement signal, in contrast to the 4-phase protocol, which indicates it on the rising edge; thus, it can hide the overhead caused by the resetting period of the handshake cycle. Since our controller is designed under several timing constraints, we first analyze the timing constraints under which it works correctly and then discuss their appropriateness. The performance of the controller is compared, both analytically and experimentally, with those of two other pipeline controllers: a very high-speed 2-phase controller and an ordinary 4-phase controller. Our controller performs better than the 4-phase controller when the pipeline has processing elements. We obtained interesting results for a non-linear pipeline with a Conditional Branch (CB) operation: our controller performs slightly better even than the 2-phase controller for a pipeline with processing elements. Its advantage lies in the EA protocol, which employs return-to-zero control signals like the 4-phase protocol; hence, our controller for the CB operation is as simple in construction as the 4-phase controller. A 2-phase controller for the same operation needs a somewhat complicated mechanism to handle the non-return-to-zero control signals, and this results in a performance overhead.
    Download PDF (671K)
  • Sung-Rae LEE, Ser-Hoon LEE, Sun-Young HWANG
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 8 Pages 2162-2171
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper presents an efficient instruction scheduling algorithm that generates low-power code for embedded system applications. Reordering and recoding are applied concurrently for low-power code generation in the proposed algorithm: appropriate reordering of instruction sequences increases the efficiency of instruction recoding. The proposed algorithm constructs program code on a basic-block basis by selecting a code sequence from among schedules generated randomly and maintained by the system. By generating random schedules for each of the basic blocks constituting an application program, the algorithm constructs a histogram for each instruction field to estimate the figures of merit achievable by reordering instruction sequences. For further optimization, the system performs simulated annealing on the generated code. Experimental results for benchmark programs show that the code generated by the proposed algorithm consumes 37.2% less power on average than a previous algorithm that performs list scheduling prior to instruction recoding.
    Download PDF (1506K)
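The reordering half of the idea can be sketched generically: treat each instruction as a bit pattern and search for an order that minimizes bit toggles between consecutive encodings, a common proxy for instruction-bus switching power. The encodings, cost model, and annealing schedule below are illustrative; the paper's algorithm additionally recodes instruction fields and must preserve program semantics, while this sketch ignores dependencies.

```python
import math
import random

# Simulated-annealing sketch for low-power instruction reordering:
# minimize the total Hamming distance between consecutive instruction
# encodings (a proxy for instruction-bus switching activity).

def toggles(order, code):
    """Total bit transitions when instructions are emitted in this order."""
    return sum(bin(code[a] ^ code[b]).count("1")
               for a, b in zip(order, order[1:]))

def anneal(code, steps=20000, t0=4.0, seed=0):
    rng = random.Random(seed)
    order = list(range(len(code)))
    best = order[:]
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        i, j = rng.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]   # propose a swap
        delta = toggles(cand, code) - toggles(order, code)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            order = cand
            if toggles(order, code) < toggles(best, code):
                best = order[:]
    return best
```

For the four encodings 0b0000, 0b1111, 0b0001, 0b1110, the original order costs 11 toggles, while the annealed order reaches the optimum of 5 by placing similar encodings next to each other.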
  • Ngoc Hung PHAM, Viet Ha NGUYEN, Toshiaki AOKI, Takuya KATAYAMA
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 8 Pages 2172-2181
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Assume-guarantee verification has been recognized as a promising approach to verifying component-based software by model checking. The method is not only well suited to component-based software but also has the potential to mitigate the state-space explosion problem in model checking, because it allows us to decompose a verification target into components and model check each of them separately. In this method, an assumption represents the environment that a component needs in order to satisfy a property and that the rest of the system must satisfy. The number of states of the assumptions should be minimized, because it influences the computational cost of model checking. We therefore propose a method for generating minimal assumptions for the assume-guarantee verification of component-based software. The key idea is to search the spaces of candidate assumptions for minimal ones. The minimal assumptions generated by the proposed method can be used to recheck the whole system at much lower computational cost. We have implemented a tool for generating the minimal assumptions; experimental results are also presented and discussed.
    Download PDF (321K)
  • Takako NAKATANI, Shouzo HORI, Naoyasu UBAYASHI, Keiichi KATAMINE, Masa ...
    Type: PAPER
    Subject area: Software Engineering
    2010 Volume E93.D Issue 8 Pages 2182-2189
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Requirements changes sometimes cause a project to fail. Many projects now follow incremental development processes so that new requirements and requirements changes can be incorporated as soon as possible. These are called integrated requirements processes, which integrate requirements processes with other development processes. We have quantitatively and qualitatively investigated the requirements processes of a specific project from beginning to end. Our focus is to clarify the types of necessary requirements based on the components contained within a certain portion of the software architecture; each type reveals its typical requirements processes through its own rationale. The case study is a system for managing the orders and services of a restaurant. In this paper, we introduce the case and categorize its requirements processes based on the components of the system and the quality characteristics of ISO 9126. We identified seven categories of typical requirements processes to be managed and/or controlled, each revealing its typical requirements processes and their characteristics. This case study is our first step toward practical integrated requirements engineering.
    Download PDF (749K)
  • Ricardo MARTINHO, Dulce DOMINGOS, João VARAJÃO
    Type: PAPER
    Subject area: Software Engineering
    2010 Volume E93.D Issue 8 Pages 2190-2197
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Software processes and their corresponding models are dynamic entities that are often changed and evolved by skillful knowledge workers such as the members of a software development team. Consequently, process flexibility has been identified as one of the most important features to be supported by both Process Modelling Languages (PMLs) and the software tools that manage the processes. However, in everyday practice, most software team members do not want total flexibility. They prefer controlled flexibility, i.e., to learn and follow advice, previously modelled by a process engineer, on which elements of a software process they can change and how. Since process models are a preferred vehicle for sharing and communicating knowledge about software processes, the process engineer needs a PML that can express this controlled flexibility along with other process perspectives. To achieve this enhanced PML, we first need a sound core set of concepts and relationships that defines the knowledge domain associated with modelling controlled flexibility. In this paper we capture and represent this domain using Concept Maps (Cmaps), including diagrams and descriptions that elicit the relationships between the concepts involved. The proposed Cmaps can then be used as input for extending a PML with modelling constructs that express controlled flexibility within software processes. Process engineers can use these constructs to define, in a process model, advice on changes that can be made to the model itself or to related instances. Software team members can then consult this controlled-flexibility information within the process models and perform changes accordingly.
    Download PDF (688K)
  • Nobutaka SUZUKI
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 8 Pages 2198-2212
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    DTDs are continuously updated according to changes in the real world. Let t be an XML document valid against a DTD D, and suppose that D is updated by an update script s. In general, we cannot uniquely "infer" a transformation of t from s, i.e., we cannot uniquely determine the elements in t that should be deleted and/or the positions in t at which new elements should be inserted. In this paper, we consider inferring the K optimum transformations of t from s so that a user can find the most desirable transformation more easily. We first show that the problem of inferring the K optimum transformations of an XML document from an update script is NP-hard even if K = 1. Then, assuming that the update script is of length one, we present an algorithm that solves the problem in time polynomial in |D|, |t|, and K.
    Download PDF (985K)
  • Achmad BASUKI, Achmad Husni THAMRIN, Hitoshi ASAEDA, Jun MURAI
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 8 Pages 2213-2222
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper presents a method for monitoring a large multicast group that can follow the group's dynamics in real time while avoiding feedback implosion, by using probabilistic polling. In particular, it improves the probabilistic-polling approach by deriving a reference mean value, used as the control reference for the expected number of feedback messages, from the properties of a binomial estimation model. As a result, our method adaptively changes its estimation parameters depending on the feedback from receivers, achieving fast estimation with high accuracy while preventing feedback implosion. Our experimental implementation and evaluation on PlanetLab showed that the proposed method effectively controls the amount of feedback and accurately estimates the size of a dynamic multicast group.
    Download PDF (709K)
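The binomial estimator at the heart of probabilistic polling can be sketched as follows. The paper's contribution is the adaptive control of the polling probability around a reference mean; the function names and numbers here are illustrative.

```python
import random

# Probabilistic-polling sketch: each of N receivers replies with
# probability p, so the reply count r follows Binomial(N, p) and the
# group size is estimated as r / p. Choosing p = target_mean / N_guess
# keeps E[r] near a fixed reference mean, avoiding feedback implosion.

def poll(n_receivers, p, rng):
    """Simulate one polling round; returns the number of replies."""
    return sum(rng.random() < p for _ in range(n_receivers))

def estimate_size(n_receivers, p, rng):
    """Unbiased estimate of group size from one round: E[r/p] = N."""
    return poll(n_receivers, p, rng) / p
```

With N = 10000 and p = 0.05, only about 500 replies are generated per round, yet r/p lands close to 10000; the relative error scales as sqrt((1-p)/(N*p)), roughly 4% here.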
  • Jun LIU, Yinhe HAN, Xiaowei LI
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 8 Pages 2223-2232
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Test data volume and test power are two major concerns in testing modern large circuits. Recently, selective encoding of scan slices was proposed to compress test data. Unlike many other compression techniques, which encode all the bits, this technique encodes only the target symbols, by specifying a single-bit index and copying group data. In this paper, we propose an extended selective encoding with two new techniques: a flexible grouping strategy, and an X-bit exploitation and filling strategy. The flexible grouping strategy decreases the number of groups that need to be encoded and improves the test-data compression ratio. The X-bit exploitation and filling strategy exploits the large number of don't-care bits to reduce test power with no loss of compression ratio. Experimental results show that, compared to selective encoding, the proposed technique needs less test-data storage and reduces average weighted switching activity by 25.6% and peak weighted switching activity by 9.68% during scan shift.
    Download PDF (567K)
  • Tae-Heon YANG, Sang-Youn KIM, Wayne J. BOOK, Dong-Soo KWON
    Type: PAPER
    Subject area: Human-computer Interaction
    2010 Volume E93.D Issue 8 Pages 2233-2242
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    For tactile feedback in mobile devices, the size and power consumption of the tactile module are the dominant factors, so vibration motors have been widely used in mobile devices to provide tactile sensation. However, a vibration motor cannot generate a wide variety of tactile sensations, because its vibration magnitude and frequency are coupled. To generate a wide variety of tactile sensations, this paper presents a new tactile actuator that incorporates a solenoid, a permanent magnet, and an elastic spring; its feedback force is generated by elastic and electromagnetic forces. This paper also proposes a tiny tactile module built from the proposed actuators. To construct it, the contactor gap of the module is minimized without decreasing the contactor stroke, the output force, or the working frequency: the elastic springs of the actuators are separated into several layers to minimize the contactor gap without degrading the module's performance. Experiments were conducted to investigate each contactor's output force as well as the frequency response of the proposed tactile module. Each contactor of the tactile module can generate enough output force to stimulate human mechanoreceptors, and since the contactors are actuated over a wide frequency range, the module can generate various tactile sensations. Moreover, the proposed tactile module is small enough to be embedded in a mobile device, and its power consumption is low. Therefore, the proposed tactile actuator and module have good potential for many interactive mobile devices.
    Download PDF (830K)
  • Wei CHEN, Gang LIU, Jun GUO, Shinichiro OMACHI, Masako OMACHI, Yujing ...
    Type: PAPER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 8 Pages 2243-2251
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In speech recognition, confidence annotation uses a single confidence feature or a combination of different features for classification. These confidence features are usually extracted from decoding information. However, it has been reported that about 30% of the knowledge involved in human speech understanding derives from high-level information. Thus, how to extract a high-level confidence feature statistically independent of decoding information is worth investigating in speech recognition. In this paper, a novel confidence-feature extraction algorithm based on latent topic similarity is proposed. The topic distribution of each word and of its context in a recognition result is first obtained using the latent Dirichlet allocation (LDA) topic model, and the proposed word confidence feature is then extracted from the similarity between these two topic distributions. Experiments show that the proposed feature adds a complementary source of information to existing confidence features and can effectively improve the performance of confidence annotation when combined with features from decoding information.
    Download PDF (440K)
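    The similarity step of this confidence feature can be sketched in a few lines. This is a minimal illustration, assuming the word and context topic distributions have already been inferred with an LDA model, and assuming cosine similarity as the measure (the abstract does not commit to a specific one):

```python
def cosine_similarity(p, q):
    # Cosine similarity between two topic-probability vectors.
    dot = sum(a * b for a, b in zip(p, q))
    norm = (sum(a * a for a in p) ** 0.5) * (sum(b * b for b in q) ** 0.5)
    return dot / norm

def topic_confidence(word_topics, context_topics):
    # Confidence feature: how well the word's topic mix agrees with the
    # topic mix of its surrounding recognition context.
    return cosine_similarity(word_topics, context_topics)

# A correctly recognized word usually shares topics with its context;
# a misrecognized word tends not to.
on_topic = topic_confidence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1])
off_topic = topic_confidence([0.1, 0.1, 0.8], [0.6, 0.3, 0.1])
```

    Because the feature depends only on topic distributions, not on lattice or acoustic scores, it is statistically independent of decoding-based confidence features, which is the complementarity the paper exploits.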
  • Sanaz SEYEDIN, Seyed Mohammad AHADI
    Type: PAPER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 8 Pages 2252-2261
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper presents a novel noise-robust feature extraction method for speech recognition. It is based on making the Minimum Variance Distortionless Response (MVDR) power spectrum estimation method robust against noise. This robustness is obtained by modifying the distortionless constraint of the MVDR spectral estimation method, weighting the sub-band power spectrum values according to the sub-band signal-to-noise ratios. The optimum weighting is obtained from experimental findings in psychoacoustics. According to our experiments, this technique successfully modifies the power spectrum of speech signals and makes it robust against noise. When evaluated for recognition on the Aurora 2 task, the method outperformed both the baseline MFCC features and the MVDR-based features under different noisy conditions.
    Download PDF (586K)
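    The sub-band weighting idea can be illustrated with a small sketch. The actual weights in the paper are derived from psychoacoustic findings; the sigmoid curve and the `alpha` parameter below are purely hypothetical stand-ins:

```python
import math

def snr_weights(snr_db, alpha=0.5):
    # Hypothetical sigmoid weighting: high-SNR sub-bands are trusted,
    # low-SNR sub-bands are attenuated.  The paper's weights come from
    # psychoacoustic findings; this curve is only an illustration.
    return [1.0 / (1.0 + math.exp(-alpha * s)) for s in snr_db]

def weighted_power_spectrum(power, snr_db):
    # Scale each sub-band power value by its SNR-based weight before
    # it enters the (modified) MVDR spectral estimate.
    return [p * w for p, w in zip(power, snr_weights(snr_db))]

clean_band = weighted_power_spectrum([4.0], [20.0])[0]   # barely changed
noisy_band = weighted_power_spectrum([4.0], [-20.0])[0]  # strongly attenuated
```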
  • Kyung-Yong KIM, Gwang-Hoon PARK, Doug-Young SUH
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 8 Pages 2262-2272
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes an efficient adaptive depth-map coding scheme for generating virtual-view images in 3D video. Virtual-view images can be generated by view interpolation based on the decoded depth-map of the image. The proposed scheme combines a new Gray-coding-based bit-plane coding method, for efficiently coding the depth-map in object-boundary areas, with the conventional DCT-based coding scheme (H.264/AVC), for efficiently coding the interior areas of objects and the background depth-map. Simulation results show that, compared with H.264/AVC, the proposed scheme achieves BD-rate savings of 6.77%-10.28% and BD-PSNR gains of 0.42-0.68dB. It also improves the subjective picture quality of virtual-view images synthesized from the decoded depth-maps.
    Download PDF (1806K)
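    The benefit of Gray coding for bit-plane coding of depth-maps can be seen in a short sketch: adjacent depth levels differ in exactly one bit after Gray coding, so smooth depth ramps toggle far fewer bit-planes than in plain binary. A minimal illustration (not the paper's codec):

```python
def to_gray(v):
    # Binary-reflected Gray code: adjacent values differ in exactly one bit.
    return v ^ (v >> 1)

def bit_planes(values, bits=8):
    # Split Gray-coded depth values into bit-planes, MSB plane first.
    gray = [to_gray(v) for v in values]
    return [[(g >> b) & 1 for g in gray] for b in range(bits - 1, -1, -1)]

# Depths 127 and 128 differ in all 8 bits in plain binary (0x7F vs 0x80),
# but in a single bit after Gray coding.
planes = bit_planes([127, 128])
```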
  • Jing-Xin WANG, Alvin W.Y. SU
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 8 Pages 2273-2280
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Scanning quantized transform coefficients is an important tool in video coding; the MPEG-4 video coder, for example, adopts three different scans to achieve better coding efficiency. This paper proposes an adaptive zero-coefficient distribution scan for inter-block coding. The proposed method attempts to improve H.264/AVC zero-coefficient coding by modifying the scan operation. Since the proposed scan changes the zero-coefficient distribution, new VLC tables for the syntax elements used in context-adaptive variable length coding (CAVLC) are also provided. The savings in bit-rate range from 2.2% to 5.1% in high bit-rate cases, depending on the test sequence.
    Download PDF (306K)
  • Kong-Joo LEE, Jee-Eun KIM
    Type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 8 Pages 2281-2290
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    The proposed automated scoring system for English writing tests provides test-takers with an assessment result, including a score and diagnostic feedback, without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax, and content similarity. The scoring model adopts a statistical approach, a regression tree. In general, a scoring model calculates a score from the count and types of automatically detected errors; accordingly, a system that detects errors more accurately also scores tests more accurately. The accuracy of error detection, however, cannot be fully guaranteed, for reasons such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique similar to the term weighting widely used in information retrieval. The technique is applied to judge the reliability of the errors detected by the system. The score calculated with error weighting proves more accurate than the score without it.
    Download PDF (436K)
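    The error-weighting idea, by analogy with term weighting, can be sketched as follows. The precision-based weights and the development-set counts below are hypothetical illustrations; the paper's actual weighting scheme may differ:

```python
def error_weights(detected, confirmed):
    # Hypothetical reliability weighting: the detector's precision per
    # error type, measured on an annotated development set.  Error types
    # the detector often reports falsely get low weight.
    return {e: confirmed[e] / detected[e] for e in detected}

def weighted_error_score(errors, weights):
    # Sum of the weights of the detected errors; input to the scoring model.
    return sum(weights[e] for e in errors)

weights = error_weights({"spelling": 100, "syntax": 80},
                        {"spelling": 95, "syntax": 40})
# A detected syntax error now counts about half as much as a spelling
# error, since the detector is 50% reliable on syntax but 95% on spelling.
score = weighted_error_score(["spelling", "syntax"], weights)
```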
  • Hyunjin PARK, Alfred HERO, Peyton BLAND, Marc KESSLER, Jongbum SEO, Ch ...
    Type: PAPER
    Subject area: Biological Engineering
    2010 Volume E93.D Issue 8 Pages 2291-2301
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    A good abdominal probabilistic atlas can provide important information to guide segmentation and registration applications in the abdomen. Here we build and test probabilistic atlases using 24 abdominal CT scans with available expert manual segmentations. Atlases are built by picking a target and mapping other training scans onto that target and then summing the results into one probabilistic atlas. We improve our previous abdominal atlas by 1) choosing a least biased target as determined by a statistical tool, i.e. multidimensional scaling operating on bending energy, 2) using a better set of control points to model the deformation, and 3) using higher information content CT scans with visible internal liver structures. One atlas is built in the least biased target space and two atlases are built in other target spaces for performance comparisons. The value of an atlas is assessed based on the resulting segmentations; whichever atlas yields the best segmentation performance is considered the better atlas. We consider two segmentation methods of abdominal volumes after registration with the probabilistic atlas: 1) simple segmentation by atlas thresholding and 2) application of a Bayesian maximum a posteriori method. Using jackknifing we measure the atlas-augmented segmentation performance with respect to manual expert segmentation and show that the atlas built in the least biased target space yields better segmentation performance than atlases built in other target spaces.
    Download PDF (520K)
  • In Hwan DOH, Myoung Sub SHIM, Eunsam KIM, Jongmoo CHOI, Donghee LEE, S ...
    Type: LETTER
    Subject area: Software System
    2010 Volume E93.D Issue 8 Pages 2302-2305
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    Because Flash storage, now the dominant portable storage medium, is detachable, the integrity of the data stored on it is an important issue. This study considers the performance of the Flash Translation Layer (FTL) schemes embedded in Flash storage in conjunction with the behavior of file systems that pursue high data integrity. To assure the highest data integrity, such file systems synchronously write all file data to storage, generating hot write references. In this study, we concentrate on the effect of these hot write references on Flash storage, and consider how absorbing them in a nonvolatile write cache affects the performance of the FTL schemes. To do so, we quantify the performance of typical FTL schemes through experiments in a real system environment, using a realistic digital camera workload that contains hot write references. The results show that, for this workload, FTL performance does not conform to previously reported results. We also conclude that absorbing the hot write references in a nonvolatile write cache dramatically reduces the impact of the underlying FTL scheme on the performance of Flash storage.
    Download PDF (196K)
  • Min Soo KIM, Ju Wan KIM, Myoung Ho KIM
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 8 Pages 2306-2310
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    There has been much interest in spatial queries that acquire sensor readings from sensor nodes inside a specified geographical area of interest. A centralized approach performs the spatial query at a server after acquiring all sensor readings, but it incurs a high wireless transmission cost because it accesses every sensor node. Various in-network spatial search methods have therefore been proposed that focus on reducing this cost. However, the in-network methods sometimes incur unnecessary wireless transmissions because of dead space: regions that are spatially indexed but contain no real data. In this paper, we propose a hybrid spatial query processing algorithm that removes these unnecessary wireless transmissions. The main idea is to determine the results of a spatial query at the server in advance and use them to eliminate the unnecessary transmissions in the sensor network. We compare our algorithm with the in-network method through several experiments and clarify its remarkable features.
    Download PDF (938K)
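    The server-side step of the hybrid approach can be sketched as follows, assuming the server knows each node's coordinates; the rectangle test and node names below are a minimal hypothetical illustration:

```python
def nodes_in_region(node_positions, region):
    # Server-side pruning: with node coordinates known at the server,
    # determine in advance which sensor nodes lie inside the query
    # rectangle, so the sensor network never visits dead space.
    (x1, y1, x2, y2) = region
    return [nid for nid, (x, y) in node_positions.items()
            if x1 <= x <= x2 and y1 <= y <= y2]

positions = {"n1": (1, 1), "n2": (5, 5), "n3": (9, 2)}
targets = nodes_in_region(positions, (0, 0, 6, 6))  # only n1 and n2 queried
```

    Only the nodes in `targets` need to be contacted over the wireless network, which is where the transmission savings over a purely in-network traversal come from.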
  • Eun-Jun YOON, Kee-Young YOO
    Type: LETTER
    Subject area: Information Network
    2010 Volume E93.D Issue 8 Pages 2311-2315
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This letter proposes a robust biometric authenticated key agreement (BAKA) protocol for secure tokens that provides strong security and minimizes the computation cost of each participant. Compared with other related protocols, the proposed BAKA protocol is not only secure against well-known cryptographic attacks but also satisfies various functionality and performance requirements.
    Download PDF (239K)
  • Miyuki KOSHIMURA, Hidetomo NABESHIMA, Hiroshi FUJITA, Ryuzo HASEGAWA
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 8 Pages 2316-2318
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    This paper tackles open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We prove that the best known upper bounds, 678 for ABZ9 and 884 for YN1, are indeed optimal. We also improve the upper bound of YN2 and the lower bounds of ABZ8, YN2, YN3, and YN4.
    Download PDF (69K)
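    The decision question behind the SAT formulation, "does a schedule with makespan at most k exist?", can be illustrated on a toy instance. The sketch below answers it by exhaustive search rather than the CNF encoding and SAT solver the paper uses; it only shows how proving bound k satisfiable and bound k-1 unsatisfiable establishes optimality:

```python
from itertools import product

def feasible(jobs, starts):
    # jobs: {job: [(machine, duration), ...]}; starts: {(job, k): start_time}.
    # Check operation order within each job and no overlap on any machine.
    busy = {}
    for j, ops in jobs.items():
        for k, (m, d) in enumerate(ops):
            t = starts[(j, k)]
            if k > 0 and t < starts[(j, k - 1)] + ops[k - 1][1]:
                return False        # violates precedence within job j
            busy.setdefault(m, []).append((t, t + d))
    for intervals in busy.values():
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False        # two operations overlap on one machine
    return True

def exists_schedule(jobs, bound):
    # Decision problem "makespan <= bound" -- the question each SAT call
    # answers -- solved here by brute force over integer start times.
    ops = [(j, k) for j, o in jobs.items() for k in range(len(o))]
    for ts in product(range(bound + 1), repeat=len(ops)):
        starts = dict(zip(ops, ts))
        if all(starts[(j, k)] + jobs[j][k][1] <= bound for (j, k) in ops) \
                and feasible(jobs, starts):
            return True
    return False

# Two jobs, two machines, all durations 2; the optimal makespan is 4:
# bound 4 is satisfiable, bound 3 is not.
jobs = {"J1": [(0, 2), (1, 2)], "J2": [(1, 2), (0, 2)]}
```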
  • Chang Sik SON, Yoon-Nyun KIM, Kyung-Ri PARK, Hee-Joon PARK
    Type: LETTER
    Subject area: Pattern Recognition
    2010 Volume E93.D Issue 8 Pages 2319-2323
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    A scheme is proposed for designing a hierarchical fuzzy classification system whose number of fuzzy partitions varies with the statistical characteristics of the data. To minimize the number of misclassified patterns in intermediate layers, a method of fuzzy partitioning from the defuzzified outputs of previous layers is also presented. The effectiveness of the proposed scheme is demonstrated by comparing results on five datasets from the UCI Machine Learning Repository.
    Download PDF (723K)
  • Xia MAO, Lijiang CHEN
    Type: LETTER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 8 Pages 2324-2326
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose a new method that employs two novel features, correlation density (Cd) and fractal dimension (Fd), to recognize the emotional states contained in speech. The former, obtained from a set of parametric filters, reflects both the broad frequency components and the fine structure of the lower frequency components, contributed by unvoiced phones and voiced phones, respectively; the latter indicates the non-linearity and self-similarity of a speech signal. Comparative experiments based on Hidden Markov Model and K-Nearest-Neighbor methods are carried out. The results show that Cd and Fd are much more closely related to emotional expression than the commonly used features.
    Download PDF (76K)
  • Sung Soo KIM, Chang Woo HAN, Nam Soo KIM
    Type: LETTER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 8 Pages 2327-2330
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In this letter, we present useful features for assessing pronunciation prominence and propose a classification technique for prominence detection. A set of phone-specific features is extracted based on a forced alignment of the test pronunciation provided by a speech recognition system. These features are then applied to traditional classifiers such as the support vector machine (SVM), artificial neural network (ANN), and adaptive boosting (AdaBoost) to detect the place of prominence.
    Download PDF (147K)
  • Chang Woo HAN, Shin Jae KANG, Nam Soo KIM
    Type: LETTER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 8 Pages 2331-2335
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    In this letter, we propose a novel approach to estimating three kinds of phone mismatch penalty matrices for two-stage keyword spotting. Given the output of a phone recognizer, a specific keyword is detected by matching the recognized phone sequence against the keyword's phone sequence using the proposed penalty matrices. The penalty matrices associated with substitution, insertion, and deletion errors are estimated from the training data through deliberate error generation. The proposed approach shows a significant improvement on a Korean continuous speech recognition task.
    Download PDF (160K)
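    Matching with per-phone penalty matrices amounts to a weighted edit distance. The sketch below assumes the three penalty tables have already been estimated; the toy phone inventory and penalty values are hypothetical:

```python
def penalized_distance(hyp, ref, sub, ins, dele):
    # Weighted edit distance between a recognized phone sequence (hyp)
    # and a keyword's phone sequence (ref).  sub[a][b], ins[a], dele[a]
    # are per-phone penalties for substitution, insertion, and deletion.
    n, m = len(hyp), len(ref)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + ins[hyp[i - 1]]
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + dele[ref[j - 1]]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if hyp[i - 1] == ref[j - 1] else sub[hyp[i - 1]][ref[j - 1]]
            d[i][j] = min(d[i - 1][j - 1] + match,
                          d[i - 1][j] + ins[hyp[i - 1]],
                          d[i][j - 1] + dele[ref[j - 1]])
    return d[n][m]

# Toy penalties: "b" and "p" are acoustically confusable, so substituting
# one for the other is cheap; every other edit costs 1.0.
sub = {p: {q: (0.2 if {p, q} == {"b", "p"} else 1.0) for q in "abcp"} for p in "abcp"}
ins = {p: 1.0 for p in "abcp"}
dele = {p: 1.0 for p in "abcp"}
score = penalized_distance("abc", "apc", sub, ins, dele)
```

    A low score for a keyword means the recognized phones match it well once likely recognition errors are discounted, which is what lets the second stage accept keywords despite phone recognizer mistakes.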
  • Do-Gil LEE, Gumwon HONG, Seok Kee LEE, Hae-Chang RIM
    Type: LETTER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 8 Pages 2336-2338
    Published: August 01, 2010
    Released: August 01, 2010
    JOURNALS FREE ACCESS
    The construction of annotated corpora requires considerable manual effort. This paper presents a pragmatic method to minimize human intervention in the construction of a Korean part-of-speech (POS) tagged corpus. Instead of focusing on improving the performance of conventional automatic POS taggers, we devise a discriminative POS tagger that can selectively produce either a single analysis or multiple analyses, depending on the tagging reliability. The proposed approach uses two decision rules to judge tagging reliability. Experimental results show that the approach can effectively control the quality of the corpus and the amount of manual annotation through the threshold value of the rules.
    Download PDF (127K)
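    The selective single-or-multiple output can be sketched with one plausible reliability rule. The probability-ratio test and threshold below are hypothetical illustrations, not the paper's two decision rules:

```python
def selective_tags(tag_probs, ratio_threshold=2.0):
    # Emit a single POS analysis when the best tag beats the runner-up
    # by a clear margin; otherwise return several candidates so a human
    # annotator can choose.  Threshold and rule are hypothetical.
    ranked = sorted(tag_probs.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (second, p2) = ranked[0], ranked[1]
    if p1 >= ratio_threshold * p2:
        return [best]                      # reliable: automatic annotation
    return [t for t, _ in ranked[:2]]      # unreliable: defer to a human

confident = selective_tags({"NOUN": 0.8, "VERB": 0.15, "ADJ": 0.05})
uncertain = selective_tags({"NOUN": 0.45, "VERB": 0.40, "ADJ": 0.15})
```

    Raising the threshold sends more words to human annotators (higher corpus quality, more effort); lowering it automates more of the tagging, which is the trade-off the paper controls.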