IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E97.D, Issue 9
Showing 1-46 articles out of 46 articles from the selected issue
Special Section on Multiple-Valued Logic and VLSI Computing
  • Takao WAHO
    2014 Volume E97.D Issue 9 Pages 2217
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Download PDF (73K)
  • Yutaka HATA, Hiroshi NAKAJIMA
    Type: INVITED PAPER
    2014 Volume E97.D Issue 9 Pages 2218-2225
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper surveys intelligent computational techniques in medical and health care systems. First, we briefly describe diagnostic techniques in medical image processing. Next, we demonstrate two ultrasonic surgery support systems, for orthopedic and rectal cancer surgeons, in which intelligent computational techniques play a primary role. Third, computational techniques for human health care systems are introduced. Here the goal is usually not clinical treatment but home use that raises awareness of one's health; a simple ECG and respiration meter are introduced, together with a mat sheet that detects heart rate and respiration. Finally, a medical big-data application is introduced: body-weight prediction based on an autoregressive model. We thus show that intelligent computing is effective and essential in modern medical and health care systems.
    Download PDF (1949K)
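The closing body-weight example can be sketched with an ordinary least-squares autoregressive fit. The sketch below is illustrative only (the paper's model order and data are not given here): a hypothetical AR(2) predictor fitted to toy daily weights.

```python
import numpy as np

def fit_ar(series, p=2):
    """Fit AR(p) coefficients by least squares: x_t ~ a_1*x_{t-1} + ... + a_p*x_{t-p}."""
    X = np.array([series[i - p:i][::-1] for i in range(p, len(series))])
    y = np.array(series[p:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step-ahead prediction from the most recent p observations."""
    p = len(coeffs)
    recent = np.array(series[-p:][::-1])  # most recent value first
    return float(coeffs @ recent)

weights = [70.0, 70.2, 70.4, 70.6, 70.8, 71.0]  # toy daily weights in kg
a = fit_ar(weights, p=2)
print(round(predict_next(weights, a), 2))  # → 71.2: an AR(2) fit reproduces a linear trend
```

An exact linear trend satisfies x_t = 2x_{t-1} - x_{t-2}, so the fitted AR(2) extrapolates it perfectly; real weight series would of course be noisier.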
  • Hiromitsu KIMURA, Zhiyong ZHONG, Yuta MIZUOCHI, Norihiro KINOUCHI, Yos ...
    Type: INVITED PAPER
    2014 Volume E97.D Issue 9 Pages 2226-2233
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A ferroelectric-based (FE-based) non-volatile logic is proposed for low-power LSI. Standby currents in a logic circuit can be cut off by using FE-based non-volatile flip-flops (NVFFs), reducing the standby power to zero. Since the FE capacitor is accessed only when the power is turned on or off, the performance of the NVFF during logic operation is almost the same as that of a conventional flip-flop (FF). The use of complementarily stored data in coupled FE capacitors realizes a wide read-voltage margin, which guarantees 10-year retention at 85 degrees Celsius under operation below 1.5V. A low-supply-voltage and electrostatic-discharge (ESD) detection technique prevents data destruction caused by illegal access to the FE capacitor during the standby state. Applying the proposed circuitry to a CPU, the write and read operations for all FE capacitors in 1.6k-bit NVFFs are performed within 7µs and 3µs with access energies of 23.1nJ and 8.1nJ, respectively, using a 130nm CMOS process with Pb(Zr,Ti)O3 (PZT) thin films.
    Download PDF (4842K)
  • Shinobu NAGAYAMA, Tsutomu SASAO, Jon T. BUTLER, Mitchell A. THORNTON, ...
    Type: PAPER
    Subject area: Logic Design
    2014 Volume E97.D Issue 9 Pages 2234-2242
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In the optimization of decision diagrams, variable reordering approaches are often used to minimize the number of nodes. However, such approaches are less effective for analysis of multi-state systems given by monotone structure functions. Thus, in this paper, we propose algorithms to minimize the number of edges in an edge-valued multi-valued decision diagram (EVMDD) for fast analysis of multi-state systems. The proposed algorithms minimize the number of edges by grouping multi-valued variables into larger-valued variables. By grouping multi-valued variables, we can reduce the number of nodes as well. To show the effectiveness of the proposed algorithms, we compare the proposed algorithms with conventional optimization algorithms based on a variable reordering approach. Experimental results show that the proposed algorithms reduce the number of edges by up to 15% and the number of nodes by up to 47%, compared to the conventional ones. This results in a speed-up of the analysis of multi-state systems by about three times.
    Download PDF (915K)
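The grouping step described above can be illustrated in miniature: combining two multi-valued variables with domains of size m1 and m2 into one variable with domain m1·m2 leaves the represented function unchanged. A schematic sketch of the idea, not the authors' minimization algorithm:

```python
def group_vars(f, m1, m2):
    """Turn f(x1, x2) with x1 in {0..m1-1}, x2 in {0..m2-1} into a
    single-variable function g(X) with X in {0..m1*m2-1}, X = x1*m2 + x2."""
    return lambda X: f(X // m2, X % m2)

# toy multi-valued function: f(x1, x2) = x1 + 2*x2, x1 in {0,1,2}, x2 in {0,1}
f = lambda x1, x2: x1 + 2 * x2
g = group_vars(f, 3, 2)

# the grouped variable enumerates exactly the same function values
assert all(g(x1 * 2 + x2) == f(x1, x2) for x1 in range(3) for x2 in range(2))
```

In a decision diagram, the two levels for x1 and x2 collapse into one level for X, which is where the node and edge reductions come from.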
  • Hiroki NAKAHARA, Tsutomu SASAO, Munehiro MATSUURA
    Type: PAPER
    Subject area: Logic Design
    2014 Volume E97.D Issue 9 Pages 2243-2252
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A Decision Diagram Machine (DDM) is a special-purpose processor with dedicated instructions for evaluating a decision diagram. Since the DDM uses only a limited number of instructions, it is faster than a general-purpose microprocessor (MPU), and its architecture is much simpler. This paper presents a packet classifier using a parallel EVMDD (k) machine. To reduce computation time and code size, a set of packet-classification rules is first partitioned into groups, which the parallel EVMDD (k) machine then evaluates. To further speed up the standard EVMDD (k) machine, we propose a prefetching EVMDD (k) machine that reads both the index and the jump address at the same time; it is 2.4 times faster than the standard one using the same memory size. We implemented a parallel prefetching EVMDD (k) machine consisting of 30 machines on an FPGA and compared it with an Intel Core i5 microprocessor running at 1.7GHz. Our parallel machine is 15.1-77.5 times faster than the Core i5 while requiring only 8.1-58.5 percent of its memory.
    Download PDF (2224K)
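An EVMDD represents a function by summing integer values attached to the edges along one root-to-terminal path, which is what a DDM's fetch-and-jump loop evaluates. A minimal interpreter in that spirit (the node layout is hypothetical, not the paper's instruction format):

```python
def eval_evmdd(nodes, root, assignment):
    """Evaluate an edge-valued MDD: accumulate the edge values along the
    path selected by the input assignment. nodes[i] = (var, edges) where
    edges[v] = (edge_value, next_node); next_node None means terminal."""
    acc, cur = 0, root
    while cur is not None:
        var, edges = nodes[cur]
        value, nxt = edges[assignment[var]]
        acc += value
        cur = nxt
    return acc

# f(x0, x1) = x0 + 3*x1 with x0 in {0,1,2}, x1 in {0,1}
nodes = {
    0: ("x0", [(0, 1), (1, 1), (2, 1)]),  # edge value contributes x0
    1: ("x1", [(0, None), (3, None)]),    # edge value contributes 3*x1
}
assert eval_evmdd(nodes, 0, {"x0": 2, "x1": 1}) == 5
```

The prefetching variant in the paper fetches the index and the jump address of the next node in the same cycle; here that would correspond to reading `edges[...]` and `nxt` together.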
  • Takashi HIRAYAMA, Hayato SUGAWARA, Katsuhisa YAMANAKA, Yasuaki NISHITA ...
    Type: PAPER
    Subject area: Reversible/Quantum Computing
    2014 Volume E97.D Issue 9 Pages 2253-2261
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    We present a new lower bound on the number of gates in reversible logic circuits that represent a given reversible logic function, in which the circuits are assumed to consist of general Toffoli gates and have no redundant input/output lines. We make a theoretical comparison of lower bounds, and prove that the proposed bound is better than the previous one. Moreover, experimental results for lower bounds on randomly-generated reversible logic functions and reversible benchmarks are given. The results also demonstrate that the proposed lower bound is better than the former one.
    Download PDF (600K)
  • Martin LUKAC, Dipal SHAH, Marek PERKOWSKI, Michitaka KAMEYAMA
    Type: PAPER
    Subject area: Reversible/Quantum Computing
    2014 Volume E97.D Issue 9 Pages 2262-2269
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Reversible logic is becoming increasingly popular as novel technologies such as quantum computing, low-power CMOS circuit design, and quantum optical computing become more realistic. In quantum computing, reversible computing is the main avenue for realizing and designing classical functions and circuits. We present a new approach to the synthesis of reversible circuits using Kronecker Functional Lattice Diagrams (KFLD). Unlike many contemporary algorithms that synthesize reversible functions with n×n Toffoli gates, our method uses 3×3 Toffoli gates, Feynman gates, and NOT gates. This reduces the quantum cost of the designed circuit but adds ancilla bits. The resulting circuits are always regular in a 4-neighbor model, and all connections are predictable. Consequently, the resulting circuits can be directly mapped onto a quantum device such as a quantum FPGA [14]. This is a significant advantage of our method, as it allows us to design optimal circuits for a given quantum technology.
    Download PDF (1051K)
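The gate library named above (3×3 Toffoli, Feynman, NOT) has a simple classical action on bits, which is enough to see why the circuits are reversible: each gate is its own inverse. A sketch of the gate semantics only, not the synthesis method:

```python
def toffoli(a, b, c):
    """3x3 Toffoli (CCNOT): target c flips iff both controls are 1."""
    return a, b, c ^ (a & b)

def feynman(a, b):
    """Feynman gate (CNOT): target b flips iff control a is 1."""
    return a, a ^ b

def not_gate(a):
    return a ^ 1

# reversibility: applying any of these gates twice restores the input
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)
assert feynman(*feynman(1, 0)) == (1, 0)
assert not_gate(not_gate(1)) == 1
```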
  • Kotaro OKAMOTO, Naofumi HOMMA, Takafumi AOKI
    Type: PAPER
    Subject area: VLSI Architecture
    2014 Volume E97.D Issue 9 Pages 2270-2277
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper presents a graph-based approach to designing arithmetic circuits over Galois fields (GFs) using normal-basis representations. The proposed method is based on a graph-based circuit description called the Galois-field Arithmetic Circuit Graph (GF-ACG). First, we extend the GF-ACG representation to describe GFs defined by a normal basis in addition to a polynomial basis. We then apply the extended design method to Massey-Omura parallel multipliers, which are well-known normal-basis multipliers. We present a formal description of the multipliers in a hierarchical manner and show that verification time can be greatly reduced in comparison with conventional techniques. In addition, we design GF exponentiation circuits consisting of Massey-Omura parallel multipliers and an inversion circuit over the composite field GF(((2²)²)²) in order to demonstrate the advantages of normal-basis circuits over polynomial-basis ones.
    Download PDF (1266K)
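One well-known advantage of a normal basis, exploited by Massey-Omura multipliers, is that squaring is just a cyclic shift of the coordinate vector (the Frobenius map x → x², essentially free in hardware). A sketch:

```python
def nb_square(coords):
    """Squaring in GF(2^m) under a normal basis {b, b^2, b^4, ..., b^(2^(m-1))}
    is a cyclic shift of the coordinate vector."""
    return coords[-1:] + coords[:-1]

x = [1, 0, 1, 1]      # an element of GF(2^4) in normal-basis coordinates
y = x
for _ in range(4):    # the Frobenius map has order m: x^(2^m) = x
    y = nb_square(y)
assert y == x
```

This is why normal bases pay off in exponentiation circuits: repeated squarings become wiring, leaving only the multiplications to compute.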
  • Xu BAI, Michitaka KAMEYAMA
    Type: PAPER
    Subject area: VLSI Architecture
    2014 Volume E97.D Issue 9 Pages 2278-2285
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A global tree local X-net network (GTLX) is introduced to realize high-performance data transfer in a multiple-valued fine-grain reconfigurable VLSI (MVFG-RVLSI). A global pipelined tree network realizes high-performance long-distance bit-parallel data transfer, and a logic-in-memory architecture resolves the data-transfer bottleneck between a block data memory and a cell. A local X-net network provides simple interconnections and compact switch blocks for eight-nearest-neighbor data transfer. Moreover, multiple-valued signaling improves the utilization of the X-net network: two binary data values can be transferred simultaneously from two adjacent cells to one common adjacent cell at each "X" intersection. To evaluate the MVFG-RVLSI, a fast Fourier transform (FFT) operation is mapped onto a previous MVFG-RVLSI using only the X-net network and onto the MVFG-RVLSI using the GTLX. As a result, the computation time, power consumption, and transistor count of the MVFG-RVLSI using the GTLX are reduced by 25%, 36%, and 56%, respectively, in comparison with the MVFG-RVLSI using only the X-net network.
    Download PDF (2026K)
  • Naoya ONIZAWA, Warren J. GROSS, Takahiro HANYU, Vincent C. GAUDET
    Type: PAPER
    Subject area: VLSI Architecture
    2014 Volume E97.D Issue 9 Pages 2286-2295
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Stochastic decoding provides ultra-low-complexity hardware for high-throughput parallel low-density parity-check (LDPC) decoders. Asynchronous stochastic decoding was proposed to demonstrate the possibility of low power dissipation and high throughput in stochastic decoders, but decoding might stop before convergence due to "lock-up", causing error floors that also occur in synchronous stochastic decoding. In this paper, we introduce a wire-delay-dependent (WDD) scheduling algorithm for asynchronous stochastic decoding in order to reduce these error floors. Instead of assigning the same delay to all computation nodes as in previous work, a different computation delay is assigned to each node depending on its wire length. The variation in update timing increases switching activity, decreasing the likelihood of "lock-up" and lowering the error floors. In addition, the WDD scheduling algorithm is simplified for hardware implementation by eliminating the time-averaging and multiplication functions used in the original algorithm. BER performance for a regular (1024, 512) (3,6) LDPC code is simulated based on a timing model with computation and wire delays estimated under ASPLA 90nm CMOS technology. The proposed asynchronous decoder achieves 6.4-9.8× smaller latency than the synchronous decoder with a 0.25-0.3dB coding gain.
    Download PDF (838K)
  • Yosuke IIJIMA, Yuuki TAKADA, Yasushi YUMINAKA
    Type: PAPER
    Subject area: Communication for VLSI
    2014 Volume E97.D Issue 9 Pages 2296-2303
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The data rate of VLSI interconnections has been increasing with the demand for high-speed operation of semiconductor devices such as CPUs. To realize high-performance VLSI systems, high-speed data communication has become an important factor. At high data rates, however, it is difficult to achieve accurate communication without bit errors because of inter-symbol interference (ISI). This paper presents high-speed data communication techniques for VLSI systems using Tomlinson-Harashima precoding (THP). Since THP can eliminate ISI while limiting the average and peak power of transmitter signaling, it is suitable for implementing advanced low-voltage VLSI systems. In this paper, 4-PAM (pulse amplitude modulation) with THP is employed to achieve high-speed data communication in VLSI systems. Simulation results show that THP can remove ISI without increasing the peak and average power of a transmitter, and clarify that multiple-valued data communication is very effective in reducing the implementation cost of high-speed serial links.
    Download PDF (4458K)
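THP can be sketched in a few lines: the transmitter subtracts the channel's post-cursor interference and wraps the result with a modulo operation so the transmit signal stays bounded; the receiver applies the same modulo to recover the symbols. The channel taps below are illustrative, with 4-PAM symbols and modulo interval [-4, 4):

```python
def mod2M(v, M):
    """Wrap v into the interval [-M, M)."""
    return ((v + M) % (2 * M)) - M

def thp_precode(data, h, M):
    """Transmit x_k = mod(d_k - sum_{i>=1} h[i]*x_{k-i}); the modulo keeps
    the transmit signal bounded regardless of the channel."""
    x = []
    for k, d in enumerate(data):
        fb = sum(h[i] * x[k - i] for i in range(1, len(h)) if k - i >= 0)
        x.append(mod2M(d - fb, M))
    return x

h = [1.0, 0.5, -0.25]                # monic channel impulse response (illustrative)
data = [-3, -1, 1, 3, 1, -1, -3, 3]  # 4-PAM symbols
x = thp_precode(data, h, M=4)

# channel output y_k = sum_i h[i]*x_{k-i}; the receiver only needs the same modulo
y = [sum(h[i] * x[k - i] for i in range(len(h)) if k - i >= 0) for k in range(len(x))]
recovered = [round(mod2M(yk, 4)) for yk in y]
assert recovered == data
```

Because y_k equals d_k plus a multiple of 2M, the receiver's modulo removes the interference exactly, with no feedback filter at the receiver.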
  • Akira MOCHIZUKI, Hirokatsu SHIRAHAMA, Yuma WATANABE, Takahiro HANYU
    Type: PAPER
    Subject area: Communication for VLSI
    2014 Volume E97.D Issue 9 Pages 2304-2311
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    An energy-efficient intra-chip communication link circuit with ternary current signaling is proposed for an asynchronous Network-on-Chip. The data signal, encoded by an asynchronous three-state protocol, is represented by a small-voltage-swing three-level intermediate signal, which reduces transition delay and achieves energy-efficient data transfer. The three-level voltage is generated by a combination of dynamically controlled current sources with a feedback-loop mechanism. Moreover, the proposed circuit contains a power-saving scheme in which the dynamically controlled transistors are also utilized: by cutting off the current paths when data transfer on the communication link is inactive, power dissipation can be greatly reduced. It is demonstrated that the average data-transfer speed is about 1.5 times faster than that of a binary CMOS implementation in a 130nm CMOS technology at a supply voltage of 1.2V.
    Download PDF (1066K)
  • Reza FAGHIH MIRZAEE, Keivan NAVI
    Type: PAPER
    Subject area: Circuit Implementations
    2014 Volume E97.D Issue 9 Pages 2312-2319
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The unique characteristics of ternary ripple-carry addition enable us to optimize the ternary full adder for this specific application. In this paper, carbon nanotube field-effect transistors are used to design new ternary half and full adders, which are essential components of a ternary ripple-carry adder. The novel designs take the arithmetic sum of the input variables as a single input signal and generate the outputs far more efficiently than previously presented structures. The new ripple-carry adder operates rapidly, with high performance and a low transistor count.
    Download PDF (1610K)
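The idea of taking "the sum of the input variables as a single input signal" reduces, at the functional level, to splitting the arithmetic sum into a mod-3 digit and a carry. A behavioral sketch, not a transistor-level design:

```python
def ternary_full_adder(a, b, c_in):
    """a, b are ternary digits (0..2); c_in is 0 or 1. The combined sum
    (0..5) is split into a mod-3 sum digit and a binary carry."""
    s = a + b + c_in
    return s % 3, s // 3  # (sum digit, carry out)

def ripple_add(x, y):
    """Add two little-endian ternary numbers with a ripple carry chain."""
    carry, digits = 0, []
    for a, b in zip(x, y):
        d, carry = ternary_full_adder(a, b, carry)
        digits.append(d)
    return digits, carry

# 5 (= 12 in ternary) + 7 (= 21 in ternary) = 12 (= 110 in ternary)
digits, carry = ripple_add([2, 1], [1, 2])   # digits little-endian
assert digits == [0, 1] and carry == 1
```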
  • Katherine Shu-Min LI, Yingchieh HO, Yu-Wei YANG, Liang-Bi CHEN
    Type: PAPER
    Subject area: Circuit Implementations
    2014 Volume E97.D Issue 9 Pages 2320-2329
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Excessively high temperature in a chip may cause circuit malfunction and performance degradation, and should therefore be avoided to improve system reliability. In this paper, a novel oscillation-based on-chip thermal sensing architecture for dynamically adjusting supply voltage and clock frequency in a System-on-a-Chip (SoC) is proposed. It is shown that the oscillation frequency of a ring oscillator decreases linearly as the temperature rises, which provides a good on-chip temperature sensing mechanism. An efficient Dynamic Voltage-to-Frequency Scaling (DF2VS) algorithm is proposed to dynamically adjust the supply voltage according to the oscillation frequencies of ring oscillators distributed across the SoC, so that thermal sensing can be carried out at all potential hot spots. An on-chip Dynamic Voltage Scaling or Dynamic Voltage and Frequency Scaling (DVS or DVFS) monitor selects the supply voltage level and clock frequency according to the outputs of all thermal sensors. Experimental results on SoC benchmark circuits show the effectiveness of the algorithm: a 10% reduction in supply voltage alone achieves about 20% power reduction (DVS scheme), and nearly 50% power reduction is achievable if the clock frequency is also scaled down (DVFS scheme). The chip temperature will be significantly lower due to the reduced power consumption.
    Download PDF (1741K)
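Since oscillation frequency falls as temperature rises, the slowest ring oscillator marks the hottest spot, and the monitor logic reduces to a lookup from that minimum frequency to a (voltage, clock) operating point. A sketch with hypothetical thresholds and levels:

```python
def select_operating_point(osc_freqs_mhz, levels):
    """Pick the highest (voltage, clock) level whose minimum-frequency
    threshold is still met by the slowest ring oscillator (hottest spot).
    levels: (min ring-osc freq MHz, supply V, clock MHz), fastest first."""
    worst = min(osc_freqs_mhz)
    for min_freq, vdd, clk in levels:
        if worst >= min_freq:
            return vdd, clk
    return levels[-1][1:]  # fall back to the safest level

levels = [(950, 1.2, 400), (900, 1.1, 300), (0, 1.0, 200)]  # illustrative
assert select_operating_point([980, 960, 955], levels) == (1.2, 400)  # chip is cool
assert select_operating_point([980, 960, 905], levels) == (1.1, 300)  # one hot spot
```

Taking the minimum over all distributed sensors is what lets a single hot spot anywhere on the die trigger the scale-down.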
  • Mingmin YAN, Hiroki TAMURA, Koichi TANNO
    Type: PAPER
    Subject area: Circuit Implementations
    2014 Volume E97.D Issue 9 Pages 2330-2337
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The aim of this study is to show that electrooculogram signals can be used efficiently for a human-computer interface. Establishing an efficient alternative communication channel that requires neither overt speech nor hand movements is important for improving the quality of life of patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent normal limb and facial muscular responses. In this paper, we introduce a gaze-estimation system based on electrooculogram signals. With this system, electrooculogram signals are recorded while the patient gazes in each direction. The recorded signals are then analyzed mathematically to build a model for each gaze direction, and gaze can subsequently be estimated from electrooculogram signals using these models.
    Download PDF (2710K)
Regular Section
  • Cholwich NATTEE
    Type: SURVEY PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 9 Pages 2338-2345
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Smartphones have become vital devices in the current on-the-go Thai culture. Typically, virtual keyboards serve as tools for text input on smartphones. Due to the limited screen area and the large number of Thai characters, the size of each button on the keyboard is quite small. This leads to character mistyping and low typing speed. In this paper, we present a typical framework for a Thai input method on smartphones comprising four processes: Character Candidate Generation, Word Candidate Generation, Word Candidate Display, and Model Update. The framework works not only for Thai but also for other letter-based languages. We also review virtual keyboards and techniques currently used and available for Thai text input.
    Download PDF (1916K)
  • Ji-Eun ROH, Chang-Soo AHN, Seon-Joo KIM
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 9 Pages 2346-2355
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Radar resource management for multifunction radar has recently become a challenging issue in electronically scanned array radar technology. This paper deals with radar beam scheduling, a core issue of radar resource management. We propose a stochastic scheduling algorithm based on Simulated Annealing (SA) and a hybrid scheduling algorithm that automatically selects between two different schedulers according to the radar load: a rule-based scheduler using a modified Butler algorithm for underload situations and the SA-based scheduler for overload situations. The proposed algorithms are evaluated in terms of scheduling latency, the number of scheduled tasks, and time complexity. The simulation results show that the performance of the rule-based scheduler degrades severely in overload situations, whereas the SA-based and hybrid schedulers degrade gracefully. Compared with the rule-based scheduler, the SA-based and hybrid schedulers can schedule many more tasks on time for the same operation duration in overload situations. Even though their time complexity is relatively high, they can be applied in real applications if the parameters are properly controlled. In particular, the hybrid scheduler has the advantage of low time complexity with good performance.
    Download PDF (2098K)
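A toy version of an SA-based scheduler: minimize the total lateness of beam tasks by randomly swapping their order and accepting worse schedules with a temperature-dependent probability. The task data, cost function, and annealing parameters are illustrative, not the paper's:

```python
import math
import random

def total_lateness(order, tasks):
    """tasks[i] = (duration, deadline); cost = summed positive lateness."""
    t, cost = 0.0, 0.0
    for i in order:
        duration, deadline = tasks[i]
        t += duration
        cost += max(0.0, t - deadline)
    return cost

def sa_schedule(tasks, iters=5000, temp=10.0, alpha=0.999, seed=0):
    """Simulated annealing over task orders: swap two tasks, accept uphill
    moves with probability exp(-delta/temp), and cool geometrically."""
    rng = random.Random(seed)
    order = list(range(len(tasks)))
    cost = total_lateness(order, tasks)
    best, best_cost = order[:], cost
    for _ in range(iters):
        i, j = rng.sample(range(len(tasks)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        c = total_lateness(cand, tasks)
        if c <= cost or rng.random() < math.exp(-(c - cost) / temp):
            order, cost = cand, c
            if cost < best_cost:
                best, best_cost = order[:], cost
        temp *= alpha
    return best, best_cost

tasks = [(2, 3), (1, 2), (3, 9), (2, 7), (1, 4)]  # (duration, deadline), illustrative
order, cost = sa_schedule(tasks)
assert cost <= total_lateness([0, 1, 2, 3, 4], tasks)  # never worse than FIFO
```

Tracking `best` separately means an occasional accepted uphill move can never make the returned schedule worse than the starting one, which is what gives the graceful degradation under overload.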
  • Yongxin ZHAO, Yanhong HUANG, Qin LI, Huibiao ZHU, Jifeng HE, Jianwen L ...
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 9 Pages 2356-2370
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Survivability is an essential requirement of networked information systems, analogous to dependability. The definition of survivability proposed by Knight in [16] provides a rigorous way to define the concept. However, Knight's specification provides neither a behavioral model of the system nor a verification framework for determining whether a system satisfies a given survivability specification. This paper proposes a complete formal framework for specifying and verifying system survivability on the basis of Knight's research. A computable probabilistic model is proposed to specify the functions and services of a networked information system, a quantified survivability specification is proposed to express the survivability requirement, and a probabilistic refinement relation is defined to determine the survivability of the system. The framework is then demonstrated with three case studies: a restaurant system (RES), a Warship Command and Control system (LWC), and a Command-and-Control (C2) system.
    Download PDF (1200K)
  • Yukinori SATO, Yasushi INOGUCHI, Tadao NAKAMURA
    Type: PAPER
    Subject area: Computer System
    2014 Volume E97.D Issue 9 Pages 2371-2385
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper presents a mechanism for detecting dynamic loop and procedure nesting during the actual program execution on-the-fly. This mechanism aims primarily at making better strategies for performance tuning or parallelization. Using a pre-compiled application executable machine code as an input, our mechanism statically generates simple but precise markers that indicate loop entries and loop exits, and dynamically monitors loop nesting that appears during the actual execution together with call context tree. To keep precise loop structures all the time, we monitor the indirect jumps that enter the loop regions and the setjmp/longjmp functions that cause irregular function call transfers. We also present a novel representation called Loop-Call Context Graph that can keep track of inter-procedural loop nests. We implement our mechanism and evaluate it using SPEC CPU2006 benchmark suite. The results confirm that our mechanism can successfully reveal the precise inter-procedural loop nest structures from all of SPEC CPU2006 benchmark executions without any particular compiler support. The results also show that it can reduce runtime loop detection overheads compared with the existing loop profiling method.
    Download PDF (1963K)
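The marker-driven monitoring can be sketched as a stack replay of loop-entry/exit and call/return events, which is enough to recover inter-procedural loop nesting. The event encoding below is hypothetical, not the paper's instrumentation format:

```python
def track_nesting(events):
    """Replay loop/call markers and record the deepest nesting context seen.
    events: ("call", f) / ("ret",) / ("loop_enter", id) / ("loop_exit", id)."""
    stack, deepest = [], []
    for ev in events:
        kind = ev[0]
        if kind in ("call", "loop_enter"):
            stack.append(ev[1])
            if len(stack) > len(deepest):
                deepest = stack[:]
        else:  # "ret" or "loop_exit" pops its matching entry
            stack.pop()
    return deepest

# a loop in main() that calls f(), which itself contains a loop
trace = [("call", "main"), ("loop_enter", "L1"), ("call", "f"),
         ("loop_enter", "L2"), ("loop_exit", "L2"), ("ret",),
         ("loop_exit", "L1"), ("ret",)]
assert track_nesting(trace) == ["main", "L1", "f", "L2"]
```

The interleaving of loop ids and function names on one stack is the essence of a loop-call context: L2 nests inside L1 only through the call to f. The paper's mechanism additionally handles indirect jumps and setjmp/longjmp, which this sketch omits.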
  • Tao WANG, Huaimin WANG, Gang YIN, Cheng YANG, Xiang LI, Peng ZOU
    Type: PAPER
    Subject area: Software Engineering
    2014 Volume E97.D Issue 9 Pages 2386-2397
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The large amount of open source software freely available over the Internet is fundamentally changing the traditional paradigms of software development. Efficient categorization of these massive projects for retrieving relevant software is of vital importance for Internet-based software development activities such as solution searching and best-practice learning. Much previous work on software categorization mines source code or byte code, but has been verified only on relatively small collections of projects with coarse-grained categories or clusters. Internet-based software development, however, requires finer-grained, more scalable, and language-independent categorization approaches. In this paper, we propose a novel approach that hierarchically categorizes software projects based on their online profiles. We design an SVM-based categorization framework and adopt a weighted combination strategy to aggregate different types of profile attributes from multiple repositories. Different basic classification algorithms and feature selection techniques are employed and compared. Extensive experiments are carried out on more than 21,000 projects across five repositories. The results show that our approach achieves significant improvements by using weighted combination. Compared to previous work, our approach produces competitive results with a finer-grained, multi-layered category hierarchy of more than 120 categories. Unlike approaches that use source code or byte code, our approach is more effective for large-scale and language-independent software categorization. In addition, the experiments suggest that hierarchical categorization combined with general keyword-based searching improves retrieval efficiency and accuracy.
    Download PDF (742K)
  • In-Joong KIM, Kyu-Young WHANG, Hyuk-Yoon KWON
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2014 Volume E97.D Issue 9 Pages 2398-2414
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A top-k keyword query in relational databases returns k trees of tuples, in which the tuples containing the query keywords are connected via primary key-foreign key relationships, ranked in order of relevance to the query. Existing works are classified into two categories: 1) the schema-based approach and 2) the schema-free approach. We focus on the former, which utilizes database schema information for more effective ranking of the query results. Ranking measures used in existing works fall into two categories: 1) the size of the tree (i.e., the syntactic score) and 2) measures, such as TF-IDF, borrowed from the information retrieval field. However, these measures do not take into account the semantic relevancy among the relations containing the tuples in the query results. In this paper, we propose a new ranking method that ranks the query results by utilizing semantic relevancy among relations at the schema level. First, we propose a structure of semantically strongly related relations, which we call the strongly related tree (SRT). An SRT is a tree that maximally connects relations based on the lossless join property. Next, we propose a new ranking method, SRT-Rank, which ranks the query results with a new scoring function that augments existing ones with the concept of the SRT. SRT-Rank is the first research effort to apply semantic relevancy among relations to ranking the results of keyword queries. To show the effectiveness of SRT-Rank, we perform experiments on synthetic and real datasets by augmenting representative existing methods with SRT-Rank. Experimental results show that, compared with existing methods, SRT-Rank improves performance in terms of four quality measures, namely the mean normalized discounted cumulative gain (nDCG), the number of queries whose top-1 result is relevant to the query, the mean reciprocal rank, and the mean average precision, by up to 46.9%, 160.0%, 61.7%, and 63.8%, respectively. In addition, we show that the query performance of SRT-Rank is comparable to or better than that of existing methods.
    Download PDF (3423K)
  • Shigeo MATSUBARA, Meile WANG
    Type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 9 Pages 2415-2422
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    We propose a method for finding an appropriate setting of a pay-per-performance payment system to prevent participation of insincere workers in crowdsourcing. Crowdsourcing enables fast and low-cost accomplishment of tasks; however, insincere workers prevent the task requester from obtaining high-quality results. Instead of a fixed payment system, the pay-per-performance payment system is promising for excluding insincere workers. However, it is difficult to learn what settings are better, and a naive payment setting may cause unsatisfactory outcomes. To overcome these drawbacks, we propose a method for calculating the expected payments for sincere and insincere workers, and then clarifying the conditions in the payment setting in which sincere workers are willing to choose a task, while insincere workers are not willing to choose the task. We evaluated the proposed method by conducting several experiments on tweet labeling tasks in Amazon Mechanical Turk. The results suggest that the pay-per-performance system is useful for preventing participation of insincere workers.
    Download PDF (301K)
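The core condition, that a payment setting yields a positive expected payoff for sincere workers and a negative one for insincere workers, can be sketched directly. All numbers below are illustrative, not the paper's parameters:

```python
def expected_payoff(p_correct, pay_per_correct, effort_cost, n_tasks):
    """Expected net payoff under pay-per-performance: payment only for
    correct answers, minus the per-task effort cost of participating."""
    return n_tasks * (p_correct * pay_per_correct - effort_cost)

# illustrative: sincere workers answer correctly 90% of the time at higher
# effort; insincere workers answer randomly (25% on 4-way labels) at lower effort
sincere = expected_payoff(0.90, pay_per_correct=0.10, effort_cost=0.05, n_tasks=20)
insincere = expected_payoff(0.25, pay_per_correct=0.10, effort_cost=0.03, n_tasks=20)
assert sincere > 0 > insincere  # this payment setting filters out insincere workers
```

Choosing `pay_per_correct` so that the two expected payoffs straddle zero is the self-selection condition the method computes: sincere workers are willing to take the task and insincere workers are not.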
  • Chunsheng HUA, Juntong QI, Jianda HAN, Haiyuan WU
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 9 Pages 2423-2433
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we introduce a novel Kernel-Reliability-based K-Means (KRKM) clustering algorithm for categorizing an unknown dataset under noisy conditions. Compared with conventional clustering algorithms, the proposed KRKM algorithm measures both the reliability and the similarity of classifying data into neighboring clusters by means of dynamic kernel functions, and noisy data are rejected by being assigned low reliability. The reliability is measured by a dynamic kernel function whose window size is determined by the triangular relationship between a data item and its two nearest clusters. The similarity from a data item to its neighboring clusters is measured by another adaptive kernel function that takes into account not only the similarity from the data item to the clusters but also the similarity between its two nearest clusters. The main contribution of this work lies in introducing dynamic kernel functions to evaluate both reliability and similarity for clustering, which makes the proposed algorithm more effective in dealing with strongly noisy data. Various experiments confirm the efficiency and effectiveness of the proposed algorithm.
    Download PDF (2541K)
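The reliability idea can be sketched in one dimension: a kernel centered on the nearest cluster, with a window set by the gap between the two nearest centers, so a point midway between clusters gets low reliability and is rejected as noise. This is a deliberate simplification of the authors' formulation, for intuition only:

```python
import math

def reliability(x, centers):
    """Kernel reliability of assigning x: a Gaussian kernel around the
    nearest center, with the window set by how much closer the nearest
    center is than the second nearest (a wide gap => confident window)."""
    d = sorted(abs(x - c) for c in centers)
    d1, d2 = d[0], d[1]
    sigma = max(d2 - d1, 1e-9)
    return math.exp(-(d1 / sigma) ** 2)

centers = [0.0, 10.0]
assert reliability(0.5, centers) > 0.9   # close to one cluster: high reliability
assert reliability(5.0, centers) < 0.5   # midway "noise" point: low reliability
```

A KRKM-style loop would then update only the points whose reliability clears a threshold, which is what keeps strong noise from dragging the centroids.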
  • Ruicong ZHI, Lei ZHAO, Bolin SHI, Yi JIN
    Type: PAPER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 9 Pages 2434-2442
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A novel Two-dimensional Fuzzy Discriminant Locality Preserving Projections (2D-FDLPP) algorithm is proposed for learning an effective subspace of two-dimensional images. The 2D-FDLPP algorithm is derived from Two-dimensional Locality Preserving Projections (2D-LPP) by exploiting both fuzzy and discriminant properties. It preserves the degree to which each sample belongs to the given classes, computed with a fuzzy k-nearest-neighbor classifier, and introduces a between-class scatter constraint and label information into the 2D-LPP algorithm. 2D-FDLPP finds the subspace that best discriminates different pattern classes and weakens environmental factors through a soft assignment method. Therefore, the 2D-FDLPP algorithm has more discriminant power than 2D-LPP and is more suitable for recognition tasks. Experiments are conducted on the MNIST database for handwritten-image classification, the JAFFE and Cohn-Kanade databases for facial expression recognition, and the ORL database for face recognition. Experimental results demonstrate the effectiveness of the proposed algorithm.
    Download PDF (2370K)
  • Federico ANG, Rowena Cristina GUEVARA, Yoshikazu MIYANAGA, Rhandley CA ...
    Type: PAPER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 9 Pages 2443-2452
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this paper, a new database suitable for HMM-based automatic Filipino speech recognition is described, built for the purpose of training a domain-independent, large-vocabulary continuous speech recognition system. Although it is known that high-performance speech recognition systems depend on a superior speech database in the training stage, previous reports on Filipino speech recognition had to contend with serious data sparsity issues due to the lack of such a database. In this paper we alleviate this sparsity through appropriate data analysis, which makes the evaluation results more reliable. The best system is identified by its low word error rate on a cross-validation set containing almost three hours of unknown speech data. Language-dependent problems are discussed, and their impact on accuracy is analyzed. The approach is currently data-driven; however, it serves as a competent baseline model for future developments.
    Download PDF (646K)
  • Hang ZHANG, Yong DING, Peng Wei WU, Xue Tong BAI, Kai HUANG
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2453-2460
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Visual quality evaluation is crucially important for various video and image processing systems. Traditionally, subjective image quality assessment (IQA), based on the judgments of human observers, can be perfectly consistent with the human visual system (HVS). However, subjective IQA is cumbersome and easily affected by the experimental environment, which further limits its application to evaluating massive numbers of pictures. Therefore, objective IQA metrics are desired that can be incorporated into machines to evaluate image quality automatically. Effective objective IQA methods should predict quality in accordance with subjective evaluation. Motivated by the observation that the HVS is highly adapted to extracting irregularity information from textures in a scene, we introduce multifractal formalism into an image quality assessment scheme in this paper. Based on multifractal analysis, statistical complexity features of natural images are extracted robustly. A novel framework for image quality assessment is then proposed that quantifies the discrepancies between the multifractal spectra of images. A total of 982 images are used to validate the proposed algorithm, covering five types of distortion: JPEG2000 compression, JPEG compression, white noise, Gaussian blur, and fast fading. Experimental results demonstrate that the proposed metric is highly effective for evaluating perceived image quality and outperforms many state-of-the-art methods.
    Download PDF (2208K)
  • Guanwen ZHANG, Jien KATO, Yu WANG, Kenji MASE
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2461-2472
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a novel approach for multiple-shot people re-identification. Due to high variance in camera view, illumination, non-rigid deformation of posture and so on, there exists a crucial inter-/intra-variance issue: the same person may look considerably different, whereas different people may look extremely similar. This issue leads to an intractable, multimodal distribution of people's appearance in feature space. To deal with this multimodality, we solve the re-identification problem under a local distance comparison framework, which significantly alleviates the difficulty induced by the varying appearance of each individual. Furthermore, we build an energy-based loss function to measure the similarity between appearance instances by calculating the distance between corresponding subsets in feature space. This loss function not only favors small distances that indicate high similarity between appearances of the same person, but also penalizes small distances or undesirable overlaps between subsets that reflect high similarity between appearances of different people. In this way, effective people re-identification is achieved in a manner robust to the inter-/intra-variance issue. The performance of our approach has been evaluated on the public benchmark datasets ETHZ and CAVIAR4REID. Experimental results show significant improvements over previous reports.
    Download PDF (2106K)
  • Dan XU, Wei XU, Zhenmin TANG, Fan LIU
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 9 Pages 2473-2482
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a novel method for road sign detection and recognition in real-world images of complex scenes. Our algorithm consists of four basic steps. First, we employ a regional-contrast-based bottom-up visual saliency method to highlight the traffic sign regions, which usually have dominant color contrast against the background. Second, since each type of traffic sign has a distinctive color distribution, top-down visual saliency is exploited to enhance detection precision and to classify traffic signs into different categories; a bag-of-words (BoW) model and a color name descriptor are employed to compute the class-specific color distribution. Third, candidate road sign blobs are extracted from the final saliency map, which is generated by combining the bottom-up and top-down saliency maps. Finally, color and shape cues are fused in the BoW model to describe blobs, and a support vector machine is employed to recognize road signs. Experiments on real-world images show a high success rate and a low false hit rate, and demonstrate that the proposed framework is applicable to prohibition, warning and obligation signs. Additionally, our method can be applied to achromatic signs without extra processing.
    Download PDF (2748K)
  • Yoichi TOMIOKA, Hikaru MURAKAMI, Hitoshi KITAZAWA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 9 Pages 2483-2492
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Recently, video surveillance systems have been widely introduced in various places, and protecting the privacy of objects in the scene has become as important as ensuring security. Masking each moving object with a background subtraction method is an effective technique to protect its privacy. However, background subtraction is heavily affected by changes in sunlight, and redundant masking caused by over-extraction is inevitable. Such superfluous masking degrades the quality of video surveillance. In this paper, we propose a moving object masking method combining background subtraction and machine learning based on Real AdaBoost. This method reduces superfluous masking while maintaining the reliability of privacy protection. In our experiments, we demonstrate that the proposed method achieves about 78-94% accuracy in classifying superfluous masking regions and moving objects.
    Download PDF (3221K)
  • Bunpei TOJI, Jun OHMIYA, Satoshi KONDO, Kiyoko ISHIKAWA, Masahiro YAMA ...
    Type: PAPER
    Subject area: Biological Engineering
    2014 Volume E97.D Issue 9 Pages 2493-2500
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a fully automatic method for extracting carotid artery contours from ultrasound images based on an active contour approach. Several contour extraction techniques have been proposed to measure carotid artery walls for early detection of atherosclerotic disease. However, the majority of these techniques require a certain degree of user interaction that demands time and effort. Our proposal automatically detects the position of the carotid artery by identifying blood flow information related to the carotid artery, and an active contour model is employed that uses initial contours placed in the detected position. Our method also applies a global energy minimization scheme to the active contour model. Experiments on clinical cases show that the proposed method automatically extracts the carotid artery contours at an accuracy close to that achieved by manual extraction.
    Download PDF (2350K)
  • Xin XU, Tsuneo KATO
    Type: PAPER
    Subject area: Music Information Processing
    2014 Volume E97.D Issue 9 Pages 2501-2509
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper proposes a robust and fast lyric search method for music information retrieval (MIR). The effectiveness of lyric search systems based on full-text retrieval engines or web search engines is highly compromised when the queries of lyric phrases contain incorrect parts due to mishearing. To improve the robustness of the system, the authors introduce acoustic distance, which is computed based on a confusion matrix of an automatic speech recognition experiment, into Dynamic-Programming (DP)-based phonetic string matching to identify the songs that the misheard lyric phrases refer to. An evaluation experiment verified that the search accuracy is increased by 4.4% compared with the conventional method. Furthermore, in this paper a two-pass search algorithm is proposed to realize real-time execution. The algorithm pre-selects the probable candidates using a rapid index-based search in the first pass and executes a DP-based search process with an adaptive termination strategy in the second pass. Experimental results show that the proposed search method reduced processing time by more than 86.2% compared with the conventional methods for the same search accuracy.
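    The confusion-weighted DP matching at the core of the method is essentially an edit distance whose substitution cost comes from acoustic confusability. A minimal sketch follows, with a toy cost table standing in for the paper's ASR-derived confusion matrix:

```python
def phonetic_distance(hyp, ref, sub_cost):
    """Weighted edit distance between two phoneme strings.

    sub_cost[(a, b)] is an acoustic confusion cost in [0, 1]; pairs absent
    from the table fall back to 1.0.  Insertions and deletions cost 1.
    """
    n, m = len(hyp), len(ref)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)
    for j in range(1, m + 1):
        D[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if hyp[i - 1] == ref[j - 1] \
                else sub_cost.get((hyp[i - 1], ref[j - 1]), 1.0)
            D[i][j] = min(D[i - 1][j] + 1.0,      # deletion
                          D[i][j - 1] + 1.0,      # insertion
                          D[i - 1][j - 1] + sub)  # (mis)match
    return D[n][m]
```

    A misheard phrase whose substituted phonemes are acoustically close to the true ones then scores a much smaller distance than a random mismatch, which is what lets the misheard query still retrieve the right song.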
    Download PDF (1001K)
  • Dong Hyun KANG, Changwoo MIN, Young Ik EOM
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2014 Volume E97.D Issue 9 Pages 2510-2513
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    NAND flash storage devices, such as eMMCs and microSD cards, are now widely used in mobile devices. In this paper, we propose a novel buffer replacement scheme for mobile NAND flash storage. It efficiently improves write performance by evicting pages in a flash-friendly manner and maintains high cache hit ratios by managing pages in order of recency. Our experimental results show that the proposed scheme outperforms the best performing scheme in the recent literature, Sp.Clock, by 48%.
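    One flash-friendly eviction policy consistent with the abstract's description — though not necessarily the authors' exact scheme — keeps pages in recency order but evicts clean pages before dirty ones, since evicting a dirty page triggers an expensive NAND write-back:

```python
from collections import OrderedDict

class CleanFirstLRU:
    """Sketch of a flash-friendly buffer cache (illustrative, not the paper's
    scheme): recency-ordered pages, clean pages evicted before dirty ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()        # page_id -> dirty flag, LRU first

    def access(self, page_id, write=False):
        dirty = self.pages.pop(page_id, False) or write
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = dirty       # reinsert at MRU position

    def _evict(self):
        # first clean page from the LRU end; if all are dirty, plain LRU
        victim = next((p for p, d in self.pages.items() if not d),
                      next(iter(self.pages)))
        del self.pages[victim]
```

    With capacity 2, reading pages 1 (dirtied) and 2 and then touching page 3 evicts the clean page 2 and keeps the dirty page 1 in the cache.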
    Download PDF (477K)
  • Kung-Jui PAI, Jinn-Shyong YANG, Sing-Chen YAO, Shyue-Ming TANG, Jou-Mi ...
    Type: LETTER
    Subject area: Information Network
    2014 Volume E97.D Issue 9 Pages 2514-2517
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Let T1,T2,...,Tk be spanning trees in a graph G. If, for any two vertices u,v of G, the paths joining u and v on the k trees are mutually vertex-disjoint, then T1,T2,...,Tk are called completely independent spanning trees (CISTs for short) of G. The construction of CISTs can be applied in fault-tolerant broadcasting and secure message distribution on interconnection networks. Hasunuma (2001) first introduced the concept of CISTs and conjectured that there are k CISTs in any 2k-connected graph. Unfortunately, this conjecture was disproved by Péterfalvi recently. In this note, we give a necessary condition for k-connected k-regular graphs with ⌊k/2⌋ CISTs. Based on this condition, we provide more counterexamples for Hasunuma's conjecture. By contrast, we show that there are two CISTs in 4-regular chordal rings CR(N,d) with N=k(d-1)+j under the condition that k ≥ 4 is even and 0 ≤ j ≤ 4. In particular, the diameter of each constructed CIST is derived.
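    The CIST condition itself is easy to check on small examples. The sketch below verifies, for one vertex pair, that the connecting paths in two given spanning trees (encoded as parent maps) share no internal vertex; it is a checker for illustration, not a construction algorithm:

```python
def tree_path(parent, u, v):
    """Path from u to v in a tree given as a child -> parent map (root -> None)."""
    anc, x = [], u
    while x is not None:                # ancestors of u up to the root
        anc.append(x)
        x = parent[x]
    idx = {node: i for i, node in enumerate(anc)}
    tail, x = [], v
    while x not in idx:                 # climb from v to the lowest common ancestor
        tail.append(x)
        x = parent[x]
    return anc[:idx[x] + 1] + tail[::-1]

def are_cist_pair(p1, p2, u, v):
    """True iff the u-v paths in the two trees share no internal vertex."""
    inner1 = set(tree_path(p1, u, v)[1:-1])
    inner2 = set(tree_path(p2, u, v)[1:-1])
    return not (inner1 & inner2)
```

    Running this over all vertex pairs of two spanning trees checks the full CIST property on a small graph.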
    Download PDF (120K)
  • Jongwon SEOK, Keunsung BAE
    Type: LETTER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 9 Pages 2518-2521
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This letter describes the classification of targets from synthesized active sonar returns. A fractional Fourier transform is applied to the sonar returns to extract shape variations in the fractional Fourier domain that depend on the highlight points and aspects of the target. With the proposed features, four different targets are classified using two neural network classifiers.
    Download PDF (647K)
  • Junyang QIU, Yibing WANG, Zhisong PAN, Bo JIA
    Type: LETTER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 9 Pages 2522-2525
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Independent and identically distributed (i.i.d.) assumptions are commonly made in the machine learning community. However, social media data violate this assumption because of the linkage between samples. Meanwhile, given the variety of such data, there exist many samples, called the Universum, that belong to neither class of interest. These characteristics pose great challenges to dealing with social media data. In this letter, we take full advantage of Universum samples to make the model more discriminative, and the linkages are also taken into consideration by means of social dimensions. To this end, we propose the algorithm Semi-Supervised Linked samples Feature Selection with Universum (U-SSLFS), which integrates linking information and the Universum simultaneously to select robust features. An empirical study shows that U-SSLFS outperforms state-of-the-art algorithms on the Flickr and BlogCatalog datasets.
    Download PDF (204K)
  • Jae-woong JEONG, Young-cheol PARK, Dae-hee YOUN
    Type: LETTER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 9 Pages 2526-2529
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper presents an approximated virtual source imaging system based on crosstalk cancellation with a pair of closely spaced loudspeakers. Utilizing the frequency-dependent relative importance of sound localization cues, the proposed system provides separate approximations for the low- and high-frequency bands. Experimental results show that the system provides good approximations within ±55° in the stereo dipole setup with natural sound quality.
    Download PDF (310K)
  • Peng SONG, Yun JIN, Li ZHAO, Minghai XIN
    Type: LETTER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 9 Pages 2530-2532
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    A major challenge for speech emotion recognition is that when training and deployment do not use the same speech corpus, recognition rates drop markedly. Transfer learning, which has successfully addressed cross-domain classification and recognition problems, is presented here for cross-corpus speech emotion recognition. First, using maximum mean discrepancy embedding (MMDE) optimization and dimension-reduction algorithms, two close low-dimensional feature spaces are obtained for the source and target speech corpora, respectively. Then, a classifier is trained on the learned low-dimensional features of the labeled source corpus and applied directly to the unlabeled target corpus for emotion label recognition. Experimental results demonstrate that the transfer learning method significantly outperforms the traditional automatic recognition technique for cross-corpus speech emotion recognition.
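    The maximum mean discrepancy that the MMDE step minimizes can be illustrated directly. The sketch below computes a biased empirical MMD between two 1-D sample sets with an RBF kernel; the paper works with learned embeddings of speech features, and the kernel width here is an arbitrary choice:

```python
import math

def mmd_rbf(X, Y, gamma=0.5):
    """Biased empirical squared MMD between two 1-D sample sets, RBF kernel."""
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    mean = lambda A, B: sum(k(a, b) for a in A for b in B) / (len(A) * len(B))
    return mean(X, X) + mean(Y, Y) - 2.0 * mean(X, Y)
```

    Identical sample sets give an MMD of zero, while shifted ones give a large value, which is why minimizing MMD pulls source and target feature distributions together.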
    Download PDF (280K)
  • Jinsoo PARK, Wooil KIM, David K. HAN, Hanseok KO
    Type: LETTER
    Subject area: Speech and Hearing
    2014 Volume E97.D Issue 9 Pages 2533-2536
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    We propose a new algorithm to suppress both stationary background noise and nonstationary directional interference noise in a speech enhancement system that employs the generalized sidelobe canceller. Our approach builds on advances in generalized sidelobe canceller design involving the transfer function ratio. Our system is composed of three stages. The first stage estimates the transfer function ratio on the acoustic path, from the nonstationary directional interference noise source to the microphones, and the powers of the stationary background noise components. Secondly, the estimated powers of the stationary background noise components are used to execute spectral subtraction with respect to input signals. Finally, the estimated transfer function ratio is used for speech enhancement on the primary channel, and an adaptive filter reduces the residual correlated noise components of the signal. These algorithmic improvements give consistently better performance than the transfer function generalized sidelobe canceller when input signal-to-noise ratio is 10 dB or lower.
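    The second stage, spectral subtraction, can be sketched for a single frame with a naive DFT; the flooring constant and the magnitude-only subtraction are simplifying assumptions, not details from the paper:

```python
import cmath

def spectral_subtract(frame, noise_mag, floor=0.01):
    """Magnitude spectral subtraction on one frame, via a naive O(N^2) DFT.

    noise_mag[k]: estimated stationary-noise magnitude in bin k.  Subtracted
    magnitudes are floored at a small fraction of the original magnitude to
    avoid negative values; the noisy phase is reused as-is.
    """
    N = len(frame)
    spec = [sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]
    cleaned = [max(abs(X) - noise_mag[k], floor * abs(X)) *
               cmath.exp(1j * cmath.phase(X)) for k, X in enumerate(spec)]
    return [sum(cleaned[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]
```

    With a zero noise estimate the frame passes through unchanged, which is a convenient sanity check on the transform pair.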
    Download PDF (453K)
  • Yangbin LIM, Si-Woong LEE, Haechul CHOI
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2537-2540
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Screen content generally consists of text, images, and videos generated or captured by computers and other electronic devices. For coding such screen content, we introduce alternative intra prediction (AIP) modes based on the emerging high efficiency video coding (HEVC) standard. In text and graphics, edges are much sharper and many corners exist; these properties make it difficult to predict blocks using a one-directional intra prediction mode. The proposed method provides two-directional prediction by combining the existing vertical and horizontal prediction modes. Experiments show that our AIP modes provide an average BD-rate reduction of 2.8% relative to HEVC for typical screen content, and a 0.04% reduction for natural content.
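    A minimal sketch of the two-directional combination: each predicted pixel averages the HEVC-style vertical predictor (the reference pixel above) and horizontal predictor (the reference pixel to the left). The exact blend used by the proposed AIP modes may well differ:

```python
def aip_predict(top, left):
    """Two-directional intra prediction sketch for a len(left) x len(top)
    block: the mean of the vertical and horizontal one-directional
    predictors (an equal-weight blend chosen for illustration)."""
    return [[(top[x] + left[y]) / 2 for x in range(len(top))]
            for y in range(len(left))]
```

    For a block with top reference row [10, 20] and left reference column [30, 40], each pixel is the average of its column's top value and its row's left value.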
    Download PDF (745K)
  • Chae Eun RHEE
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2541-2544
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The emerging high-efficiency video coding (HEVC) standard attempts to improve coding efficiency by a factor of two over H.264/AVC through new compression tools such as various block sizes with multiple prediction directions. Although multi-directional prediction is among the features contributing to the improved compression efficiency, its high computational complexity keeps it from being used widely. This paper presents an algorithm that skips backward and bi-directional predictions when the merge or forward prediction mode is likely to be determined as the best mode. The proposed algorithm exploits the fact that a cost relationship holds among multi-directional predictions, so the results of backward and bi-directional predictions are predictable before the actual operations. After merge and forward predictions, if the expected results of backward and bi-directional predictions are worse than the results obtained up to that point, the additional backward and bi-directional searches for more accurate motion vectors are not performed. A simulation shows that the encoding time is reduced by about 15.18% with marginal degradation in compression efficiency.
    Download PDF (708K)
  • Yun-Gu LEE, Ki-Hoon LEE
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2545-2548
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This letter introduces a new reference frame, based on a video stabilization technique, to improve the performance of motion estimation and compensation in video coding. The proposed method synthesizes the new reference frame from the previous frame such that the new reference frame and the current frame have the same camera orientation. The only overhead transmitted per frame from the encoder to the decoder is three rotation angles about the x, y, and z axes. Because the new reference and current frames share the same camera orientation, the proposed method significantly improves motion estimation and compensation for video sequences with dynamic camera motion, by up to 0.98 dB, with negligible overhead.
    Download PDF (220K)
  • Peng YE, Zhiyong ZHAO, Fang LIU
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2549-2551
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    Among existing measures for evaluating image registration, registration consistency (RC) stands out as a widely used automatic one. However, the original RC neglects the influence of image intensity variation, which leads to several problems. This letter proposes a rectified registration consistency that takes both image intensity variation and the geometric transformation into consideration, so that the geometric transformation is evaluated more faithfully by reducing the influence of intensity variation. Experiments on real image pairs demonstrate the superiority of the proposed measure over the original RC.
    Download PDF (1021K)
  • Yukio ISHIHARA, Makio ISHIHARA
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2552-2553
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    We present a way to correct the light distortion of views looking into an aquarium. When we see fish in an aquarium, they appear closer and distorted because of light distortion. To correct the distortion, the light rays that travel inside the aquarium directly towards an observer should reach him/her after emerging from the aquarium. In this manuscript, those light rays are captured by a perspective camera at specific positions other than the observer's position, and we show that the captured images can be successfully merged into a single image that is not affected by light distortion.
    Download PDF (1913K)
  • Hanhoon PARK
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 9 Pages 2554-2558
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    In this letter, we propose an improved single-image haze removal algorithm that uses image segmentation. It effectively resolves two common problems of conventional algorithms based on the dark channel prior: halo artifacts and incorrect estimation of the atmospheric light. The process flow of our algorithm is as follows. First, the input hazy image is over-segmented. Then, the segmentation results are used to improve the conventional dark channel computation, which relies on fixed local patches, and to estimate the atmospheric light accurately. Finally, from the improved dark channel and atmospheric light, an accurate transmission map is computed, allowing us to recover a high-quality haze-free image.
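    The fixed-patch dark channel that the segmentation-based variant improves on can be sketched as a minimum over color channels followed by a minimum over a square window (the nested-list image layout and patch size here are illustrative assumptions):

```python
def dark_channel(img, patch=1):
    """Dark channel of an RGB image given as nested lists of (r, g, b)
    tuples: per pixel, the minimum over the three channels and over a
    (2*patch+1) x (2*patch+1) window clipped at the image border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                min(img[ny][nx])    # min over the three channels
                for ny in range(max(0, y - patch), min(h, y + patch + 1))
                for nx in range(max(0, x - patch), min(w, x + patch + 1)))
    return out
```

    The halo problem the letter targets comes precisely from these fixed square windows straddling depth edges; replacing the window with a segmentation region is the proposed remedy.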
    Download PDF (4094K)
  • Kaihong SHI, Zongqing LU, Qingyun SHE, Fei ZHOU, Qingmin LIAO
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 9 Pages 2559-2562
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    This paper presents a novel filter that avoids over-smoothing edges and corners and rectifies outliers in the flow field after each incremental computation step, a step that plays a key role in flow field estimation. The filter operates on distances between the spatio-temporal derivatives of the input image and distances in the velocity field, a principle that is better suited to filtering optical flow than existing nonlinear filters. Moreover, we regard the spatio-temporal derivatives as powerful new descriptors of different motion layers or regions and give a detailed explanation. Experimental results show that our proposed method achieves better performance.
    Download PDF (224K)
  • Shuang BAI, Jianjun HOU, Noboru OHNISHI
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 9 Pages 2563-2566
    Published: 2014
    Released: September 01, 2014
    JOURNALS FREE ACCESS
    The local descriptors Local Binary Pattern (LBP) and Scale Invariant Feature Transform (SIFT) are widely used in various computer vision applications, and they emphasize different aspects of image content. In this letter, we propose to combine them in sparse coding for categorizing scene images. First, we regularly extract LBP and SIFT features from training images. Then, a visual word codebook is constructed for each feature type. The obtained LBP and SIFT codebooks are used to create a two-dimensional table in which each entry corresponds to a pair of an LBP visual word and a SIFT visual word. Given an input image, the LBP and SIFT features extracted from the same positions of the image are encoded together based on sparse coding. After that, spatial max pooling is adopted to determine the image representation. The obtained image representations are converted into one-dimensional features and classified with SVM classifiers. Finally, we conduct extensive experiments on the Scene Categories 8 and MIT 67 Indoor Scene datasets to evaluate the proposed method. The results demonstrate that combining features in the proposed manner is effective for scene categorization.
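    A basic 8-neighbour LBP code, the first of the two descriptors being combined, can be computed as below (standard LBP, without the rotation-invariant or uniform-pattern variants the paper may actually use):

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code at (y, x): each neighbour whose
    intensity is >= the centre pixel contributes one bit, giving a value
    in [0, 255].  img is a 2-D list of intensities; (y, x) must be interior."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
            img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1], img[y][x - 1]]
    return sum(1 << i for i, p in enumerate(nbrs) if p >= c)
```

    Histograms of these codes over an image region form the LBP feature vector that is then paired with a SIFT visual word before sparse coding.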
    Download PDF (221K)