-
Satoshi Fujita
2010 Volume E93.D Issue 12 Pages
3163
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
-
Kan WATANABE, Masaru FUKUSHI
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3164-3172
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Although volunteer computing (VC) systems rank among the most powerful computing platforms, they still face the problem of guaranteeing computational correctness, due to the inherent unreliability of volunteer participants. The spot-checking technique, which checks each participant by allocating spotter jobs, is a promising approach to validating computation results. Current spot-checking rests on the implicit assumption that participants can never distinguish spotter jobs from normal ones; however, generating such indistinguishable spotter jobs is still an open problem. Hence, in real VC environments where this implicit assumption does not always hold, spot-checking-based methods such as the well-known credibility-based voting can hardly guarantee computational correctness. In this paper, we generalize spot-checking by introducing the idea of imperfect checking. This generalization makes it possible to guarantee computational correctness even when spot-checking is not fully reliable and participants may distinguish spotter jobs. Moreover, we develop a generalized formula for the credibility, which enables credibility-based voting to utilize the check-by-voting technique. Simulation results show that check-by-voting improves the performance of credibility-based voting while guaranteeing the same level of computational correctness.
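The classic spot-checking mechanism assumed above can be sketched as follows. This is a hedged, minimal illustration of plain spot-checking with a toy credibility estimate, not the paper's generalized formula for imperfect checking; all names and the credibility expression are ours:

```python
import random

def spot_check_credibility(is_saboteur, sabotage_rate, n_checks, seed=0):
    """Estimate a worker's credibility from spotter jobs with known answers.

    Classic spot-checking: credibility grows with each passed check and
    drops to zero on a detected error. (The paper generalizes this to
    checks that are themselves unreliable.)
    """
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_checks):
        # A saboteur returns a wrong result with probability sabotage_rate;
        # a wrong result on a spotter job is always detected here.
        if is_saboteur and rng.random() < sabotage_rate:
            return 0.0  # caught: the worker is blacklisted
        passed += 1
    # Toy credibility estimate that approaches 1 as checks accumulate.
    return 1.0 - 1.0 / (passed + 2)
```

Under the paper's generalization, a passed check would presumably contribute less credibility, discounted by the probability that the participant recognized (and therefore answered honestly on) the spotter job.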
-
Masaki KOHANA, Shusuke OKAMOTO, Masaru KAMADA, Tatsuhiro YONEKURA
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3173-3180
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
We have investigated the bottleneck in web-based MORPG systems and proposed a load-distribution method using multiple web servers. This technique uses a dynamic data allocation method called the moving home. This paper describes the evaluation of our method using 4, 8, and 16 web servers. We evaluated it on both single-server and multi-server systems, and confirmed the effect of the moving home by comparing the multi-server system with and without it. Our experimental results show that the upper bound on the number of avatars in the eight-server system with the moving home reaches 380, whereas that in the single-server system is 200.
-
Takeru INOUE, Hiroshi ASAKURA, Yukio UEMATSU, Hiroshi SATO, Noriyuki T ...
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3181-3193
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Web APIs are offered on many Web sites for Ajax and mashups, but they have been developed independently, since no reusable database component created specifically for Web applications has been available. In this paper, we propose WapDB, a distributed database management system for the rapid development of Web applications. WapDB is designed on Atom, a set of Web API standards, and provides several of the key features required for Web applications, including efficient access control, an easy extension mechanism, and search and statistics capabilities. By introducing WapDB, developers are freed from the need to implement these features as well as Web API processing. In addition, its design fully follows the REST architectural style, which gives applications uniformity and scalability. We developed a proof-of-concept application with WapDB and found that it offers great cost effectiveness with no significant impact on performance; in our experiments, the development cost was reduced to less than half, with a response-time overhead (in use) of just a few milliseconds.
-
Kyong Hoon KIM, Wan Yeon LEE, Jong KIM, Rajkumar BUYYA
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3194-3201
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
The power-aware scheduling problem has become an important issue in cluster systems, not only because of the operational cost of electricity but also for system reliability. In this paper, we provide SLA-based scheduling algorithms for bag-of-tasks applications with deadline constraints on power-aware cluster systems. The scheduling objective is to minimize power consumption while the system maintains the service levels of users. As its service level, a bag-of-tasks application must finish all of its sub-tasks before the deadline. We provide power-aware scheduling algorithms for both time-shared and space-shared resource-sharing policies. The simulation results show that the proposed algorithms substantially reduce power consumption compared to static voltage schemes.
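The energy rationale behind such deadline-constrained scheduling can be illustrated with a small sketch (our own simplification, not the paper's algorithm): running at the lowest speed that still meets the deadline saves energy, because dynamic power grows superlinearly with speed.

```python
def minimal_speed(task_cycles, deadline):
    """Lowest constant speed (cycles/sec) at which one node finishes all
    sub-tasks of a bag-of-tasks application back-to-back by the deadline
    (space-shared, no preemption; illustrative only)."""
    return sum(task_cycles) / deadline

def dynamic_energy(speed, total_cycles):
    """Toy energy model: dynamic power ~ speed**3, execution time is
    work/speed, so energy for a fixed amount of work ~ speed**2 * work."""
    return speed ** 2 * total_cycles
```

For 600 cycles of work and a 3-second deadline, `minimal_speed` gives 200 cycles/s; finishing early at speed 400 would quadruple the dynamic energy in this model, which is why the paper's algorithms stretch execution toward the deadline.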
-
Min ZHU, Leibo LIU, Shouyi YIN, Chongyong YIN, Shaojun WEI
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3202-3210
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper introduces a cycle-accurate Simulator for a dynamically REconfigurable MUlti-media System, called SimREMUS. SimREMUS can be used either at the transaction level, which allows the modeling and simulation of higher-level hardware and embedded software, or at the register transfer level, if the dynamic system behavior is to be observed at the signal level. Trade-offs among a set of criteria frequently used to characterize the design of a reconfigurable computing system, such as granularity, programmability, and configurability, as well as the architecture of processing elements and routing modules, can be evaluated quickly. Moreover, a complete tool chain for SimREMUS, including a compiler and debugger, has been developed. SimREMUS can simulate 270k cycles per second for a million-gate SoC (System-on-a-Chip) and produced one H.264 1080p frame in 15 minutes, a task that might take days on VCS (platform: CPU: E5200 @ 2.5GHz, RAM: 2.0GB). Simulation showed that 1080p@30fps H.264 High Profile @ Level 4 decoding can be achieved at a 200MHz working frequency on the VLSI architecture of REMUS.
-
Xiaomin JIA, Pingjing LU, Caixia SUN, Minxuan ZHANG
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3211-3222
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Chip Multi-Processors (CMPs) have emerged as a mainstream architectural design alternative for high-performance parallel and distributed computing. Last Level Cache (LLC) management is critical to CMPs because off-chip accesses often incur a long latency. Due to its short access latency, good performance isolation, and easy scalability, the private cache is an attractive design alternative for the LLC of CMPs. This paper proposes program Behavior Identification-based Cache Sharing (BICS) for LLC management. BICS is based on a private cache organization for its shorter access latency. Meanwhile, BICS emulates a shared cache organization by allowing blocks evicted from one private LLC to be saved at peer LLCs, a technique called spilling. BICS identifies the cache behavior types of applications at runtime. When a cache block is evicted from a private LLC, the cache behavior characteristics of the local application are evaluated to determine whether the block should be spilled. Spilled blocks are allowed to replace valid blocks of the peer LLCs as long as the interference remains within a reasonable level. Experimental results using a full-system CMP simulator show that BICS improves the overall throughput by as much as 14.5%, 12.6%, 11.0% and 11.7% (on average 8.8%, 4.8%, 4.0% and 6.8%) over a private cache, a shared cache, the Utility-based Cache Partitioning (UCP) scheme, and the baseline spilling-based organization, Cooperative Caching (CC), respectively, on a 4-core CMP for the SPEC CPU2006 benchmarks.
-
Tongsheng GENG, Leibo LIU, Shouyi YIN, Min ZHU, Shaojun WEI
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3223-3231
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper proposes approaches to HW/SW (Hardware/Software) partitioning and parallelization of the computing-intensive tasks of the H.264 HiP (High Profile) decoding algorithm on an embedded coarse-grained reconfigurable multimedia system, called REMUS (REconfigurable MUltimedia System). Several techniques, such as MB (Macro-Block) based parallelization and unfixed sub-block operation, are utilized to speed up the decoding process, satisfying the requirements of real-time, high-quality H.264 applications. Tests show that the execution performance of MC (Motion Compensation), deblocking, and IDCT-IQ (Inverse Discrete Cosine Transform-Inverse Quantization) on REMUS is improved by 60%, 73%, and 88.5% in the typical case and 60%, 69%, and 88.5% in the worst case, respectively, compared with that on XPP PACT (a commercial reconfigurable processor). Compared with ASIC solutions, the performance of MC is improved by 70% and 74% in the typical and worst cases, respectively, while that of deblocking remains the same. As for IDCT-IQ, the performance is improved by 17% in both the typical and worst cases. Relying on the proposed techniques, 1080p@30fps H.264 HiP @ Level 4 decoding can be achieved on REMUS at a 200MHz working frequency.
-
Yi TANG, Junchen JIANG, Xiaofei WANG, Chengchen HU, Bin LIU, Zhijia CH ...
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3232-3242
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Multi-pattern matching is a key technique for implementing network security applications such as Network Intrusion Detection/Protection Systems (NIDSes/NIPSes), where every packet is inspected against tens of thousands of predefined attack signatures written in regular expressions (regexes). To this end, the Deterministic Finite Automaton (DFA) is widely used for multi-regex matching, but existing DFA-based approaches achieve high throughput at the expense of extremely high memory cost, and thus cannot be employed in devices such as high-speed routers and embedded systems where the available memory is quite limited. In this paper, we propose a parallel DFA architecture called Parallel DFA (PDFA), which takes advantage of the large number of concurrent flows to increase throughput with almost no extra memory cost. The basic idea is to selectively store the underlying DFA in memory modules that can be accessed in parallel. To explore its potential parallelism, we intensively study DFA-splitting schemes from both the state and the transition points of view. The performance of our approach in both the average and worst cases is analyzed, optimized, and evaluated with numerical results. The evaluation shows an average speedup of 100 times over the traditional DFA-based matching approach.
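The state-split idea behind PDFA can be sketched as follows. In this toy version (the partitioning, module layout, and names are ours, not the paper's scheme), each DFA state's transitions live in one of two "memory modules", so flows whose current states reside in different modules could be served by parallel memory accesses on real hardware:

```python
# Toy DFA over {'a','b'} accepting strings that contain "ab".
delta = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
accepting = {2}

# State-split scheme: each module stores the transitions of a subset of
# states; lookups for flows sitting in different modules are independent.
modules = [
    {k: v for k, v in delta.items() if k[0] in (0, 1)},  # module 0
    {k: v for k, v in delta.items() if k[0] == 2},       # module 1
]

def owner(state):
    """Which memory module holds this state's transitions."""
    return 0 if state in (0, 1) else 1

def match(flow):
    """Run one flow through the DFA, reading each transition from the
    module that owns the current state."""
    state = 0
    for ch in flow:
        state = modules[owner(state)][(state, ch)]
    return state in accepting
```

In hardware, two flows whose current states are owned by different modules can advance in the same cycle; the paper's contribution is choosing the split so that such conflicts are rare.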
-
Jacir Luiz BORDIM, Koji NAKANO
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3243-3250
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
It is known that wireless ad hoc networks employing omnidirectional communications suffer from poor network throughput due to inefficient spatial reuse. Although the use of directional communications is expected to provide significant improvements in this regard, the lack of efficient mechanisms to deal with the deafness and hidden-terminal problems makes it difficult to fully exploit its benefits. The main contribution of this work is to propose a Medium Access Control (MAC) scheme which aims to lessen the effects of the deafness and hidden-terminal problems in directional communications without precluding spatial reuse. The simulation results show that the proposed directional MAC provides significant throughput improvement over both the IEEE 802.11 DCF MAC protocol and other prominent directional MAC protocols in both linear and grid topologies.
-
Koichi NISHIDE, Hiroyuki KUBO, Ryoichi SHINKUMA, Tatsuro TAKAHASHI
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3251-3259
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
The demand for applications that assume bidirectional communication, such as voice telephony and peer-to-peer applications on wireless stations, has been increasing, and a rapid increase in uplink traffic from wireless terminals is expected in particular. However, in uplink WLANs the hidden-station problem remains to be solved. In this paper, we point out this hidden-station problem and clarify the following sources of unfairness between UDP and TCP uplink flows: 1) the effect on throughput of collisions caused by hidden-station relationships, and 2) the instability of throughput depending on the number of hidden stations. To solve these problems, we propose a virtual multi-AP access mechanism. Our mechanism first groups stations according to their hidden-station relationships and the type of transport protocol they use, and then assigns a virtually isolated channel to each group, which lets stations communicate as if stations in different groups were connected to different isolated APs (virtual APs: VAPs). It can mitigate the effect of collisions between hidden stations and eliminate the contention between UDP and TCP uplink flows. Its performance is shown through simulation.
-
Hiroyuki KUBO, Ryoichi SHINKUMA, Tatsuro TAKAHASHI
Article type: PAPER
2010 Volume E93.D Issue 12 Pages
3260-3268
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
The demand for data, audio-streaming, and video-streaming multicast services in large-scale networks has been increasing. Moreover, improved transmission speeds and mobile-device capabilities in wireless access networks enable people to use such services via their personal mobile devices. Peer-to-peer (P2P) architecture ensures scalability and robustness more easily and more economically than server-client architecture: as the number of nodes in a P2P network increases, the workload per node decreases, which also lessens the impact of node failures. However, mobile users incur a much larger psychological cost due to strict limitations on bandwidth, processing power, memory capacity, and battery life, and they want to minimize their contributions to these services. Therefore, the issue of how to reduce this psychological cost remains. In this paper, we consider how effective a social networking service is as a platform for mobile P2P multicast. We model users' cooperative behavior in mobile P2P multicast streaming, and propose a social-network-based P2P streaming architecture for mobile networks. We also measured the psychological forwarding cost of real users in mobile P2P multicast streaming through an emulation experiment, and verify through multi-agent simulation that our social-network-based mobile P2P multicast streaming improves service quality by reducing the psychological forwarding cost.
-
Kenji YAMADA, Tsuyoshi ITOKAWA, Teruaki KITASUKA, Masayoshi ARITSUGI
Article type: LETTER
2010 Volume E93.D Issue 12 Pages
3269-3272
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
In this letter, we reveal redundant control traffic in the Optimized Link State Routing protocol (OLSR) for MANETs. Topology control (TC) messages, which account for part of the control traffic in OLSR, are used to exchange topology information with other nodes. TC messages are generated and forwarded only by nodes that have been selected as multipoint relays (MPRs) by at least one neighbor node; we call these nodes TC message senders in this letter. One way to reduce the number of TC messages is to reduce the number of TC message senders. We describe a non-distributed algorithm that minimizes the number of TC message senders. Through simulation of static-node scenarios, we show that 18% to 37% of the TC message senders in RFC-based OLSR are redundant. By eliminating redundant TC message senders, the number of TC packets, each of which contains one or more TC messages, is also reduced by 19% to 46%. We also show that high-density scenarios exhibit more redundancy than low-density scenarios. This observation can help in designing cooperative MPR selection in OLSR.
-
Weiwei YANG, Yueming CAI, Lei WANG
Article type: LETTER
2010 Volume E93.D Issue 12 Pages
3273-3275
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
In this letter, we analyze the outage performance of decode-and-forward relay systems with an imperfect MRC receiver at the destination. Unlike conventional perfect MRC, the weight of each branch of the imperfect MRC receiver is only the conjugate of the channel impulse response, not normalized by the noise variance. We derive an exact closed-form expression for the outage probability over dissimilar Nakagami-m fading channels. Various numerical examples confirm the proposed analysis.
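As a hedged reconstruction of the weight definitions described above (the symbols are ours: $h_k$ is the $k$-th branch channel gain and $\sigma_k^2$ its noise variance), perfect and imperfect MRC differ only in the noise-variance normalization, and the imperfect combiner's post-combining SNR never exceeds the perfect one:

```latex
w_k^{\mathrm{perfect}} = \frac{h_k^{*}}{\sigma_k^{2}}, \qquad
w_k^{\mathrm{imperfect}} = h_k^{*}, \qquad
\gamma_{\mathrm{imp}}
  = \frac{\bigl(\sum_k |h_k|^{2}\bigr)^{2}}{\sum_k |h_k|^{2}\,\sigma_k^{2}}
  \;\le\;
  \sum_k \frac{|h_k|^{2}}{\sigma_k^{2}}
  = \gamma_{\mathrm{perf}}
```

The inequality follows from Cauchy-Schwarz, with equality when all branch noise variances are equal, which is why the outage analysis differs from the classical MRC case precisely over dissimilar channels.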
-
Kazuya YAMASHITA, Mitsuru SAKAI, Sadaki HIROSE, Yasuaki NISHITANI
Article type: PAPER
Subject area: Fundamentals of Information Systems
2010 Volume E93.D Issue 12 Pages
3276-3283
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
The Firing Squad Synchronization Problem (FSSP), one of the most well-known problems related to cellular automata, was originally proposed by Myhill in 1957 and became famous through the work of Moore [1]. The first solution to this problem was given by Minsky and McCarthy [2], and a minimal-time solution was given by Goto [3]. A significant amount of research has also dealt with variants of this problem. In this paper, out of theoretical interest, we extend this problem to number patterns on a seven-segment display. Some of these problems can be generalized as the FSSP for special trees called segment trees. The FSSP for segment trees can be reduced to an FSSP for a one-dimensional array divided evenly by joint cells, which we call a segment array. We give algorithms to solve the FSSPs for this segment array and the other number patterns, respectively. Moreover, we clarify the minimal time to solve these problems and show that no minimal-time solution exists.
-
Yoshiki YUNBE, Masayuki MIYAMA, Yoshio MATSUDA
Article type: PAPER
Subject area: Computer System
2010 Volume E93.D Issue 12 Pages
3284-3293
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper describes an affine motion estimation processor for real-time video segmentation. The processor estimates the dominant motion of a target region with affine parameters and is based on the Pseudo-M-estimator algorithm. Introducing an image division method and a binary weight method into the original algorithm reduces data traffic and hardware costs. A pixel sampling method is proposed that reduces the clock frequency by 50%. A pixel pipeline architecture and a frame overlap method double the throughput. The processor was prototyped on an FPGA, and its function and performance were verified. It was also implemented as an ASIC. The core size is 5.0×5.0mm² in a 0.18µm standard-cell process. The ASIC can process VGA 30fps video at a 120MHz clock frequency.
-
Yusuke TANAKA, Hideki ANDO
Article type: PAPER
Subject area: Computer System
2010 Volume E93.D Issue 12 Pages
3294-3305
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Two-step physical register deallocation (TSD) is an architectural scheme that enhances memory-level parallelism (MLP) by pre-executing instructions. Ideally, TSD allows exploitation of MLP under an unlimited number of physical registers, so that only a small register file is needed for MLP. In practice, however, the amount of exploitable MLP is limited, because there are cases where either 1) pre-execution is not performed, or 2) the timing of pre-execution is delayed. Both are due to data dependencies among the pre-executed instructions. This paper proposes the use of value prediction to solve these problems. Evaluation results using the SPECfp2000 benchmarks confirm that the proposed scheme, using value prediction to predict addresses, achieves IPC equivalent to the previous TSD scheme with a smaller register file. The register file size is reduced by 21%.
-
JianFeng CUI, HeungSeok CHAE
Article type: PAPER
Subject area: Software Engineering
2010 Volume E93.D Issue 12 Pages
3306-3320
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
In the field of software reengineering, many component identification approaches have been proposed for evolving legacy systems into component-based systems. Understanding the behavior of the various component identification approaches is the first important step toward employing them meaningfully for legacy system evolution; we therefore performed an empirical study of component identification technology, considering the approaches' similarity measures, clustering approaches, and stopping criteria. We propose a set of evaluation criteria and developed the tool CIETool to automate the process of component identification and evaluation. The experimental results revealed that the employed component identification approaches produced many components of poor quality; that is, many of the identified components were tightly coupled, weakly cohesive, or had inappropriate numbers of implementation classes and interface operations. Finally, we present an analysis of the component identification approaches according to the proposed evaluation criteria, which suggests that weaknesses of the clustering approaches are the major cause of the poor-quality components.
-
Soonghwan RO, Hanh Van NGUYEN, Woochul JUNG, Young Woo PAE, Jonathan P ...
Article type: PAPER
Subject area: Information Network
2010 Volume E93.D Issue 12 Pages
3321-3330
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
XVC (eXtensible Viewer Composition) is an in-vehicle user interface framework for telematics applications. It provides a document-oriented application model, which enables drivers to simultaneously make use of multiple information services, while maintaining satisfactory control of their vehicles. XVC is a new client model that makes use of the beneficial functions of in-vehicle navigation devices. This paper presents the results from usability tests performed on the XVC framework in order to evaluate how the XVC client affects drivers' navigation while using its functions. The evaluations are performed using the Advanced Automotive Simulator System located at KATECH (Korea Automobile Technology Institute). The advantages of the XVC framework are evaluated and compared to a non-XVC framework. The test results show that the XVC framework navigation device significantly reduces the scanning time needed while a driver obtains information from the navigation device.
-
Mi-Young PARK, Sang-Hwa CHUNG
Article type: PAPER
Subject area: Information Network
2010 Volume E93.D Issue 12 Pages
3331-3343
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
TCP's performance degrades significantly in multi-hop wireless networks because TCP retransmission timeouts (RTOs) are frequently triggered regardless of congestion, due to sudden delays and wireless transmission errors. Such congestion-unrelated RTOs lead to unnecessary TCP behavior, such as retransmitting all outstanding packets, which may still be waiting in the bottleneck queue, or sharply reducing the sending rate and exponentially increasing the back-off value even when the network is not congested. Since traditional TCP has no way to identify whether an RTO is triggered by congestion, it inevitably underutilizes the available bandwidth by blindly reducing its sending rate for every RTO. In this paper, we propose an algorithm that detects congestion-unrelated RTOs so that TCP can respond to RTOs differently according to their cause. When an RTO is triggered, our algorithm estimates the queue usage along the network path during the go-back-N retransmissions, and decides whether the RTO was triggered by congestion when the retransmissions end. If a congestion-unrelated RTO is detected, our algorithm prevents TCP from unnecessarily increasing its back-off value and needlessly reducing its sending rate. Through extensive simulation scenarios, we observed how frequently RTOs are triggered regardless of congestion, and evaluated our algorithm in terms of accuracy and goodput. The experimental results show that our algorithm has the highest accuracy among previous works and that the performance enhancement reaches up to 70% when our algorithm is applied to TCP.
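The intuition of the cause test can be sketched as follows. This is only an illustrative stand-in (names, the RTT-based proxy, and the threshold are ours): the paper estimates queue usage during the go-back-N retransmissions themselves, whereas this sketch infers an empty bottleneck queue from RTT samples sitting near the path's minimum RTT.

```python
def classify_rto(rtt_samples, min_rtt, max_rtt, threshold=0.5):
    """Rough proxy for an RTO cause test: if recent RTTs sit near the
    path's minimum RTT, the bottleneck queue is likely near-empty, so
    the RTO was probably caused by wireless loss or sudden delay rather
    than congestion. (Illustrative only; not the paper's estimator.)"""
    recent = sum(rtt_samples) / len(rtt_samples)
    # Normalized queue-usage estimate in [0, 1]: 0 = empty, 1 = full.
    queue_usage = (recent - min_rtt) / (max_rtt - min_rtt)
    return "non-congestion" if queue_usage < threshold else "congestion"
```

On a "non-congestion" verdict, the sender would skip the exponential back-off increase and keep its sending rate, which is exactly the reaction the paper's algorithm enables.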
-
Karolina NURZYNSKA, Mamoru KUBO, Ken-ichiro MURAMOTO
Article type: PAPER
Subject area: Pattern Recognition
2010 Volume E93.D Issue 12 Pages
3344-3351
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This study presents three image-processing systems for classifying snow particles into snowflakes and graupel. All of them are based on feature classification, and as a novelty, multiple features are exploited in every case. Additionally, each system is characterized by a different data flow. To compare their performance, we consider not only various features but also different classifiers. The best results are achieved when the snowflake discrimination method is applied before the statistical classifier, where the correct classification ratio reaches 94%; in the other cases the best results are around 88%.
-
Xu YANG, HuiLin XIONG, Xin YANG
Article type: PAPER
Subject area: Pattern Recognition
2010 Volume E93.D Issue 12 Pages
3352-3358
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
The performance of kernel-based learning algorithms such as SVM depends heavily on the proper choice of the kernel parameter. It is desirable for kernel machines to operate with the optimal kernel parameter, one that adapts well to the input data and the learning task. In this paper, we present a novel method for selecting the Gaussian kernel parameter by maximizing a class separability criterion, which measures the data distribution in the kernel-induced feature space and is invariant under any non-singular linear transformation. The experimental results show that both the class separability of the data in the kernel-induced feature space and the classification performance of the SVM classifier are improved by using the optimal kernel parameter.
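A hedged sketch of this selection strategy follows. The paper's criterion is scatter-matrix-based and invariant under non-singular linear transforms; the proxy below merely contrasts average within-class and between-class kernel similarity, and all names are ours:

```python
import math

def gaussian_k(x, y, sigma):
    """Gaussian (RBF) kernel between two points."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def separability(X, labels, sigma):
    """Toy class-separability criterion in the kernel-induced space:
    mean within-class similarity minus mean between-class similarity."""
    within = between = 0.0
    nw = nb = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            k = gaussian_k(X[i], X[j], sigma)
            if labels[i] == labels[j]:
                within += k; nw += 1
            else:
                between += k; nb += 1
    return within / nw - between / nb

def best_sigma(X, labels, candidates):
    """Pick the kernel width that maximizes the criterion."""
    return max(candidates, key=lambda s: separability(X, labels, s))
```

Too small a width makes all points dissimilar, too large a width makes them all similar; the criterion peaks in between, which is the same qualitative behavior the paper exploits with its invariant criterion.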
-
Kazunori KOMATANI, Yuichiro FUKUBAYASHI, Satoshi IKEDA, Tetsuya OGATA, ...
Article type: PAPER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 12 Pages
3359-3367
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
We address the issue of out-of-grammar (OOG) utterances in spoken dialogue systems by generating help messages. Help-message generation for OOG utterances is challenging because language understanding based on automatic speech recognition (ASR) of OOG utterances is usually erroneous; important words are often misrecognized or missing from such utterances. Our grammar verification method uses a weighted finite-state transducer to accurately identify the grammar rule that the user intended to use for the utterance, even if important words are missing from the ASR results. We then use a ranking algorithm, RankBoost, to rank help-message candidates in order of likely usefulness. Its features include the grammar verification results and an utterance history representing the user's experience.
-
Yusuke TAKANO, Kazuhiro KONDO
Article type: PAPER
Subject area: Speech and Hearing
2010 Volume E93.D Issue 12 Pages
3368-3376
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
We attempted to estimate subjective scores of the Japanese Diagnostic Rhyme Test (DRT), a two-to-one forced-selection speech intelligibility test. We used automatic speech recognizers with language models that force one of the words in the word pair, mimicking the human recognition process of the DRT. Initial testing was done using speaker-independent models, which showed significantly lower scores than the subjective ones. The acoustic models were then adapted to each of the speakers in the corpus, and subsequently adapted to noise at a specified SNR. Three types of noise were tested: white noise, multi-talker (babble) noise, and pseudo-speech noise. The match between subjective and estimated scores improved significantly with the noise-adapted models, compared to the speaker-independent and speaker-adapted models, when the adapted and tested noise levels matched. When the SNR conditions did not match, however, the recognition scores degraded, especially when the tested SNR was higher than the adapted noise level. Accordingly, we adapted the models to mixed levels of noise, i.e., multi-condition training. The adapted models then showed relatively high intelligibility, matching subjective intelligibility performance over all noise levels. The correlation between subjective and estimated intelligibility scores increased to 0.94 with multi-talker noise, 0.93 with white noise, and 0.89 with pseudo-speech noise, while the root mean square error (RMSE) was reduced from more than 40 to 13.10, 13.05, and 16.06, respectively.
-
Dong YANG, Paul DIXON, Sadaoki FURUI
Article type: PAPER
Subject area: Natural Language Processing
2010 Volume E93.D Issue 12 Pages
3377-3383
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper proposes a new hybrid method for machine transliteration. Our method combines a newly proposed two-step conditional random field (CRF) method with the well-known joint source-channel model (JSCM). The contributions of this paper are as follows: (1) A two-step CRF model for machine transliteration is proposed; the first CRF segments the character string of an input word into chunks, and the second converts each chunk into a character in the target language. (2) A joint optimization method for the two-step CRF model and a fast decoding algorithm are also proposed. Our experiments show that the joint optimization of the two-step CRF model works as well as or even better than the JSCM, and that the fast decoding algorithm significantly decreases the decoding time. (3) A rapid development method for the JSCM based on a weighted finite-state transducer (WFST) framework is proposed. (4) The combination of the proposed two-step CRF model and the JSCM outperforms the state-of-the-art result in terms of top-1 accuracy.
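The two-step decomposition can be illustrated with a toy pipeline. This is a greedy table-lookup stand-in, not the paper's model: real CRFs score all segmentations and conversions jointly with learned features, and the tiny lexicon below is invented for illustration:

```python
# Hypothetical chunk lexicon for step 2 (chunk -> target character).
SEGMENTS = {"ka": "カ", "ri": "リ", "na": "ナ"}

def transliterate(word):
    """Step 1: segment the input spelling into chunks (here, by greedy
    longest match). Step 2: map each chunk to a target character (here,
    by dictionary lookup). The paper replaces both steps with CRFs."""
    out, i = [], 0
    while i < len(word):
        for ln in (3, 2, 1):  # try longer chunks first
            chunk = word[i:i + ln]
            if len(chunk) == ln and chunk in SEGMENTS:
                out.append(SEGMENTS[chunk])
                i += ln
                break
        else:
            raise ValueError("unsegmentable suffix: " + word[i:])
    return "".join(out)
```

Because segmentation errors in step 1 propagate into step 2, the paper's joint optimization of the two CRFs matters: the greedy pipeline above has no way to revise a bad chunk boundary.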
-
Takahiro OTA, Hiroyoshi MORITA
Article type: PAPER
Subject area: Biological Engineering
2010 Volume E93.D Issue 12 Pages
3384-3391
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
An antidictionary is particularly useful for data compression, and on-line lossless compression algorithms for electrocardiograms (ECGs) using antidictionaries have been proposed. They work in real time with constant memory and give better compression ratios than traditional lossless data compression algorithms, but they handle only ECG data over a binary alphabet. This paper proposes on-line ECG lossless compression for data over a finite alphabet. The proposed algorithm not only gives better compression ratios than those algorithms but also uses less computational space. Moreover, it works in real time. Its effectiveness is demonstrated by simulation results.
-
Dongsu KANG, CheeYang SONG, Doo-Kwon BAIK
Article type: LETTER
Subject area: Software System
2010 Volume E93.D Issue 12 Pages
3392-3395
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper proposes a feature-based service identification method that improves productivity by using feature relationships, where a feature expresses service properties. We define a distance measure between features that considers their selective (node) and relational (edge) attributes, and present the concept of a service boundary. An evaluation of the proposed method shows that it achieves higher productivity than existing methods.
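A distance that combines node and edge attributes could, in highly simplified form, look like the sketch below. The attribute encoding (sets), the Jaccard-style distance, and the 0.5/0.5 weights are all assumptions for illustration; the paper's actual measure is not reproduced here.

```python
def feature_distance(f1, f2, w_node=0.5, w_edge=0.5):
    """f = (set of selective/node attributes, set of related feature names).
    Weighted sum of a node-attribute distance and an edge-attribute distance."""
    nodes1, edges1 = f1
    nodes2, edges2 = f2

    def jaccard_dist(a, b):
        # 1 - |intersection| / |union|; empty sets are considered identical.
        return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

    return w_node * jaccard_dist(nodes1, nodes2) + w_edge * jaccard_dist(edges1, edges2)

print(feature_distance(({"pay"}, {"cart"}), ({"ship"}, {"stock"})))  # 1.0
```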
-
Eun-Jun YOON, Muhammad KHURRAM KHAN, Kee-Young YOO
Article type: LETTER
Subject area: Information Network
2010 Volume E93.D Issue 12 Pages
3396-3399
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
In 2009, Jeong et al. proposed a secure binding encryption scheme and an efficient secret broadcast scheme. This paper points out that the schemes contain errors and, contrary to their authors' claims, cannot operate correctly. In addition, this paper proposes improvements to Jeong et al.'s schemes that withstand the presented attacks.
-
Eun-Jun YOON, Muhammad Khurram KHAN, Kee-Young YOO
Article type: LETTER
Subject area: Information Network
2010 Volume E93.D Issue 12 Pages
3400-3402
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Quite recently [IEEE Commun. Lett., Vol.14, No.1, 2010], Choi et al. proposed a handover authentication scheme using credentials based on chameleon hashing, claiming that it provides several security features, including Perfect Forward/Backward Secrecy (PFS/PBS). This paper examines the security of the scheme and shows that, contrary to these claims, it still fails to achieve PFS/PBS.
-
Kyungbaek KIM
Article type: LETTER
Subject area: Dependable Computing
2010 Volume E93.D Issue 12 Pages
3403-3406
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
When P2P systems are used for data-sensitive applications, data availability becomes an important issue. Availability-based replication using individual node availability is the most popular method for maintaining high data availability efficiently. However, since individual node availability is derived from each node's lifetime information, availability-based replication may select useless replicas. In this paper, we explore a relative MTTF (Mean Time To Failure)-based incentive scheme for more efficient availability-based replication. The relative MTTF is used to identify guaranteed replicas, which receive an incentive node availability; these replicas help reduce the data traffic and the number of replicas without sacrificing the target data availability. Results from trace-driven simulations show that replication using our relative MTTF-based incentive scheme achieves the same target data availability with 41% less data traffic and 24% fewer replicas.
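The selection logic behind availability-based replication with an MTTF filter can be sketched as follows. The node data, the greedy ordering, and the use of the mean MTTF as the "relative" threshold are assumptions for illustration, not details from the paper; the availability formula 1 - prod(1 - a_i) is the standard one for independent replicas.

```python
def select_replicas(nodes, target=0.99):
    """nodes: list of (name, availability, mttf) tuples.  Prefer nodes whose
    MTTF is high relative to the system average ("guaranteed" candidates),
    then greedily add replicas until 1 - prod(1 - a_i) reaches the target."""
    avg_mttf = sum(m for _, _, m in nodes) / len(nodes)
    guaranteed = [n for n in nodes if n[2] >= avg_mttf]
    others = [n for n in nodes if n[2] < avg_mttf]
    # Within each group, try the most available nodes first.
    ordered = sorted(guaranteed, key=lambda n: -n[1]) + sorted(others, key=lambda n: -n[1])
    chosen, p_all_fail = [], 1.0
    for name, avail, _ in ordered:
        chosen.append(name)
        p_all_fail *= (1.0 - avail)
        if 1.0 - p_all_fail >= target:
            break
    return chosen, 1.0 - p_all_fail
```

Filtering on relative MTTF first is what lets the scheme reach the target availability with fewer replicas: long-lived nodes contribute their availability more reliably than short-lived ones with the same measured availability.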
-
Kazuteru NAMBA, Kengo NAKASHIMA, Hideo ITO
Article type: LETTER
Subject area: Dependable Computing
2010 Volume E93.D Issue 12 Pages
3407-3409
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This paper presents a construction of a single-event-upset (SEU) tolerant reset-set (RS) flip-flop (FF). The proposed RS-FF consists of four identical parts that form an interlocking feedback loop, just like the DICE cell. The area and average power consumption of the proposed RS-FF are 1.10 ∼ 1.48 and 1.20 ∼ 1.63 times smaller than those of conventional SEU-tolerant RS-FFs, respectively.
-
Hao BAI, Chang-zhen HU, Gang ZHANG, Xiao-chuan JING, Ning LI
Article type: LETTER
Subject area: Dependable Computing
2010 Volume E93.D Issue 12 Pages
3410-3413
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
This letter proposes a novel binary vulnerability analyzer for executable programs based on the Hidden Markov Model (HMM). A vulnerability instruction library (VIL) is first constructed by collecting binary frames located through double-precision analysis. Executable programs are then converted into structured code sequences using the VIL. These code sequences are essentially context-sensitive, which allows them to be modeled by an HMM. Finally, the HMM-based vulnerability analyzer is built to recognize potential vulnerabilities in executable programs. Experimental results show that the proposed approach achieves lower false positive/negative rates than the latest static analyzers.
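The core scoring step of an HMM-based sequence analyzer is the forward algorithm, sketched below on a toy two-state model. The states, observation symbols, and probabilities are invented for illustration; the letter's trained model and VIL-derived sequences are not reproduced.

```python
# Toy HMM: a "vuln" state that tends to emit unchecked copy operations.
INIT = {"safe": 0.5, "vuln": 0.5}
TRANS = {"safe": {"safe": 0.9, "vuln": 0.1}, "vuln": {"safe": 0.2, "vuln": 0.8}}
EMIT = {"safe": {"check": 0.7, "copy": 0.3}, "vuln": {"copy": 0.9, "check": 0.1}}

def forward_likelihood(obs, init=INIT, trans=TRANS, emit=EMIT):
    """P(obs | HMM) via the forward algorithm (sum over all state paths)."""
    alpha = {s: init[s] * emit[s].get(obs[0], 0.0) for s in init}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t].get(o, 0.0)
                 for t in init}
    return sum(alpha.values())

# A copy-heavy sequence scores higher under this vulnerability-flavored model.
print(forward_likelihood(["copy", "copy"]), forward_likelihood(["check", "check"]))
```

An analyzer of this kind would flag code sequences whose likelihood under the vulnerability model exceeds a threshold (or a competing "benign" model's score).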
-
Cheng LU, Mrinal MANDAL
Article type: LETTER
Subject area: Biological Engineering
2010 Volume E93.D Issue 12 Pages
3414-3417
Published: December 01, 2010
Released on J-STAGE: December 01, 2010
JOURNAL
FREE ACCESS
Accurate registration is crucial for medical image analysis. In this letter, we propose an improved Demons technique (IDT) for medical image registration. The IDT improves registration quality by using orthogonal gradient information. The advantage of the proposed IDT is assessed using 14 medical image pairs. Experimental results show that the proposed technique provides about 8% improvement over existing Demons-based techniques in terms of registration accuracy.
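For context, the classic Demons force that such techniques build on computes a per-pixel displacement from the intensity difference and the fixed image's gradient. The sketch below shows that standard update only; the letter's orthogonal-gradient refinement is not reproduced.

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-9):
    """One Demons step: u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2).
    Returns (ux, uy), the per-pixel displacement components."""
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    gy, gx = np.gradient(fixed)           # image gradients of the fixed image
    diff = moving - fixed                 # intensity mismatch driving the force
    denom = gx**2 + gy**2 + diff**2 + eps # eps avoids division by zero
    return diff * gx / denom, diff * gy / denom

# A uniformly brighter "moving" image over a horizontal ramp yields a
# constant horizontal force and zero vertical force.
fixed = np.tile(np.arange(8.0), (8, 1))
ux, uy = demons_force(fixed, fixed + 1.0)
```

In a full registration loop, this force field would be smoothed (e.g. with a Gaussian) and accumulated over iterations until the images align.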