IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E94.D, Issue 11
Displaying 1-28 of 28 articles from this issue
Special Section on Information and Communication System Security
  • Yutaka MIYAKE
    2011 Volume E94.D Issue 11 Pages 2067-2068
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    Download PDF (59K)
  • Chia-Yin LEE, Zhi-Hui WANG, Lein HARN, Chin-Chen CHANG
    Article type: INVITED PAPER
    2011 Volume E94.D Issue 11 Pages 2069-2076
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Group key establishment is an important mechanism for constructing a common session key for group communications. Conventional group key establishment protocols use an online trusted key generation center (KGC) to transfer the group key to each participant in each session. However, this approach requires setting up a trusted server and incurs communication overhead. In this article, we address some security problems and drawbacks of existing group key establishment protocols. We also use the concept of a secret sharing scheme to propose a secure key transfer protocol that excludes impersonators from the group communication. Our protocol resists potential attacks and reduces the overhead of system implementation. In addition, this article compares the security and functionality of our proposed protocol with those of some recent protocols.
    Download PDF (223K)
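The secret-sharing primitive underlying this key transfer approach can be illustrated with a classic Shamir (t, n) scheme, in which a dealer (here playing the KGC's role) splits a group key so that any t shares reconstruct it. This is a generic sketch of the primitive, not the paper's actual protocol; the field prime and function names are our own.

```python
import random

# Illustrative Shamir (t, n) secret sharing over a prime field.
# P is a Mersenne prime, large enough for a toy example.
P = 2**127 - 1

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares suffice, which is what lets such schemes exclude impersonators who hold fewer than t shares.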
  • Heung-Youl YOUM
    Article type: INVITED PAPER
    2011 Volume E94.D Issue 11 Pages 2077-2086
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
As an increasing number of businesses and services depend on the Internet, protecting them against DDoS (Distributed Denial of Service) attacks becomes a critical issue. Traceback is used to discover technical information concerning the ingress points, paths, partial paths, or sources of the packet or packets causing a problematic network event. A traceback mechanism is a useful tool for identifying the source of a DDoS attack, which ultimately helps prevent such attacks. Numerous traceback mechanisms have been proposed by many researchers. In this paper, we analyze the existing traceback mechanisms, describe their common security capabilities, and evaluate them against various criteria. In addition, we identify typical applications of traceback mechanisms.
    Download PDF (1802K)
  • Masakatu MORII, Yosuke TODO
    Article type: INVITED PAPER
    2011 Volume E94.D Issue 11 Pages 2087-2094
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
In recent years, wireless LAN systems have come into wide use in campuses, offices, homes, and so on. The security of wireless LAN networks must be examined in order to protect data confidentiality and integrity. The IEEE Standards Association formulated several security protocols, for example, Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP). However, these protocols have vulnerabilities that undermine secure communication. In 2008, we proposed an effective key recovery attack against WEP, called the TeAM-OK attack. In this paper, we first present a different interpretation of the TeAM-OK attack against WEP and its relation to other attacks. Second, we present some existing attacks against WPA-TKIP and show that they are not executable in a realistic environment. We then propose an attack against WPA-TKIP that is executable in a realistic environment. This attack exploits a vulnerability in the implementation of the QoS packet processing feature of IEEE 802.11e: the receiver accepts a forged packet constructed as part of the attack regardless of whether IEEE 802.11e is enabled. This vulnerability removes the attacker's requirement that access points support IEEE 802.11e. We confirm that almost all wireless LAN implementations have this vulnerability. Therefore, almost all WPA-TKIP implementations cannot protect a system against the forgery attack in a realistic environment.
    Download PDF (507K)
  • SeongHan SHIN, Kazukuni KOBARA, Hideki IMAI
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2095-2110
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
An anonymous password-authenticated key exchange (PAKE) protocol is designed to provide both password-only authentication and client anonymity against a semi-honest server, who honestly follows the protocol. At INDOCRYPT 2008, Yang and Zhang [26] proposed a new anonymous PAKE (NAPAKE) protocol and its threshold variant (D-NAPAKE), which they claimed to be secure against insider attacks. In this paper, we first show that, contrary to their claim, the D-NAPAKE protocol [26] is completely insecure against insider attacks: a single legitimate client can freely impersonate any subgroup of clients (with threshold t > 1) to the server. After giving a security model that captures insider attacks, we propose a threshold anonymous PAKE protocol (called TAP++) that provides security against insider attacks. Moreover, we prove that the TAP++ protocol achieves semantic security of session keys against active attacks as well as insider attacks under the computational Diffie-Hellman assumption, and provides client anonymity against a semi-honest server. Finally, we offer several discussions: 1) we show another threshold anonymous PAKE protocol obtained by applying our rationale to the non-threshold anonymous PAKE (VEAP) protocol [23]; and 2) we give an efficiency comparison, security considerations, and implementation issues for the TAP++ protocol.
    Download PDF (696K)
  • Tetsuya IZU, Yumi SAKEMI, Masahiko TAKENAKA
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2111-2118
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
The EMV signature is a specification for authenticating credit and debit card data, based on the ISO/IEC 9796-2 signature scheme. At CRYPTO 2009, Coron, Naccache, Tibouchi, and Weinmann proposed a new forgery attack against the ISO/IEC 9796-2 signature (the CNTW attack) [2]. They also briefly discussed the possibility of applying the attack to EMV signatures, showed that the forging cost would be $45,000, and concluded that the attack could not forge them for operational reasons. However, their results were derived from a partial analysis under a single, typical condition; a security evaluation requires a full analysis, including a worst-case estimation. This paper presents a detailed cost estimation of the CNTW attack against the EMV signature. We construct an evaluation model and give cost estimates under all the conditions that Coron et al. did not consider. Our contribution is twofold. First, our detailed estimation reduces the forgery cost from $45,000 to $35,200 under the same condition as [2]. Second, we clarify that, under certain conditions, an EMV signature can be forged for less than $2,000. These facts show that the CNTW attack may be a realistic threat.
    Download PDF (198K)
  • Ping DU, Akihiro NAKAO
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2119-2128
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
In cloud computing, a cloud user pays in proportion to the amount of resources consumed (bandwidth, memory, CPU cycles, etc.). We posit that such a cloud computing system is vulnerable to DDoS (Distributed Denial-of-Service) attacks against quota: attackers can force a cloud user to pay more and more money by exhausting its quota without crippling its execution system or congesting links. In this paper, we address this issue and argue that a cloud should enable users to pay only for their admitted traffic. We design and prototype such a charging model on a CoreLab testbed infrastructure and show an example application.
    Download PDF (597K)
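The charging principle argued for here, billing only admitted traffic, can be sketched in a few lines: packets rejected by an admission filter consume no quota. The admission rule, rates, and data layout below are our illustrative assumptions, not the authors' CoreLab prototype.

```python
# Toy pay-only-for-admitted-traffic accounting. The admission rule
# (a simple per-source allowlist) stands in for whatever policy the
# cloud user deploys; only packets it admits are charged to the quota.
def bill(packets, admit, price_per_byte):
    """packets: iterable of (src, nbytes); returns (charge, dropped_bytes)."""
    charged = dropped = 0
    for src, nbytes in packets:
        if admit(src):
            charged += nbytes * price_per_byte
        else:
            dropped += nbytes  # attack traffic costs the user nothing
    return charged, dropped

allowlist = {"10.0.0.1", "10.0.0.2"}
traffic = [("10.0.0.1", 1000), ("6.6.6.6", 500000), ("10.0.0.2", 2000)]
charged, dropped = bill(traffic, lambda s: s in allowlist, 0.01)
```

Under a conventional pay-per-consumption model the 500,000 attack bytes would be billed to the user; here they are dropped without charge.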
  • Yuan-Cheng LAI, Ying-Dar LIN, Fan-Cheng WU, Tze-Yau HUANG, Frank C. LI ...
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2129-2138
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
A buffer overflow attack occurs when a program writes data outside the allocated memory in an attempt to invade a system. Approximately forty percent of all software vulnerabilities over the past several years are attributed to buffer overflow. Taint tracking is a technique to prevent buffer overflow attacks. Previous studies on taint tracking ran the victim's program on an emulator to dynamically instrument the code, tracking the propagation of tainted data in memory and checking whether malicious code is executed. The critical problem of this approach, however, is its heavy performance overhead: 60% of the overhead comes from the emulator, and the remaining 40% from dynamic instrumentation and taint information maintenance. This article proposes a new taint-style system called Embedded TaintTracker that eliminates the overhead of the emulator and dynamic instrumentation by compressing the checking mechanism into the operating system (OS) kernel and moving the instrumentation from runtime to compile time. Results show that the proposed system outperforms the previous work, TaintCheck, by at least 8 times in terms of throughput degradation, and is about 17.5 times faster than TaintCheck when browsing 1 KB web pages.
    Download PDF (1131K)
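The taint-tracking idea being accelerated here can be sketched in a few lines: values derived from untrusted input carry a taint bit that propagates through operations, and using a tainted value as a control-transfer target is blocked. This is a toy model of the general technique; the class and function names are ours, not Embedded TaintTracker's.

```python
# Toy dynamic taint tracking: a value wrapper carries a taint bit
# that propagates through arithmetic; a tainted control-transfer
# target (e.g. a return address overwritten by attacker input) is refused.
class Tainted:
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_taint = other.tainted if isinstance(other, Tainted) else False
        # Taint propagates: the result is tainted if either operand is.
        return Tainted(self.value + o_val, self.tainted or o_taint)

def jump(target):
    """Refuse to transfer control to an attacker-influenced address."""
    if target.tainted:
        raise RuntimeError("tainted jump target blocked")
    return target.value

user_input = Tainted(0x41414141, tainted=True)  # attacker-controlled bytes
ret_addr = Tainted(0x400000) + user_input       # "overwritten" return address
```

Real systems track taint per byte of memory and per register; the paper's contribution is doing this inside the kernel with compile-time instrumentation instead of an emulator.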
  • Nur Rohman ROSYID, Masayuki OHRUI, Hiroaki KIKUCHI, Pitikhate SOORAKSA ...
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2139-2149
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Overcoming the highly organized and coordinated malware threats posed by botnets on the Internet is becoming increasingly difficult. A honeypot is a powerful tool for observing and catching malware and virulent activity in Internet traffic. Because botnets use systematic attack methods, the sequences of malware downloaded by honeypots exhibit particular forms of coordinated patterns. This paper aims to discover new frequent sequential attack patterns in malware automatically. One problem is the difficulty of identifying particular patterns in full year-long logs, because the dataset is too large for individual investigation. This paper proposes the use of a data-mining algorithm to overcome this problem. We implement the PrefixSpan algorithm to analyze malware-attack logs and present some experimental results. Analysis of these results indicates that botnet attacks can be characterized either by the download times or by the source addresses of the bots. Finally, we use entropy analysis to reveal how frequent sequential patterns are involved in coordinated attacks.
    Download PDF (1509K)
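PrefixSpan, the algorithm the authors apply to the honeypot logs, mines frequent subsequences by recursively projecting the database on each frequent prefix. A compact sketch, treating each log as a simple sequence of items (without the paper's time or address attributes), might look like:

```python
def prefixspan(db, minsup):
    """Return frequent sequential patterns (as tuples -> support).
    db is a list of sequences (lists of hashable items)."""
    results = {}

    def mine(prefix, projected):
        # Count, per projected suffix, the items that can extend the prefix.
        counts = {}
        for seq in projected:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, sup in counts.items():
            if sup < minsup:
                continue
            newprefix = prefix + (item,)
            results[newprefix] = sup
            # Project: keep the suffix after the first occurrence of item.
            newdb = [seq[seq.index(item) + 1:]
                     for seq in projected if item in seq]
            mine(newprefix, newdb)

    mine((), db)
    return results
```

On download logs, each sequence would be the ordered list of malware hashes one honeypot observed; frequent patterns then expose coordinated download orders.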
  • Junji NAKAZATO, Jungsuk SONG, Masashi ETO, Daisuke INOUE, Koji NAKAO
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2150-2158
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
With the rapid development and proliferation of the Internet, cyber attacks are continually emerging and evolving. Malware, a generic term for computer viruses, worms, trojan horses, spyware, adware, and bots, is a particularly lethal security threat. To cope with this threat appropriately, we need to identify the tendencies and characteristics of malware and analyze its behavior, including its classification. In previous work on classification technologies, malware has been classified using data from dynamic analysis or code analysis; however, these works did not achieve efficient classification with high accuracy. In this paper, we propose a new classification method to cluster malware more effectively and more accurately. We first perform dynamic analysis to automatically obtain the execution traces of malware samples. We then classify the samples into clusters using behavioral characteristics derived from Windows API calls in parallel threads. We evaluated our classification method using 2,312 malware samples with distinct hash values. Samples that three antivirus products had divided into 1,221 groups were classified into 93 clusters, and 90% of the samples used in the experiment fell into at most 20 clusters. Moreover, 39 malware samples had characteristics different from all other samples, suggesting that they may be new types of malware. The kinds of Windows API calls confirmed that samples classified into the same cluster share the same characteristics. We also found that antivirus products give different names to malware samples that exhibit the same behavior.
    Download PDF (945K)
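The behavior-based grouping can be illustrated by representing each sample as an API-call frequency vector and greedily merging samples whose vectors are nearly collinear. This is a deliberately simplified stand-in for the paper's clustering method; the cosine threshold and the greedy single-pass strategy are our assumptions.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two sparse count vectors (Counters)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(traces, threshold=0.9):
    """Greedy single-pass clustering of API-call traces: each sample
    joins the first cluster whose centroid it resembles closely enough."""
    clusters = []  # list of (centroid Counter, member indices)
    for i, trace in enumerate(traces):
        vec = Counter(trace)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                members.append(i)
                centroid.update(vec)  # fold the sample into the centroid
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]
```

Two samples with identical behavior land in one cluster regardless of what names antivirus products give them, which is the paper's observation about naming inconsistency.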
  • Gregory BLANC, Youki KADOBAYASHI
    Article type: PAPER
    2011 Volume E94.D Issue 11 Pages 2159-2166
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Modern web applications incorporate many programmatic frameworks and APIs that push most of the application logic to the client side, while their content is the result of mashing up several resources from different origins. Such applications are threatened by attackers who inject, either directly or by leveraging a stepping-stone website, scripts that perform malicious operations. Web-script-based malware proliferation is becoming more and more industrialized, with the drawbacks and advantages that characterize such an approach: on the one hand, we witness many samples that exhibit the same characteristics, which makes them easy to detect, while on the other hand, professional developers continuously devise new attack techniques. While obfuscation is still a debated issue within the community, it is clear that, with new schemes being designed, this issue can no longer be ignored. Because many proposed countermeasures concede that they perform better on unobfuscated content, we propose a two-stage technique that first relieves the burden of obfuscation by emulating the deobfuscation stage, and then performs a static abstraction of the analyzed sample's functionality in order to reveal its intent. We support our proposal with evidence from applying our technique to real-life examples, discuss its runtime performance, and outline other possible applications of the proposed techniques in the areas of web crawling and script classification. Additionally, we argue that this approach can be generalized to other scripting languages similar to JavaScript.
    Download PDF (3452K)
  • Eun-Jun YOON, Kee-Young YOO
    Article type: LETTER
    2011 Volume E94.D Issue 11 Pages 2167-2170
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    In 2010, Guo and Zhang proposed a group key agreement protocol based on the chaotic hash function. This letter points out that Guo-Zhang's protocol is still vulnerable to off-line password guessing attacks, stolen-verifier attacks and reflection attacks.
    Download PDF (236K)
  • Fagen LI, Jiang DENG, Tsuyoshi TAKAGI
    Article type: LETTER
    2011 Volume E94.D Issue 11 Pages 2171-2172
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Authenticated encryption schemes are very useful for private and authenticated communication. In 2010, Rasslan and Youssef showed that Hwang et al.'s authenticated encryption scheme is not secure by presenting a message forgery attack. However, Rasslan and Youssef did not show how to resolve the security issue. In this letter, we give an improvement of Hwang et al.'s scheme. The improved scheme not only solves the security issue of the original scheme but also maintains its efficiency.
    Download PDF (56K)
Regular Section
  • Yuanwu LEI, Yong DOU, Jie ZHOU
    Article type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 11 Pages 2173-2183
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Many scientific applications require efficient variable-precision floating-point arithmetic. This paper presents a special-purpose Very Long Instruction Word (VLIW) architecture for variable-precision floating-point arithmetic (VV-Processor) on FPGA. The proposed processor uses a unified hardware structure, equipped with multiple custom variable-precision arithmetic units, to implement various variable-precision algebraic and transcendental functions. Performance is improved through the explicit parallelism of VLIW instructions and by dynamically varying the precision of intermediate computations. We take division and the exponential function as examples to illustrate the design of variable-precision elementary algorithms in the VV-Processor. Finally, we prototype a VV-Processor unit on a Xilinx XC6VLX760-2FF1760 FPGA chip. The experimental results show that one VV-Processor unit, running at 253 MHz, outperforms a software-based library running on an Intel Core i3 530 CPU at 2.93 GHz by a factor of 5X-37X for basic variable-precision arithmetic operations and elementary functions.
    Download PDF (2154K)
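The idea of dynamically varying the precision of intermediate computation can be mimicked in software with Python's standard decimal module: intermediates are computed with extra guard digits, then the result is rounded back to the requested precision. This is only a software analogue of what the VV-Processor does in hardware; the guard-digit counts below are our own choices.

```python
from decimal import Decimal, getcontext

def var_exp(x, digits):
    """exp(x) by Taylor series: intermediates use `digits` plus guard
    digits, and the result is rounded back to `digits` significant digits.
    Note: this mutates the global decimal context as a side effect."""
    getcontext().prec = digits + 10        # guard digits for intermediates
    x = Decimal(x)
    term, total, n = Decimal(1), Decimal(1), 1
    while abs(term) > Decimal(10) ** -(digits + 5):
        term *= x / n                      # next Taylor term x^n / n!
        total += term
        n += 1
    getcontext().prec = digits
    return +total                          # unary plus rounds to `digits`
```

Raising `digits` trades time for accuracy, which is exactly the knob a variable-precision arithmetic unit exposes in hardware.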
  • Cheng-Min LIN, Shyi-Shiou WU, Tse-Yi CHEN
    Article type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 11 Pages 2184-2190
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Universal Plug and Play (UPnP) enables automatic discovery and control of services available on devices connected to a Transmission Control Protocol/Internet Protocol (TCP/IP) network. Although many products are designed using UPnP, little attention has been given to the modeling and performance analysis of UPnP. This paper uses a Generalized Stochastic Petri Net (GSPN) framework to model and analyze the behavior of UPnP systems. The framework comprises UPnP modeling, reachability decomposition, GSPN analysis, and reward assignment. The Platform Independent Petri net Editor 2 (PIPE2) tool is then used to model and evaluate the controllers in terms of power consumption, system utilization, and network throughput. Quantitative analysis shows that the steady states in the operation and notification stages dominate system performance, and that the control point outperforms the device in power consumption while the device outperforms the control point in utilization. The framework and numerical results are useful for improving the quality of services provided by UPnP devices.
    Download PDF (706K)
  • Gyeongyeon KANG, Yoshiaki TANIGUCHI, Go HASEGAWA, Hirotaka NAKANO
    Article type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 11 Pages 2191-2200
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
In time division multiple access (TDMA)-based wireless mesh networks, interference relationships should be considered when time slots are assigned to links. In graph-theory-based time slot assignment algorithms, the protocol interference model is widely used to determine radio interference information, although it models actual radio interference inaccurately. On the other hand, the signal-to-interference-plus-noise-ratio (SINR) model gives more accurate interference relationships but is difficult to apply to time slot assignment algorithms, since the radio interference information cannot be determined before time slot assignment. In this paper, we investigate how the parameters of the protocol interference model affect the accuracy of the interference relationships it determines. Specifically, after assigning time slots to links based on the protocol interference model with various interference ratios, the model's major parameter, we compare the interference relationships among links under the protocol interference and SINR models. Through simulation experiments, we show that the accuracy of the protocol interference model is improved by up to 15% by adjusting its interference ratio.
    Download PDF (657K)
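The gap between the two interference models is easy to exhibit numerically: the protocol model checks only pairwise distances against an interference ratio, while the SINR model sums interference over all concurrent transmitters. In this sketch the path-loss exponent, transmit power, noise floor, SINR threshold, and interference ratio are all illustrative assumptions.

```python
# Toy comparison of the protocol and SINR interference models.
def path_gain(d, alpha=3.0):
    """Simple power-law path loss: received power falls as d^-alpha."""
    return d ** -alpha

def sinr_ok(d_signal, interferer_dists, p=1.0, noise=1e-9, beta=10.0):
    """SINR model: the link works if signal / (noise + sum of
    interference from ALL concurrent transmitters) >= beta."""
    signal = p * path_gain(d_signal)
    interference = sum(p * path_gain(d) for d in interferer_dists)
    return signal / (noise + interference) >= beta

def protocol_ok(d_signal, interferer_dists, ratio=1.5):
    """Protocol model: the link works if EVERY interferer is farther
    than ratio * transmission distance (ratio = interference ratio)."""
    return all(d > ratio * d_signal for d in interferer_dists)
```

With one distant interferer the models agree; with two moderately distant interferers the protocol model still admits the link while cumulative interference drives the SINR below threshold, which is the kind of mismatch the paper quantifies.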
  • Yuta NAKASHIMA, Ryosuke KANETO, Noboru BABAGUCHI
    Article type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 11 Pages 2201-2211
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Recently, a number of location-based services such as navigation and mobile advertising have been proposed. Such services require real-time user positions. Since the global positioning system (GPS), one of the best-known techniques for real-time positioning, is unsuitable for indoor use due to the unavailability of GPS signals, many indoor positioning systems (IPSs) using WLAN, radio frequency identification tags, and so forth have been proposed. However, most of them suffer from high installation costs. In this paper, we propose a novel IPS for real-time positioning that utilizes a digital audio watermarking technique. The proposed IPS first embeds watermarks into an audio signal to generate watermarked signals, each of which is then emitted from a corresponding speaker installed in the target environment. A user of the proposed IPS receives the watermarked signals with a mobile device equipped with a microphone, and the watermarks are detected in the received signal. For positioning, we model various effects on the watermarks due to propagation in the air, i.e., delays, attenuation, and diffraction. The model enables the proposed IPS to accurately locate the user based on the watermarks detected in the received signal. The proposed IPS can be deployed at low installation cost because it works with the off-the-shelf speakers already installed in most indoor environments, such as department stores, amusement arcades, and airports. We experimentally evaluate the positioning accuracy and show that the proposed IPS locates the user in a 6 m by 7.5 m room with a root mean squared error of 2.25 m on average. The results also demonstrate the potential of real-time positioning with the proposed IPS.
    Download PDF (1283K)
  • Chuanjun REN, Xiaomin JIA, Hongbing HUANG, Shiyao JIN
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2212-2218
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
The description and analysis of emergence in complex adaptive systems (CAS) has recently become a topic of great interest in systems research, and many ideas and methods have been proposed. This paper proposes a sign-based model of stigmergy, a mechanism widely used in complex systems. We take the "Sign" as the key notion for understanding stigmergy and give a definition of "Sign" that reveals its nature and exploits the significations and relationships it carries. We then develop a sign-based model of stigmergy that captures its essential characteristics. The basic architecture of stigmergy and its constituents are presented and discussed, and the syntax and operational semantics of stigmergy configurations are given. We illustrate the methodology of analyzing emergence in CAS using our model.
    Download PDF (1043K)
  • Won-Gyo JUNG, Sang-Sung PARK, Dong-Sik JANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2219-2226
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Whether a patent is registered is usually decided by the subjective judgment of patent examiners, who may decide according to their personal knowledge, background, and so on. In this paper, we propose a novel method for estimating patent registration based on patent data. The method estimates whether a patent will be registered by utilizing the objective past history of patent data instead of the subjective judgments of existing methods. The proposed method constructs an estimation model by applying a multivariate statistical algorithm. In the prediction model, the application date, activity index, IPC code, and similarity to refused registrations are used as inputs, and patent registration and rejection are used as outputs. We believe that our method will improve the reliability of patent registration because it achieves highly reliable estimation results from the past history of patent data, in contrast to most previous methods based on the subjective judgments of patent agents.
    Download PDF (1675K)
  • Danushka BOLLEGALA, Yutaka MATSUO, Mitsuru ISHIZUKA
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2227-2233
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    Measuring the relational similarity between word pairs is important in numerous natural language processing tasks such as solving word analogy questions, classifying noun-modifier relations and disambiguating word senses. We propose a supervised classification method to measure the similarity between semantic relations that exist between words in two word pairs. First, each pair of words is represented by a vector of automatically extracted lexical patterns. Then a binary Support Vector Machine is trained to recognize word pairs with similar semantic relations to a given word pair. To train and evaluate the proposed method, we use a benchmark dataset that contains 374 SAT multiple-choice word-analogy questions. To represent the relations that exist between two word pairs, we experiment with 11 different feature functions, including both symmetric and asymmetric feature functions. Our experimental results show that the proposed method outperforms several previously proposed relational similarity measures on this benchmark dataset, achieving an SAT score of 46.9.
    Download PDF (326K)
  • Liang SUN, Shinichi YOSHIDA, Yanchun LIANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2234-2243
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Support vector clustering (SVC), a recently developed unsupervised learning algorithm, has been successfully applied to solving many real-life data clustering problems. However, its effectiveness and advantages deteriorate when it is applied to complex real-world problems, e.g., those with a large proportion of noise points or with connected clusters. This paper proposes a hybrid algorithm based on support vectors and K-Means to improve the performance of SVC. A new SVC training method is developed based on analysis of a Gaussian kernel radius function, and an empirical study is conducted to guide better selection of the standard deviation of the Gaussian kernel. In the proposed algorithm, the outliers that increase problem complexity are first identified and removed by training a global SVC. The refined data set is then clustered by a kernel-based K-Means algorithm. Finally, several local SVCs are trained for the clusters, and each removed data point is labeled according to its distance from the local SVCs. Since it exploits the advantages of both SVC and K-Means, the proposed algorithm is capable of clustering compact and arbitrarily organized data sets and is more robust to outliers and connected clusters. Experiments are conducted on 2-D data sets generated by mixture models and on benchmark data sets taken from the UCI machine learning repository. The cluster error rate is lower than 3.0% for all the selected data sets. The results demonstrate that the proposed algorithm compares favorably with existing SVC algorithms.
    Download PDF (3088K)
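The three-stage pipeline (outlier removal, clustering of the refined set, relabeling of the removed points) can be sketched as follows. To stay self-contained we substitute distance-based outlier removal and plain K-Means for the paper's SVC components, so this shows only the pipeline's shape, not the actual algorithm.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-Means; returns the k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centroids[i]))
            groups[j].append(p)
        centroids = [mean(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def hybrid_cluster(points, k, outlier_frac=0.1):
    # Stage 1: drop the points farthest from the global mean
    # (stand-in for the paper's global-SVC outlier detection).
    m = mean(points)
    ranked = sorted(points, key=lambda p: dist2(p, m))
    core = ranked[:int(len(points) * (1 - outlier_frac))]
    # Stage 2: cluster the refined set.
    centroids = kmeans(core, k)
    # Stage 3: label every point, outliers included, by nearest centroid,
    # mirroring the paper's relabeling of removed points via local SVCs.
    label = lambda p: min(range(k), key=lambda i: dist2(p, centroids[i]))
    return {p: label(p) for p in points}
```

The point of the staging is that the clustering step never sees the outliers, yet every point still receives a label at the end.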
  • Takanori AYANO
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2244-2249
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Let $(X,Y)$ be an $\mathbb{R}^d \times \mathbb{R}$-valued random vector. In regression analysis one wants to estimate the regression function $m(x) := \mathbf{E}(Y|X=x)$ from a data set. In this paper we consider the convergence rate of the error for the k nearest neighbor estimators in the case that m is (p,C)-smooth. It is known that the minimax rate is unachievable by any k nearest neighbor estimator for p > 1.5 and d = 1. We generalize this result to any d ≥ 1. Throughout this paper, we assume that the data are independent and identically distributed, and as an error criterion we use the expected L2 error.
    Download PDF (142K)
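The estimator under study is the plain k-nearest-neighbor regression estimate: average the k responses whose covariates lie nearest to x. A minimal one-dimensional version (the paper treats general d ≥ 1) is:

```python
# Minimal k-NN regression estimate m_n(x): the average of the k
# responses Y whose covariates X lie closest to the query point x.
def knn_estimate(data, x, k):
    """data: list of (X, Y) pairs with scalar X; returns the k-NN
    estimate of the regression function at x."""
    nearest = sorted(data, key=lambda xy: abs(xy[0] - x))[:k]
    return sum(y for _, y in nearest) / k
```

The paper's question is how fast the expected L2 error of this estimator can decay when m is (p,C)-smooth, and for which p the minimax rate is out of reach.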
  • Masami AKAMINE, Jitendra AJMERA
    Article type: PAPER
    Subject area: Speech and Hearing
    2011 Volume E94.D Issue 11 Pages 2250-2258
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
This paper proposes likelihood smoothing techniques to improve decision-tree-based acoustic models, in which decision trees replace Gaussian mixture models to compute the observation likelihoods for a given HMM state in a speech recognition system. Decision trees have a number of advantageous properties, such as imposing no restrictions on the number or types of features and performing feature selection automatically. This paper describes basic configurations of decision-tree-based acoustic models and proposes two methods to improve the robustness of the basic model: DT mixture models and soft decisions for continuous features. Experimental results on the Aurora 2 speech database show that a system using decision trees offers state-of-the-art performance even without taking advantage of its full potential, and that soft decisions improve the performance of DT-based acoustic models with a 16.8% relative error rate reduction over hard decisions.
    Download PDF (497K)
  • Seung-Wan JUNG, Young Jin NAM, Dae-Wha SEO
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2011 Volume E94.D Issue 11 Pages 2259-2270
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
Recently, the need for multimedia devices such as mobile phones, digital TVs, PMPs, digital camcorders, and digital cameras has increased. These devices provide various services for multimedia file manipulation, such as multimedia content playback and multimedia file editing. In addition, digital TVs can copy recorded multimedia files to a portable USB disk. However, the Linux Ext3 file system employed by these devices has drawbacks: it requires a considerable amount of time and disk I/O to store large edited multimedia files, and it is difficult for typical PC users to access. Therefore, this paper describes the design and implementation of an amortized Ext3 with FWAE (Fast Writing-After-Editing) for WinXP-based multimedia applications. FWAE is a fast and efficient multimedia file editing/storing technique for Ext3 that exploits inode block pointer re-setting and shared data blocks by simply modifying metadata information. Experiments in this research show that the amortized Ext3 with FWAE for WinXP not only dramatically improves the write performance of Ext3, by 16 times on average over various types of edited multimedia files, but also notably reduces consumed disk space through data block sharing. It is also easy to use for typical PC users unfamiliar with the Linux OS.
    Download PDF (3621K)
  • Bin-Shyan JONG, Chi-Kang KAO, Juin-Ling TSENG, Tsong-Wuu LIN
    Article type: PAPER
    Subject area: Computer Graphics
    2011 Volume E94.D Issue 11 Pages 2271-2279
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    This paper introduces a new dynamic 3D mesh representation that provides progressive display of 3D animation and drastically reduces the amount of storage space required. The primary purpose of progressive display is to let viewers see the animation as quickly as possible, rather than waiting until all data has been downloaded; in other words, the method allows 3D animation to be transmitted and played simultaneously. Experiments show that a coarse 3D animation can be reconstructed with as little as 150KB of transferred data; with the sustained transmission of refinement operators, the resolution perceived by viewers approaches that of the original animation. The methods used in this study are based on a compression technique commonly used in 3D animation, clustered principal component analysis, which exploits the linear independence of principal components so that the animation can be stored using less data. This method can be coupled with streaming technology to reconstruct the animation through iterative updates: each principal component is a portion of the streaming data to be stored and transmitted after compression, as well as a refinement operator during the animation update process. This paper considers errors and rate-distortion optimization, and introduces weighted progressive transmission (WPT), which orders the refinement sequence by optimized principal components so that each refinement yields an increase in quality. In other words, for identical data size, this method allows each principal component to reduce the allowable error and provide the highest-quality 3D animation.
    Download PDF (2518K)
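    The PCA-based streaming idea above can be sketched in miniature: principal components of the mean-centred frame vectors are extracted one at a time (here by simple power iteration with deflation), and a frame is reconstructed from however many components have arrived so far, so each additional component refines the result. The function names and toy data are illustrative assumptions; the paper's clustered variant and WPT ordering are not reproduced here.

    ```python
    def power_iteration(cov, iters=200):
        """Leading eigenvector of a symmetric matrix via power iteration."""
        n = len(cov)
        v = [1.0] * n
        for _ in range(iters):
            w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        return v

    def pca_components(frames, k):
        """Top-k principal components of mean-centred animation frames,
        extracted one at a time by deflating the covariance matrix."""
        n = len(frames[0])
        mean = [sum(f[i] for f in frames) / len(frames) for i in range(n)]
        centred = [[f[i] - mean[i] for i in range(n)] for f in frames]
        cov = [[sum(c[i] * c[j] for c in centred) / len(frames)
                for j in range(n)] for i in range(n)]
        comps = []
        for _ in range(k):
            v = power_iteration(cov)
            lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n))
                      for i in range(n))
            comps.append(v)
            # deflate: remove the captured direction from the covariance
            for i in range(n):
                for j in range(n):
                    cov[i][j] -= lam * v[i] * v[j]
        return mean, comps

    def reconstruct(frame, mean, comps):
        """Rebuild one frame from however many components have streamed in."""
        centred = [frame[i] - mean[i] for i in range(len(frame))]
        out = list(mean)
        for v in comps:
            coeff = sum(centred[i] * v[i] for i in range(len(frame)))
            for i in range(len(frame)):
                out[i] += coeff * v[i]
        return out
    ```

    Reconstructing with a prefix of the component list mirrors progressive display: the error shrinks as each new component (refinement operator) arrives.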
  • Haoru SU, Sunshin AN
    Article type: LETTER
    Subject area: Information Network
    2011 Volume E94.D Issue 11 Pages 2280-2283
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    To solve the RFID reader collision problem, a Multi-dimensional Channel Management (MCM) mechanism is proposed. A reader selects the idle channel that is farthest from the channels already in use, and a backoff scheme is applied before channel acquisition. Simulation results show that MCM performs better than other mechanisms.
    Download PDF (370K)
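    The channel-selection rule in the abstract above can be sketched as follows, under the assumption that "maximum distance from the used channels" means maximizing the distance (in channel index) to the nearest in-use channel; the backoff parameters are likewise illustrative, not taken from the letter.

    ```python
    import random

    def select_channel(idle, used):
        """Pick the idle channel whose nearest in-use channel is farthest away,
        to minimise interference with readers on neighbouring channels."""
        if not used:
            return idle[0]
        return max(idle, key=lambda c: min(abs(c - u) for u in used))

    def acquire(idle, used, max_backoff_slots=8):
        """Back off a random number of slots before acquiring the channel."""
        slots = random.randint(0, max_backoff_slots - 1)
        # a real reader would wait `slots` slot-times here before sensing again
        return slots, select_channel(idle, used)
    ```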
  • Shi-Ze GUO, Zhe-Ming LU, Zhe CHEN, Hao LUO
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2284-2287
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    This Letter defines thirteen useful correlation measures for directed weighted complex network analysis. First, in-strength and out-strength are defined for each node in the directed weighted network. Then, one node-based strength-strength correlation measure and four arc-based strength-strength correlation measures are defined. In addition, considering that each node is associated with in-degree, out-degree, in-strength and out-strength, four node-based strength-degree correlation measures and four arc-based strength-degree correlation measures are defined. Finally, we use these measures to analyze the world trade network and the food web. The results demonstrate the effectiveness of the proposed measures for directed weighted networks.
    Download PDF (2027K)
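    The basic quantities in the abstract above are straightforward to compute; a minimal sketch of in-strength, out-strength, and one node-based strength-strength correlation (here taken as the Pearson correlation between each node's in- and out-strength, an assumption about the measure's exact form) follows. The edge-list representation is illustrative.

    ```python
    def strengths(edges, nodes):
        """In- and out-strength of each node in a directed weighted network.
        edges: iterable of (src, dst, weight) triples."""
        s_in = {n: 0.0 for n in nodes}
        s_out = {n: 0.0 for n in nodes}
        for u, v, w in edges:
            s_out[u] += w
            s_in[v] += w
        return s_in, s_out

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    def node_strength_correlation(edges, nodes):
        """Node-based in-strength / out-strength correlation."""
        s_in, s_out = strengths(edges, nodes)
        return pearson([s_in[n] for n in nodes], [s_out[n] for n in nodes])
    ```

    The arc-based variants would correlate the strengths of the endpoints of each arc instead of the two strengths of each node.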
  • Jeonghoon LEE, Yoon-Joon LEE
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 11 Pages 2288-2292
    Published: November 01, 2011
    Released on J-STAGE: November 01, 2011
    JOURNAL FREE ACCESS
    In processing stream data, time is one of the most significant factors, not only because the amount of data increases dramatically but also because the characteristics of the data vary over time. To learn effectively from stream data that evolves over time, it is necessary to detect concept drift. We present a window adaptation function on domain value (WAV) to determine the size of the windowed batch for stream-data learning algorithms, and a method to detect changes in data characteristics with a criterion function based on correlation. When our adaptation function is applied to a clustering task on a multi-stream data model, the learned synopsis of the windowed batches it determines demonstrates its effectiveness. Our criterion function, which uses correlation information of the value distribution over time, provides a reasonable threshold for detecting change between windowed batches.
    Download PDF (327K)
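    A correlation-based change criterion like the one in the abstract above can be sketched as follows: summarize each windowed batch by its value distribution (a normalized histogram) and flag drift when the correlation between consecutive batches' distributions drops below a threshold. The histogram binning, value range, and threshold are assumptions for illustration, not the letter's actual WAV function.

    ```python
    def histogram(batch, bins, lo, hi):
        """Value distribution of one windowed batch as a normalised histogram."""
        counts = [0] * bins
        width = (hi - lo) / bins
        for x in batch:
            i = min(int((x - lo) / width), bins - 1)  # clamp x == hi into last bin
            counts[i] += 1
        total = len(batch)
        return [c / total for c in counts]

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    def drift_detected(prev_batch, cur_batch,
                       bins=10, lo=0.0, hi=1.0, threshold=0.5):
        """Flag concept drift when the correlation between the value
        distributions of consecutive windowed batches falls below a threshold."""
        p = histogram(prev_batch, bins, lo, hi)
        q = histogram(cur_batch, bins, lo, hi)
        return pearson(p, q) < threshold
    ```

    Two batches drawn from the same distribution correlate strongly and pass; a shifted distribution correlates weakly and triggers the drift flag.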