IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E97.D , Issue 7
Showing 1-35 articles out of 35 articles from the selected issue
Special Section on Cloud and Services Computing
  • Yohei MURAKAMI
    2014 Volume E97.D Issue 7 Pages 1699
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Download PDF (69K)
  • Yu GU, Chuanyi LIU, Dongsheng WANG
    Type: PAPER
    2014 Volume E97.D Issue 7 Pages 1700-1708
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Cloud computing has risen as a popular new service paradigm with typical advantages such as ease of use, unlimited resources, and a pay-as-you-go pricing model. Cloud resources are more flexible and cost-effective than private or colocation resources, and thus more suitable for storing the outdated backup data that are infrequently accessed by continuous data protection (CDP) systems. However, while the cloud achieves low cost, it may slow down the recovery procedure due to its low bandwidth and high latency. In this paper, a novel block-level CDP system architecture, MYCDP, is proposed to utilize cloud resources as the back-end storage. Unlike traditional delta-encoding-based CDP approaches, which must traverse all the dependent versions and decode the recovery point, MYCDP adopts a data deduplication mechanism to eliminate data redundancy between all versions of all blocks and constructs a version index for all versions of the protected storage, so it can use a query-and-fetch process to recover version data. With a specific version index data structure and a disk/memory hybrid cache module, MYCDP reduces storage space consumption and data transfer between the local site and the cloud. It also supports deletion of arbitrary versions without the risk of invalidating other versions. Experimental results demonstrate that MYCDP achieves much lower cost than traditional local CDP approaches, while retaining almost the same recovery speed as the local deduplication approach in most recovery cases. Furthermore, MYCDP obtains both faster recovery and lower cost than cloud-based delta-encoding CDP approaches for all recovery points, and it yields greater benefits when protecting multiple systems together.
    Download PDF (545K)
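The deduplicated version store at the heart of the MYCDP abstract above can be illustrated with a toy sketch. This is not the authors' implementation: the class name, the SHA-256 fingerprinting, and the in-memory dictionaries are assumptions made for illustration. The point is that recovery is a single query-and-fetch rather than a delta-chain decode, and that deleting one version cannot invalidate another.

```python
import hashlib

class DedupVersionStore:
    """Toy sketch of a deduplicated block-version store (not MYCDP itself)."""

    def __init__(self):
        self.chunks = {}         # fingerprint -> block data (the dedup store)
        self.version_index = {}  # (block_id, version) -> fingerprint

    def write(self, block_id, version, data):
        fp = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(fp, data)  # identical data is stored only once
        self.version_index[(block_id, version)] = fp

    def recover(self, block_id, version):
        # Query-and-fetch: one index lookup, one chunk fetch,
        # no delta-chain decoding.
        return self.chunks[self.version_index[(block_id, version)]]

    def delete_version(self, block_id, version):
        # Deleting a version never invalidates others: a chunk is removed
        # only when no remaining version references it (O(n) scan here,
        # reference counting in a real system).
        fp = self.version_index.pop((block_id, version))
        if fp not in self.version_index.values():
            self.chunks.pop(fp)
```

Writing the same block content under two versions consumes only one chunk, and either version can be deleted first without breaking the other.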
  • Shintaro YAMAMOTO, Shinsuke MATSUMOTO, Sachio SAIKI, Masahide NAKAMURA
    Type: PAPER
    2014 Volume E97.D Issue 7 Pages 1709-1718
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Smart city services are implemented using various data collected from houses and infrastructure within a city. As the volume and variety of smart city data become huge, individual services have suffered from expensive computation and long processing times. In order to reduce this effort and time, this paper proposes the concept of Materialized View as a Service (MVaaS). Using MVaaS, every application can easily and dynamically construct its own materialized view, in which the raw data is converted and stored in a convenient format with appropriate granularity. Thus, once the view is constructed, the application can quickly access the necessary data. In this paper, we design a framework of MVaaS specifically for large-scale house logs managed in a smart-city data platform. In the framework, each application first specifies how the raw data should be filtered, grouped, and aggregated. For a given data specification, MVaaS dynamically constructs a MapReduce batch program that converts the raw data into the desired view. The batch is then executed on Hadoop, and the resultant view is stored in HBase. We present case studies using house logs in a real home network system. We also conduct an experimental evaluation to compare the response time between cases with and without MVaaS.
    Download PDF (1115K)
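The filter/group/aggregate view specification described in the MVaaS abstract can be sketched in miniature. The function name and record layout below are hypothetical stand-ins: a real MVaaS deployment compiles such a specification into a MapReduce job on Hadoop and stores the view in HBase, whereas this sketch just runs the same map (filter and key extraction) and reduce (aggregation) shape in memory.

```python
from collections import defaultdict

def materialize_view(raw_log, filter_fn, group_key, aggregate_fn):
    """Build a materialized view: filter -> group -> aggregate.
    Toy in-memory version of the shape MVaaS generates as a MapReduce job."""
    groups = defaultdict(list)
    for record in raw_log:
        if filter_fn(record):               # map side: filter, emit (key, value)
            groups[group_key(record)].append(record)
    return {k: aggregate_fn(v) for k, v in groups.items()}  # reduce side

# Hypothetical house-log records: (device, room, watts)
log = [("aircon", "living", 500), ("light", "living", 60),
       ("aircon", "bed", 400), ("light", "bed", 30)]
view = materialize_view(log,
                        filter_fn=lambda r: r[2] >= 50,    # drop tiny loads
                        group_key=lambda r: r[1],          # per room
                        aggregate_fn=lambda rs: sum(r[2] for r in rs))
```

Once the view is materialized, the application reads the aggregated values directly instead of rescanning the raw log.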
  • Xiaoyun LIU, Gongjun YAN, Danda B. RAWAT, Shugang DENG
    Type: PAPER
    2014 Volume E97.D Issue 7 Pages 1719-1726
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    The past decade has witnessed a growing interest in vehicular networking. Initially motivated by traffic safety, vehicles equipped with computing, communication, and sensing capabilities will be organized into ubiquitous and pervasive networks with a significant Internet presence while on the move. Large amounts of data can be generated, collected, and processed on vehicular networks. Big data on vehicular networks includes useful and sensitive information that could be exploited by malicious intruders. However, intrusion detection in vehicular networks is challenging because of their unique features: short-range wireless communication, a large number of nodes, and high node mobility. Traditional methods struggle to detect intrusions in such a sophisticated environment, especially when the attack pattern is unknown; as a result, they can yield unacceptable false negative error rates. As a novel attempt, the main goal of this research is to apply data mining methodology to recognize known attacks and uncover unknown attacks in vehicular networks. We are the first to attempt to adapt data mining methods for intrusion detection in vehicular networks. The main contributions include: 1) the design of a decentralized vehicular network that provides scalable communication and data availability about network status; 2) the application of two data mining models to show the feasibility of an automated intrusion detection system in vehicular networks; and 3) the discovery of detection patterns for unknown intrusions.
    Download PDF (1863K)
  • Han-Peng JIANG, Ming-Lung WENG, Wei-Mei CHEN
    Type: LETTER
    2014 Volume E97.D Issue 7 Pages 1727-1730
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Now that the subject of green computing is receiving a lot of attention, the energy consumption of datacenters has emerged as a significant issue. Consolidation of Virtual Machines (VMs) reduces energy consumption, since VM live migration not only optimizes VM placement but also allows idle nodes to be switched to sleep mode. However, VM migration may negatively impact system performance and lead to violations of the SLA (Service Level Agreement) requirements between end users and cloud providers. In this study, we propose a VM consolidation mechanism that reduces the energy consumption of datacenters, eliminates unnecessary migrations, and minimizes SLA violations. Compared to previous studies, the proposed policy shows a reduction of 2% to 3% in energy consumption, 13% to 41% in VM migration frequency, and 15% to 50% in SLA violations.
    Download PDF (515K)
  • Wei WEI, Yang LIU, Yuhong ZHANG
    Type: LETTER
    2014 Volume E97.D Issue 7 Pages 1731-1734
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    This letter proposes an efficient Two-stage Resource scheduling algorithm for cloud-based Live Media Streaming systems (TRLMS). It transforms the cloud-based resource scheduling problem into a min-cost flow problem in a graph and solves it with an improved Successive Shortest Path (SSP) algorithm. Simulation results show that TRLMS can improve user demand satisfaction by 17.1% over a mean-based method, and its time complexity is much lower than that of the original SSP algorithm.
    Download PDF (286K)
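TRLMS transforms resource scheduling into a min-cost flow problem solved by an improved Successive Shortest Path algorithm. As background, here is a plain (unimproved) SSP implementation that repeatedly augments along a Bellman-Ford shortest path in the residual graph. The toy graph in the example is illustrative only and is unrelated to the paper's streaming formulation.

```python
def min_cost_flow(n, edges, s, t, maxf):
    """Successive Shortest Path min-cost flow.
    edges: list of (u, v, capacity, cost). Returns (flow, total_cost)."""
    graph = [[] for _ in range(n)]

    def add(u, v, cap, cost):
        # forward edge and zero-capacity reverse edge, cross-linked by index
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    for u, v, cap, cost in edges:
        add(u, v, cap, cost)

    INF = float("inf")
    flow, total_cost = 0, 0
    while flow < maxf:
        # Bellman-Ford: cheapest augmenting path in the residual graph
        dist, parent = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], parent[v] = dist[u] + cost, (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:
            break                       # no more augmenting paths
        push, v = maxf - flow, t        # bottleneck capacity along the path
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t                           # apply the augmentation
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost
```

A scheduling instance would put demands on one side, cloud servers on the other, and a source/sink around them; here a 4-node example suffices.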
Regular Section
  • Jung-Ho UM, Miyoung JANG, Jae-Woo CHANG
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 7 Pages 1735-1745
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    With the advances in wireless Internet and mobile positioning technology, location-based services (LBSs) have become popular. In LBSs, users must send their exact locations in order to use the services, but they may be exposed to several privacy threats. To solve this problem, query processing algorithms based on a cloaking method have been proposed. The algorithms use spatial cloaking methods to blur the user's exact location within a region satisfying the required privacy threshold (k). With the cloaked region, an LBS server can execute a spatial query processing algorithm while preserving user privacy. However, the existing algorithms cannot provide good query processing performance. To resolve this problem, in this paper we propose a k-NN query processing algorithm based on the network Voronoi diagram for spatial networks. Our algorithm reduces network expansion overhead and shares the information of the expanded road network. In order to demonstrate the efficiency of our algorithm, we have conducted extensive performance evaluations. The results show that our algorithm achieves better retrieval time than existing algorithms, such as PSNN and kRNN. This is because our k-NN query processing algorithm greatly reduces the network expansion cost of retrieving k POIs.
    Download PDF (1475K)
  • Masaya SHIMAKAWA, Shigeki HAGIHARA, Naoki YONEZAKI
    Type: PAPER
    Subject area: Software System
    2014 Volume E97.D Issue 7 Pages 1746-1755
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Many fatal accidents involving safety-critical reactive systems have occurred in unexpected situations that were not considered during the design and test phases of development. To prevent such accidents, reactive systems should be designed to respond appropriately to any request from the environment at any time. Verifying this property during the specification phase reduces development rework. This property of a specification is commonly known as realizability. Realizability checking for reactive system specifications involves complex and intricate analysis; the complexity of the realizability problem is 2EXPTIME-complete. To detect typical simple deficiencies in specifications efficiently, we introduce the notion of bounded strong satisfiability (a necessary condition for realizability) and present a method for checking this property. Bounded strong satisfiability is the property that, for all input patterns represented by loop structures of a given size k, there is a response that satisfies a given specification. We report a checking method based on a satisfiability solver, and show that the bounded strong satisfiability problem is co-NEXPTIME-complete. Moreover, we report experimental results showing that our method is more efficient than existing approaches.
    Download PDF (414K)
  • Tao ZHANG, Byungjeong LEE
    Type: PAPER
    Subject area: Software Engineering
    2014 Volume E97.D Issue 7 Pages 1756-1768
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Software products are increasingly complex, so it is becoming more difficult to find and correct bugs in large programs. Software developers rely on bug reports to fix bugs; thus, bug-tracking tools have been introduced to allow developers to upload, manage, and comment on bug reports to guide corrective software maintenance. However, the very high frequency of duplicate bug reports means that the triagers who help software developers eliminate bugs must allocate large amounts of time and effort to identifying and analyzing these bug reports. In addition, classifying bug reports can help triagers arrange bugs in categories for fixers with more experience in resolving historical bugs of the same category. Unfortunately, because a large number of bug reports are submitted every day, manually classifying them increases the triagers' workload. To resolve these problems, in this study we develop a novel technique for automatic duplicate detection and classification of bug reports, which reduces the time and effort triagers spend on bug fixing. Our technique uses a support vector machine to check whether a new bug report is a duplicate. A concept profile is also used to classify the bug reports into related categories in a taxonomic tree. Finally, we conduct experiments that demonstrate the feasibility of our proposed approach using bug reports extracted from the large-scale open source project Mozilla.
    Download PDF (1347K)
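The duplicate-detection half of the abstract above compares a new bug report against existing ones using textual features fed to an SVM. The sketch below is a deliberately simplified stand-in: it builds TF-IDF vectors and ranks duplicate candidates by cosine similarity, omitting the SVM and the concept-profile classification entirely; the sample reports are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for tokenized documents: a simplified stand-in for the
    textual features a duplicate-detection classifier would consume."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps shared terms nonzero
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

reports = ["crash when opening bookmarks sidebar".split(),
           "browser crash opening the bookmarks sidebar".split(),
           "font rendering broken on linux".split()]
vecs = tfidf_vectors(reports)
# A new report would be flagged as a candidate duplicate of its nearest
# neighbour above some (hypothetical) similarity threshold.
```

Reports 0 and 1 describe the same crash and score far higher than the unrelated report 2.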
  • Xin XIA, Xiaozhen ZHOU, David LO, Xiaoqiong ZHAO, Ye WANG
    Type: PAPER
    Subject area: Software Engineering
    2014 Volume E97.D Issue 7 Pages 1769-1780
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    A build system converts source code, libraries, and other data into executable programs by orchestrating the execution of compilers and other tools. The whole building process is managed by a software build system, such as Make, Ant, CMake, Maven, SCons, or QMake. Many studies have investigated bugs and fixes in several systems, but to the best of our knowledge, none has focused on bugs in build systems. One significant feature of software build systems is that they should work on various platforms, i.e., various operating systems (e.g., Windows, Linux), various development environments (e.g., Eclipse, Visual Studio), and various programming languages (e.g., C, C++, Java, C#), so the study of software build systems deserves special consideration. In this paper, we perform an empirical study on bugs in software build systems. We analyze four typical and widely used software build systems, Ant, Maven, CMake, and QMake, which can be used to build Java, C, and C++ systems. We investigate their bug databases and code repositories, randomly sample a set of bug reports and their fixes (800 bug reports in total: 199, 250, 200, and 151 for Ant, Maven, CMake, and QMake, respectively), and manually assign them to various categories. We find that 21.35% of the bugs belong to the external interface category, 18.23% to the logic category, and 12.86% to the configuration category. We also investigate the relationship between bug categories and bug severities, bug fixing time, and the number of bug comments.
    Download PDF (829K)
  • Xiaojuan LIAO, Miyuki KOSHIMURA, Hiroshi FUJITA, Ryuzo HASEGAWA
    Type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1781-1789
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Coalition Structure Generation (CSG) is a main research issue in the domain of coalition games. A majority of existing works assume that the value of a coalition is independent of the others in the coalition structure. Recently, there has been interest in a more realistic setting, where the value of a coalition is affected by the formation of other coalitions. This effect is known as externality. The focus of this paper is to make use of Maximum Satisfiability (MaxSAT) to solve the CSG problem where externalities may exist. In order to reduce the exponentially growing number of possible solutions in the CSG problem, we follow previous works in representing the CSG problem as sets of rules in MC-nets (without externalities) and embedded MC-nets (with externalities). Specifically, inspired by previous MC-net-based algorithms that exploit the constraints among rule relations to solve the CSG problem, we encode such constraints into weighted partial MaxSAT (WPM) formulas. Experimental results demonstrate that an off-the-shelf MaxSAT solver achieves significant improvements over the previous algorithm on the same set of problem instances.
    Download PDF (395K)
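Weighted partial MaxSAT, the target formalism of the encoding above, can be stated compactly: every hard clause must be satisfied, and the total weight of satisfied soft clauses is maximized. The brute-force solver below is only meant to make that definition concrete on a toy instance; the clauses shown are invented and are not the paper's MC-net encoding, which requires an off-the-shelf WPM solver at realistic sizes.

```python
from itertools import product

def solve_wpm(n_vars, hard, soft):
    """Brute-force weighted partial MaxSAT.
    Clauses are lists of non-zero ints: literal v means variable v is True,
    -v means it is False. Hard clauses must all hold; the total weight of
    satisfied soft clauses (weight, clause) is maximized."""
    def sat(clause, assign):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    best, best_assign = None, None
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(sat(c, assign) for c in hard):
            score = sum(w for w, c in soft if sat(c, assign))
            if best is None or score > best:
                best, best_assign = score, assign
    return best, best_assign

# Toy instance: x1 and x2 may not both hold (hard); rules reward x1, x2, x3.
hard = [[-1, -2]]
soft = [(5, [1]), (4, [2]), (3, [3])]
best, assign = solve_wpm(3, hard, soft)
```

The optimum takes the weight-5 and weight-3 rules and sacrifices the weight-4 rule that conflicts with them.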
  • Fangming ZHAO, Takashi NISHIDE, Kouichi SAKURAI
    Type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1790-1803
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    We consider the problems of access control and encrypted keyword search for cryptographic cloud storage in such a way that they can be implemented in a multi-user setting. Our fine-grained access-control-aware multi-user secure keyword search approach harmonizes these two security notions, access control and encrypted keyword search, interdependently. By shrinking the cloud server's search space to the user's decryptable subset, the proposed scheme decreases information leakage and, as the results of our comparative performance simulation show, remains efficient.
    Download PDF (465K)
  • Byoung-Dai LEE, Kwang-Ho LIM, Yoon-Ho CHOI, Namgi KIM
    Type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1804-1811
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In recent years, computation offloading, through which applications on a mobile device can offload their computations onto more resource-rich clouds, has emerged as a promising technique to reduce battery consumption as well as augment the devices' limited computation and memory capabilities. In order for computation offloading to be energy-efficient, an accurate estimate of battery consumption is required to decide between local processing and computation offloading. In this paper, we propose a novel technique for estimating battery consumption without requiring detailed information about the mobile application's internal structure or its execution behavior. In our approach, the relationship is derived between variables that affect battery consumption (i.e., the input to the application, the transmitted data, and resource status) and the actual consumed energy from the application's past run history. We evaluated the performance of the proposed technique using two different types of mobile applications over different wireless network environments such as 3G, Wi-Fi, and LTE. The experimental results show that our technique can provide tolerable estimation accuracy and thus make correct decisions between local processing and computation offloading.
    Download PDF (760K)
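The estimator described above learns the relation between offloading variables and measured energy from past run history. A minimal sketch of that idea, assuming a single explanatory variable (bytes transmitted) and synthetic history data, is an ordinary least-squares line fit followed by a local-versus-offload comparison:

```python
def fit_energy_model(history):
    """Least-squares fit energy ~ a*x + b from past runs, where x is the
    bytes sent when offloading. A stand-in for the paper's history-based
    estimator; the history data here is synthetic."""
    n = len(history)
    sx = sum(x for x, _ in history)
    sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history)
    sxy = sum(x * y for x, y in history)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def should_offload(local_energy, a, b, bytes_to_send):
    # Offload only when the predicted transmission energy beats local processing.
    return a * bytes_to_send + b < local_energy

history = [(1000, 2.1), (2000, 4.0), (3000, 6.1), (4000, 7.9)]  # (bytes, joules)
a, b = fit_energy_model(history)
```

The real technique also conditions on resource status and network type (3G, Wi-Fi, LTE); a single feature keeps the sketch readable.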
  • Xiaojuan LIAO, Miyuki KOSHIMURA, Hiroshi FUJITA, Ryuzo HASEGAWA
    Type: PAPER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1812-1821
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Coalition Structure Generation (CSG) means partitioning agents into exhaustive and disjoint coalitions so that the sum of the values of all the coalitions is maximized. Solving this problem can be facilitated by compact representation schemes, such as the marginal contribution network (MC-net). In an MC-net, the CSG problem is represented by a set of rules where each rule is associated with a real-valued weight, and the goal is to maximize the sum of the weights of the rules under some constraints. This naturally leads to a combinatorial optimization problem that can be solved with weighted partial MaxSAT (WPM). In general, WPM deals only with positive weights, while the weights involved in a CSG problem can be either positive or negative. With this in mind, in this paper we propose an extension of WPM to handle negative weights and take advantage of the extended WPM to solve the MC-net-based CSG problem. Specifically, we encode the relations between each pair of agents and reformulate the MC-net as a set of Boolean formulas. Thus, the CSG problem is encoded as an optimization problem for WPM solvers. Furthermore, we apply this agent-relation-based WPM with minor revisions to solve the extended CSG problem where the value of a coalition is affected by the formation of other coalitions, an effect known as externality. Experiments demonstrate that, compared to the previous encoding, our proposed method speeds up the solution of the CSG problem significantly, as it generates fewer Boolean variables and clauses for the WPM solver to examine.
    Download PDF (533K)
  • Tuan Duong NGUYEN, Marthinus Christoffel DU PLESSIS, Takafumi KANAMORI ...
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2014 Volume E97.D Issue 7 Pages 1822-1829
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    We address the problem of estimating the difference between two probability densities. A naive approach is a two-step procedure that first estimates the two densities separately and then computes their difference. However, such a two-step procedure does not necessarily work well because the first step is performed without regard to the second, and thus a small error in the first stage can cause a large error in the second stage. Recently, a single-shot method called the least-squares density-difference (LSDD) estimator was proposed. LSDD directly estimates the density difference without separately estimating the two densities, and it was demonstrated to outperform the two-step approach. In this paper, we propose a variation of LSDD called the constrained least-squares density-difference (CLSDD) estimator, and theoretically prove that CLSDD improves the accuracy of density-difference estimation for correctly specified parametric models. The usefulness of the proposed method is also demonstrated experimentally.
    Download PDF (569K)
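The LSDD estimator that CLSDD builds on has a closed-form solution: model the density difference as a Gaussian-kernel expansion and solve a regularized least-squares problem, in which the Gram-type matrix of integrated basis products is analytic for Gaussian kernels. The sketch below implements plain LSDD, not the constrained CLSDD variant proposed in the paper; the kernel width, regularization strength, and center placement are arbitrary choices for illustration.

```python
import numpy as np

def lsdd(x1, x2, centers, sigma=1.0, lam=0.1):
    """Least-squares density-difference sketch: model f(x) = p1(x) - p2(x)
    as sum_i theta_i * exp(-||x - c_i||^2 / (2 sigma^2)) and solve the
    regularized least-squares problem in closed form."""
    d = centers.shape[1]
    # H[i, j] = integral over x of the product of basis i and basis j (analytic)
    sq = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-sq / (4 * sigma ** 2))

    def phi(x):  # basis functions evaluated at sample points x
        return np.exp(-((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                      / (2 * sigma ** 2))

    # h_i = E_p1[phi_i] - E_p2[phi_i], estimated by sample means
    h = phi(x1).mean(axis=0) - phi(x2).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: phi(x) @ theta  # estimated p1 - p2

rng = np.random.default_rng(0)
x1 = rng.normal(-2.0, 1.0, size=(500, 1))  # samples from p1
x2 = rng.normal(2.0, 1.0, size=(500, 1))   # samples from p2
f = lsdd(x1, x2, centers=np.linspace(-4.0, 4.0, 20)[:, None])
```

Around the mode of p1 the estimate is positive, and around the mode of p2 it is negative, as the true difference demands.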
  • Kyusong LEE, Soo-ok KWEON, Sungjin LEE, Hyungjong NOH, Gary Geunbae LE ...
    Type: PAPER
    Subject area: Educational Technology
    2014 Volume E97.D Issue 7 Pages 1830-1841
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    This study examines a dialog-based language learning game (DB-LLG) realized in a 3D environment built with game content. We designed the DB-LLG so that users can conduct interactive conversations with game characters in various immersive environments. From a pilot test, we identified several technologies as essential to the construction of the DB-LLG, such as dialog management, hint generation, and grammar error detection and feedback. We describe the technical details of our system, POSTECH immersive English study (Pomy). We evaluated the performance of each technology using a simulator and through field tests with users.
    Download PDF (1408K)
  • Akira TAMAMORI, Yoshihiko NANKAKU, Keiichi TOKUDA
    Type: PAPER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 7 Pages 1842-1854
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In this paper, a novel statistical model based on 2-D HMMs for image recognition is proposed. Recently, separable lattice 2-D HMMs (SL2D-HMMs) were proposed to model invariance to size and location deformation. However, their modeling accuracy is still insufficient because of the following two assumptions, which are inherited from 1-D HMMs: i) stationary statistics within each state and ii) the conditional independence assumption on state output probabilities. To overcome these shortcomings in 1-D HMMs, trajectory HMMs were proposed and successfully applied to speech recognition and speech synthesis. This paper derives 2-D trajectory HMMs by reformulating the likelihood of SL2D-HMMs through the imposition of explicit relationships between static and dynamic features. The proposed model can efficiently capture dependencies between adjacent observations without increasing the number of model parameters. The effectiveness of the proposed model was evaluated in face recognition experiments on the XM2VTS database.
    Download PDF (1675K)
  • Raissa RELATOR, Yoshihiro HIROHASHI, Eisuke ITO, Tsuyoshi KATO
    Type: PAPER
    Subject area: Pattern Recognition
    2014 Volume E97.D Issue 7 Pages 1855-1863
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Classification tasks in computer vision and brain-computer interface research have enabled applications such as biometrics and cognitive training. However, as in any other discipline, determining a suitable representation of the data has been challenging, and recent approaches have deviated from the familiar form of one vector per data sample. This paper considers a kernel between vector sets, the mean polynomial kernel, motivated by recent studies in which data are approximated by linear subspaces, in particular, methods formulated on Grassmann manifolds. This kernel takes a more general approach, given that it can also support input data modeled as a vector sequence, without requiring it to be a linear subspace. We discuss how the kernel can be associated with the Projection kernel, a Grassmann kernel. Experimental results using face image sequences and physiological signal data show that the mean polynomial kernel surpasses existing subspace-based methods on Grassmann manifolds in terms of predictive performance and efficiency.
    Download PDF (1100K)
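The mean polynomial kernel between vector sets can be sketched directly: average a polynomial kernel over all cross pairs of vectors from the two sets. The homogeneous form (x·y)^d used below is one plausible variant; the paper's exact definition (for example, an additive constant inside the power) may differ.

```python
import numpy as np

def mean_polynomial_kernel(X, Y, degree=2):
    """Mean polynomial kernel between vector sets X (m x d) and Y (n x d):
    the average of (x_i . y_j)^degree over all cross pairs. Supports any
    vector sequence, not only orthonormal subspace bases."""
    return float(np.mean((X @ Y.T) ** degree))

# Two short "image sequences" represented as row-vector sets
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0]])
```

The kernel is symmetric in its arguments, and for A and B above it averages the pairwise values (1·2)^2 = 4 and 0^2 = 0.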
  • David B.L. BONG, Bee Ee KHOO
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 7 Pages 1864-1871
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Blur distortion is a common artifact in image communication and affects the perceived sharpness of a digital image. In this paper, we capitalize on the mathematics of Gaussian convolution and propose a strategy to minimally reblur test images. From the reblur algorithm, synthetic reblurred images are created. We propose a new blind blur metric that makes use of the reblurred images to produce blur scores. Compared to other no-reference blur assessments, the proposed method has the advantages of fast computation and training-free operation. Experimental results also show that the proposed method produces blur scores that are highly correlated with human perception of blurriness.
    Download PDF (1500K)
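The reblur principle behind the metric above is that a sharp image loses a large fraction of its gradient energy when blurred again, while an already-blurry image barely changes. The 1-D sketch below illustrates only that principle, using a moving average in place of the paper's minimal Gaussian reblur; the score definition is an assumption for illustration.

```python
def smooth(signal, radius=2):
    """Simple moving-average blur (stand-in for a minimal Gaussian reblur)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def grad_energy(signal):
    """Sum of squared first differences: a crude sharpness measure."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

def blur_score(signal):
    """No-reference blur score by reblurring: the ratio of gradient energy
    after reblurring to before. Near 1 for already-blurry inputs (reblurring
    changes little), small for sharp inputs (reblurring destroys edges)."""
    g = grad_energy(signal)
    return grad_energy(smooth(signal)) / g if g else 1.0

sharp = [0.0] * 10 + [1.0] * 10       # a crisp step edge
blurry = smooth(sharp, radius=3)      # the same edge, pre-blurred
```

The pre-blurred edge scores closer to 1 than the crisp one, matching the intuition the metric exploits.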
  • Du Sic YOO, Min Kyu PARK, Moon Gi KANG
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 7 Pages 1872-1884
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Most images obtained with imaging sensors contain Bayer patterns and suffer from blurring caused by the lens. In order to convert a blurred Bayer-patterned image into a viewable image, demosaicing and deblurring are needed. These concepts have been major research areas in digital image processing for several decades. Despite their importance, their performance and efficiency are not satisfactory when considered independently. In this paper, we propose a joint deblurring and demosaicing method in which edge direction and edge strength are estimated in the Bayer domain and then edge adaptive deblurring and edge-oriented interpolation are performed simultaneously from the estimated edge information. Experimental results show that the proposed method produces better image quality than conventional algorithms in both objective and subjective terms.
    Download PDF (3845K)
  • Kazuya IWAI, Sho TAKAHASHI, Takahiro OGAWA, Miki HASEYAMA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 7 Pages 1885-1892
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In this paper, an accurate player tracking method in far-view soccer videos based on a composite energy function is presented. In far-view soccer videos, player tracking methods that perform processing based only on visual features cannot accurately track players since each player region becomes small, and video coding causes color bleeding between player regions and the soccer field. In order to solve this problem, the proposed method performs player tracking on the basis of the following three elements. First, we utilize visual features based on uniform colors and player shapes. Second, since soccer players play in such a way as to maintain a formation, which is a positional pattern of players, we use this characteristic for player tracking. Third, since the movement direction of each player tends to change smoothly in successive frames of soccer videos, we also focus on this characteristic. Then we adopt three energies: a potential energy based on visual features, an elastic energy based on formations and a movement direction-based energy. Finally, we define a composite energy function that consists of the above three energies and track players by minimizing this energy function. Consequently, the proposed method achieves accurate player tracking in far-view soccer videos.
    Download PDF (1085K)
  • Xiao ZHAO, Sihui LI, Yun YANG, Yuyan CHAO, Lifeng HE
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 7 Pages 1893-1896
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    This paper proposes a new algorithm for substring searching. Our algorithm is an improvement on the famous BM algorithm. When a mismatch happens while searching for a substring (pattern), the BM algorithm calculates shift distances with two separate strategies and selects the larger result. In comparison, our algorithm applies each of the two strategies only to the cases for which it is best suited, without a selection operation. Experimental results demonstrate that our algorithm is more efficient than the BM algorithm and the Quick Search algorithm, especially for binary strings and DNA strings.
    Download PDF (288K)
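For background on the BM shift rules the letter improves, here is the Boyer-Moore-Horspool simplification, which keeps only the bad-character rule. This is not the authors' algorithm, which instead routes each mismatch case to whichever of the two full BM strategies suits it, without a selection step.

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: compare the window right-to-left, then shift by
    the bad-character rule keyed on the character under the window's last
    position. Returns all match positions."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift table: distance from each character's last occurrence (excluding
    # the final pattern position) to the end of the pattern.
    shift = {}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)  # default: skip the whole pattern
    return hits
```

On DNA-like texts the small alphabet makes the shifts short, which is exactly the regime the letter's improvement targets.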
  • Ki-Ro KIM, Seung-Jun YU, Hyoung-Kyu SONG
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 7 Pages 1897-1900
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Transmit diversity generally requires more than one antenna at the transmitter. However, many wireless devices are limited by size or hardware complexity. Cooperative diversity techniques were proposed to overcome this limitation, and cooperative communication has been widely investigated as a way to improve the performance of wireless communication. Unfortunately, most existing cooperative schemes share the common fault of a decreased transmission rate, because the destination must receive decodable compositions of symbols from the source and the relay. In this letter, we propose a new cooperative model that uses spatial phase coding (SPC) for orthogonal frequency division multiple access (OFDMA). This technique is free from such rate loss and allows seamless cooperative communication, while its diversity gain matches that of the conventional multiple-antenna technique. The proposed technique is evaluated in terms of bit error rate (BER), and simulation results show that the proposed cooperative scheme approaches the performance of a conventional multiple-antenna system when the link between users is guaranteed.
    Download PDF (546K)
  • Xiangyu ZHANG, Yangdong DENG, Shuai MU
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2014 Volume E97.D Issue 7 Pages 1901-1904
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    General purpose computing on GPU (GPGPU) has become a popular computing model for high-performance, data-intensive applications. Accordingly, there is a strong need to develop highly efficient data structures to ease the development of GPGPU applications. In this work, we propose an efficient concurrent queue data structure for GPU computing. The GPU-based, provably correct, lock-free FIFO queue allows a massive number of concurrent producers and consumers. Warp-centric enqueue and dequeue procedures are introduced to better match the underlying Single-Instruction, Multiple-Thread execution model of modern GPUs. It outperforms the best previous GPU queues by up to 40-fold. The correctness of the proposed queue operations is formally validated against linearizability criteria.
    Download PDF (852K)
  • Woo Hyun AHN, Joon-Woo CHOI, Jaewon OH, Seung-Ho LIM, Kyungbaek KIM
    Type: LETTER
    Subject area: Software System
    2014 Volume E97.D Issue 7 Pages 1905-1909
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Virtual memory systems page out a cluster of contiguous modified pages in virtual memory to a swap disk in one disk I/O, but they cannot find large clusters in applications that mainly modify non-contiguous pages. Our proposal stores multiple small clusters with one disk I/O. This decreases the number of disk writes required to page out small clusters, thus improving page-out performance.
    Download PDF (513K)
  • Byungnam LIM, Yon Dohn CHUNG
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2014 Volume E97.D Issue 7 Pages 1910-1913
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Graph matching is the problem of finding an independent edge set in a graph. It can be used for various purposes such as finding a cover of a graph, chemical structure computations, multi-level graph partitioning, and so on. When a graph is too large to be handled by a single machine, multiple machines must be used. In this paper, we use Pregel, a cloud graph processing architecture that can process massive-scale graph data in a scalable and fault-tolerant way. We propose a parallel maximal matching algorithm described in Pregel's vertex-centric BSP model. We test our algorithm on an 8-node cluster, and the results show that it computes a high-quality matching for a large graph in a short time. Our algorithm also scales linearly with the number of machines.
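A minimal single-machine simulation of a vertex-centric handshake matching, a common Pregel-style formulation of maximal matching (the paper's exact supersteps may differ):

```python
def maximal_matching(adj):
    """Superstep-style greedy handshake: every unmatched vertex proposes to
    its smallest-id unmatched neighbour; mutually proposing pairs become
    matched edges. Repeats until no new pair forms, which yields a maximal
    matching (the smallest unmatched vertex always matches each round)."""
    unmatched = set(adj)
    matching = set()
    while True:
        # superstep 1: each unmatched vertex sends one proposal
        proposal = {}
        for u in unmatched:
            cands = [v for v in adj[u] if v in unmatched]
            if cands:
                proposal[u] = min(cands)
        # superstep 2: mutual proposals become matched edges
        new_pairs = {tuple(sorted((u, v)))
                     for u, v in proposal.items() if proposal.get(v) == u}
        if not new_pairs:
            return matching
        matching |= new_pairs
        for u, v in new_pairs:
            unmatched.discard(u)
            unmatched.discard(v)
```

In Pregel each superstep's proposals would travel as messages between vertices; here they are collected in a dictionary for clarity.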
    Download PDF (380K)
  • Sen WANG, Jun BI, Jianping WU
    Type: LETTER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1914-1917
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    Caching is widely considered an efficient way to reduce access latency and network bandwidth consumption. En-route caching, where caches are associated with routing nodes in the network, was proposed in the context of Web caching to fully exploit the potential of caching. To make sensible replacement and placement decisions for en-route caching, traditional caching schemes either employ computation-intensive algorithms such as dynamic programming or suffer from inferior performance in terms of average access latency. In this article, we propose a new caching scheme with cost-based replacement and random placement, named CRRP. The cost-based replacement of CRRP introduces probing requests to perceive cost changes in a timely manner, and the random placement is independent of the current caching state, with O(1) computational complexity for each placement decision. Through extensive simulations, we show that CRRP outperforms a wide range of caching schemes and comes very close to the traditional dynamic-programming-based algorithm in terms of average access delay.
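A toy sketch of the two CRRP ingredients (the cost model and the admission probability are assumptions for illustration; the probing requests that track cost changes are omitted):

```python
import random

class CRRPCache:
    """Cost-based replacement + random placement: evict the cached object
    whose estimated miss cost is lowest, and admit a new object only with a
    fixed probability, so the placement decision is O(1) and independent of
    the current cache state. The per-object cost is caller-supplied here."""
    def __init__(self, capacity, admit_prob=0.5, rng=None):
        self.capacity = capacity
        self.admit_prob = admit_prob
        self.rng = rng or random.Random()
        self.store = {}                  # key -> (value, miss cost)

    def get(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

    def put(self, key, value, cost):
        if key not in self.store and self.rng.random() >= self.admit_prob:
            return False                 # random placement: skip this object
        if key not in self.store and len(self.store) >= self.capacity:
            # cost-based replacement: evict the cheapest-to-refetch object
            victim = min(self.store, key=lambda k: self.store[k][1])
            del self.store[victim]
        self.store[key] = (value, cost)
        return True
```

Because admission is a single coin flip, placement never needs to inspect the rest of the cache, which is where the O(1) claim comes from.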
    Download PDF (378K)
  • Kyungbaek KIM
    Type: LETTER
    Subject area: Information Network
    2014 Volume E97.D Issue 7 Pages 1918-1922
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In the design of distributed systems, defending against Sybil attacks is an important issue. Recently, OSN (Online Social Network)-based Sybil defending approaches have been proposed; they exploit the fast mixing property of a social network graph with random walks of sufficient length and provide Sybil-resistant trust values. However, because of the probabilistic nature of these approaches, some honest (non-Sybil) identities obtain low trust values and are mistakenly considered Sybil identities. A simple solution for boosting the trust value of honest identities is to use longer random walks, but this direct boosting also increases the trust values of Sybil identities significantly. In this paper, a two-step boosting method is proposed to increase the Sybil-resistant trust values of honest identities while preventing Sybil identities from obtaining high trust values. The proposed method is composed of two steps: initializing the trust value with reasonably long random walks, and then boosting the trust value by using much longer random walks than in the first step. The method is evaluated on sampled social network graphs of Facebook, and it is observed that it substantially reduces the portion of honest identities mistakenly considered Sybil (from 30% to 1.3%) while keeping the trust values of Sybil identities low.
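The trust primitive that both steps share, the landing probability of a fixed-length random walk started from a trusted seed, can be sketched as follows (computed here by exact power iteration rather than sampled walks; the letter's thresholds and exact boosting rule are not reproduced):

```python
def walk_trust(adj, seed, length):
    """Probability that a random walk of `length` steps from the trusted
    seed lands on each vertex. Step 1 of the two-step method would evaluate
    this with a moderate `length`, step 2 with a much longer one."""
    dist = {v: 0.0 for v in adj}
    dist[seed] = 1.0
    for _ in range(length):
        nxt = {v: 0.0 for v in adj}
        for v, p in dist.items():
            if p and adj[v]:
                share = p / len(adj[v])   # split mass over neighbours
                for w in adj[v]:
                    nxt[w] += share
        dist = nxt
    return dist
```

Longer walks let trust mass reach vertices far from the seed, which is exactly why naive lengthening also leaks trust across the sparse cut into the Sybil region.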
    Download PDF (364K)
  • Chieko KATO, Kensei TSUCHIDA, Futoshi SUGIMOTO, Yasunori SHIONO, Takeh ...
    Type: LETTER
    Subject area: Rehabilitation Engineering and Assistive Technology
    2014 Volume E97.D Issue 7 Pages 1923-1926
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In recent years, many Japanese citizens have been living abroad in Asia, including in developing countries, yet few studies have examined their mental health. The purpose of this study was to identify the kinds of stress people experience when living abroad. Japanese workers living abroad, including some who are married to foreign nationals, and their families were asked seven questions in a survey, and they provided answers consistent with the intent and purpose of this study. A morphological analysis of the responses was carried out, and the results were classified into categories arranged by word class. The tendencies of the responses were then categorized according to the KJ method. Responses to the question, “Do you have any trouble because of cultural differences?”, were classified according to common features, and a fuzzy cluster analysis was carried out on this basis, yielding meaningful clusters. Differences in perceived stress and family culture were best described by the fuzzy cluster analysis.
    Download PDF (299K)
  • Peng YE, Fang LIU, Zhiyong ZHAO
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2014 Volume E97.D Issue 7 Pages 1927-1929
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    The Gaussian mixture model (GMM) has recently been applied to image registration owing to its robustness and efficiency. However, previous GMM methods treat all feature points identically. By incorporating local class features, this letter proposes a multiple Gaussian mixture models (M-GMM) method for image registration. The proposed method achieves higher accuracy with less registration time. Experiments on real image pairs further confirm the superiority of the proposed method.
    Download PDF (1910K)
  • Lei LEI, Dae-Hwan KIM, Won-Jae PARK, Sung-Jea KO
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 7 Pages 1930-1932
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a simple and efficient face representation feature, the eigenfaces of the Local Binary Pattern (LBP) space, referred to as LBP eigenfaces, for robust face recognition. In the proposed method, LBP eigenfaces are generated by first mapping the original image space to the LBP space and then projecting the LBP space to the LBP eigenface subspace by Principal Component Analysis (PCA). LBP eigenfaces therefore capture both the local and the global structure of face images. In the experiments, the proposed LBP eigenfaces are integrated into two types of classification methods, Nearest Neighbor (NN) and Collaborative Representation-based Classification (CRC). Experimental results indicate that classification with LBP eigenfaces outperforms classification with the original eigenfaces or the LBP histogram.
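The first stage, mapping an image into LBP space, can be sketched as follows (the subsequent PCA projection to LBP eigenfaces is omitted; `lbp_map` is a hypothetical name, and the 8-neighbour bit ordering is one common convention):

```python
def lbp_map(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel becomes a
    byte whose bits record which neighbours are >= the centre pixel. Border
    pixels are left as 0 in this sketch."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y][x] = code
    return out
```

PCA would then be run on these LBP maps (flattened to vectors) instead of raw pixel intensities, giving the local-plus-global property the letter describes.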
    Download PDF (569K)
  • Xing ZHANG, Keli HU, Lei WANG, Xiaolin ZHANG, Yingguan WANG
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 7 Pages 1933-1936
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In this study, we address the problem of salient region detection. Recently, contrast-based saliency detection approaches have been shown to give promising results. However, different individual features exhibit different performance. In this paper, we show that combining color uniqueness and color spatial distribution is an effective way to detect saliency. A Color Adaptive Thresholding Watershed Fusion Segmentation (CAT-WFS) method is first given to retain boundary information and discard unnecessary detail. Based on the segmentation, color uniqueness and color spatial distribution are defined separately: color uniqueness denotes the color rareness of the salient object, while color spatial distribution represents the color attribute of the background. Aiming at highlighting the salient object and downplaying the background, we combine the two cues to generate the final saliency map. Experimental results demonstrate that the proposed algorithm outperforms existing salient object detection methods.
    Download PDF (2079K)
  • Song GAO, Chunheng WANG, Baihua XIAO, Cunzhao SHI, Wen ZHOU, Zhong ZHA ...
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 7 Pages 1937-1941
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    In this paper, we propose a representation method based on local spatial strokes for scene character recognition. High-level semantic information, namely the co-occurrence of several strokes, is incorporated by learning a sparse dictionary, which further suppresses the noise introduced by single-stroke detectors. The encouraging results outperform those of state-of-the-art algorithms.
    Download PDF (1701K)
  • Song GAO, Chunheng WANG, Baihua XIAO, Cunzhao SHI, Wen ZHOU, Zhong ZHA ...
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2014 Volume E97.D Issue 7 Pages 1942-1946
    Published: 2014
    Released: July 01, 2014
    JOURNALS FREE ACCESS
    This paper models spatial layout beyond the traditional spatial pyramid (SP) in the coding/pooling scheme for scene text character recognition. Specifically, we propose a novel method to build a dictionary, called the spatiality embedded dictionary (SED), in which each codeword represents a particular character stroke and is associated with a local response region. The promising results outperform those of other state-of-the-art algorithms.
    Download PDF (680K)
Errata