IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E98.D , Issue 12
39 articles from the selected issue
Special Section on Parallel and Distributed Computing and Networking
  • Yasuhiko NAKASHIMA
    2015 Volume E98.D Issue 12 Pages 2047
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Download PDF (64K)
  • Ryo HAMAMOTO, Tutomu MURASE, Chisa TAKANO, Hiroyasu OBATA, Kenji ISHID ...
    Type: PAPER
    Subject area: Wireless System
    2015 Volume E98.D Issue 12 Pages 2048-2059
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In recent years, wireless local area networks (wireless LANs) based on the IEEE 802.11 standard have spread rapidly, and connecting to the Internet via wireless LANs has become commonplace. In addition, public wireless LAN service areas, such as train stations, hotels, and airports, are increasing, and tethering technology has enabled smartphones to act as access points (APs). Consequently, multiple APs can exist in the same area, and users must select one of them. Various studies have proposed and evaluated AP selection methods; however, existing methods do not consider AP mobility. In this paper, we propose an AP selection method based on cooperation among APs and user movement. Moreover, we demonstrate that the proposed method dramatically improves throughput compared to an existing method.
    Download PDF (2150K)
  • Hiroyasu OBATA, Ryo HAMAMOTO, Chisa TAKANO, Kenji ISHIDA
    Type: PAPER
    Subject area: Wireless System
    2015 Volume E98.D Issue 12 Pages 2060-2070
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Wireless local area networks (LANs) based on the IEEE 802.11 standard usually use carrier sense multiple access with collision avoidance (CSMA/CA) for media access control. However, in CSMA/CA, as the number of wireless terminals increases, the back-off times derived from the initial contention window (CW) tend to coincide among terminals. Consequently, data frame collisions occur frequently, which can degrade the total throughput of the transport layer protocols. In this study, to improve the total throughput, we propose a new media access control method, SP-MAC, which is based on the synchronization phenomena of coupled oscillators. Moreover, this study shows that SP-MAC drastically decreases the data frame collision probability and improves the total throughput compared with the original CSMA/CA method.
    Download PDF (1556K)
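The full SP-MAC design is in the paper; the synchronization phenomenon it builds on can be illustrated with a toy two-oscillator simulation in which repulsive coupling drives the phases (and hence transmission timings) apart. All parameter values below are illustrative assumptions, not SP-MAC's:

```python
import math

def desynchronize(theta1, theta2, coupling=-0.5, dt=0.01, steps=2000):
    """Toy coupled-oscillator update: with repulsive (negative) coupling,
    the two phases drift toward anti-phase, i.e., a phase gap of pi.
    This merely illustrates the phenomenon SP-MAC builds on; it is not
    the SP-MAC protocol itself."""
    omega = 1.0  # common natural frequency
    for _ in range(steps):
        d1 = omega + coupling * math.sin(theta2 - theta1)
        d2 = omega + coupling * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    # return the phase gap folded into [0, 2*pi)
    return (theta2 - theta1) % (2 * math.pi)

gap = desynchronize(0.0, 0.5)  # two nodes that start nearly in phase
```

Two terminals whose transmission phases settle a half-cycle apart no longer pick colliding back-off slots, which is the intuition behind the reduced collision probability.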
  • Gang DENG, Hong WANG, Zhenghu GONG, Lin CHEN, Xu ZHOU
    Type: PAPER
    Subject area: Network
    2015 Volume E98.D Issue 12 Pages 2071-2081
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Address configuration is a key problem in data center networks. The core issue of automatic address configuration is assigning logical addresses to the physical network according to a blueprint, namely logical-to-device ID mapping, which can be formulated as a graph isomorphism problem and is computationally hard. In recent years, several approaches have been proposed for this problem, such as DAC and ETAC. DAC adopts a sub-graph isomorphism algorithm. By leveraging the structural characteristics of data center networks, DAC can finish the mapping process quickly when there is no malfunction. However, in the presence of malfunctions, DAC needs human effort to correct them and is thus time-consuming. ETAC improves on DAC and can finish mapping even in the presence of malfunctions. However, ETAC also suffers from robustness and efficiency problems. In this paper, we present GA-MAP, a data center network address mapping algorithm based on the genetic algorithm. By intelligently leveraging the structural characteristics of data center networks and the global search capability of the genetic algorithm, GA-MAP can solve the address mapping problem quickly. Moreover, GA-MAP can finish address mapping even when the physical network contains malfunctions, making it more robust than ETAC. We evaluate GA-MAP via extensive simulation in several aspects, including computation time, error tolerance, convergence characteristics, and the influence of population size. The simulation results demonstrate that GA-MAP is effective for data center address mapping.
    Download PDF (5732K)
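GA-MAP itself is not reproduced here; the following sketch only shows the generic shape of a genetic algorithm over permutation encodings for a toy logical-to-device mapping, with fitness counting how many blueprint edges land on physical edges. Population size, the swap-mutation scheme, and the fitness function are illustrative assumptions:

```python
import random

def edges_preserved(mapping, blueprint_edges, physical_edges):
    """Fitness: number of blueprint edges whose endpoints are mapped
    onto an edge of the physical network."""
    phys = {frozenset(e) for e in physical_edges}
    return sum(frozenset((mapping[a], mapping[b])) in phys
               for a, b in blueprint_edges)

def ga_map(n, blueprint_edges, physical_edges,
           pop_size=40, generations=200, seed=0):
    """Toy GA: permutation individuals, elitist survival, swap mutation."""
    rng = random.Random(seed)

    def fitness(m):
        return edges_preserved(m, blueprint_edges, physical_edges)

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)    # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

ring = [(i, (i + 1) % 6) for i in range(6)]   # a 6-node ring blueprint
best = ga_map(6, ring, ring)
```

A real data-center topology would replace the toy ring, and GA-MAP additionally exploits structural characteristics of the network to prune the search.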
  • Junjun ZHENG, Hiroyuki OKAMURA, Tadashi DOHI
    Type: PAPER
    Subject area: Network
    2015 Volume E98.D Issue 12 Pages 2082-2090
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Survivability is the capability of a system to provide its services in a timely manner even after intrusion and compromise occur. In this paper, we focus on the quantitative survivability analysis of a virtual machine (VM) based intrusion tolerant system in the presence of Byzantine failures caused by malicious attacks. An intrusion tolerant system has the ability to continuously provide correct services even if the system has been intruded. This paper introduces a scheme for an intrusion tolerant system with virtualization, and derives the success probability for one request via a Markov chain under an environment where VMs have been intruded through a security hole by malicious attacks. Finally, in numerical experiments, we evaluate the performance of the VM-based intrusion tolerant system from the viewpoint of survivability.
    Download PDF (605K)
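The paper's concrete Markov model is not reproduced here, but the underlying computation — the probability of being absorbed in a "success" state of an absorbing Markov chain — can be sketched as follows. The toy states and transition values are assumptions, not the paper's model:

```python
def absorption_probability(Q, r, iters=500):
    """Probability of eventually reaching the absorbing 'success' state
    from each transient state.  Q[i][j] is the transition probability
    between transient states i and j; r[i] is the one-step probability
    of moving from transient state i directly to success.  Solves
    x = Q x + r by fixed-point iteration."""
    n = len(Q)
    x = [0.0] * n
    for _ in range(iters):
        x = [r[i] + sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Toy 2-transient-state model: state 0 = serving normally,
# state 1 = serving while a VM is intruded.
Q = [[0.0, 0.3],   # normal -> intruded with prob 0.3
     [0.2, 0.0]]   # intruded -> recovered (normal) with prob 0.2
r = [0.6, 0.5]     # direct success probabilities per state
x = absorption_probability(Q, r)
```

The request's success probability is then the absorption probability from the initial state, here `x[0]`.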
  • Xia YIN, Jiangyuan YAO, Zhiliang WANG, Xingang SHI, Jun BI, Jianping W ...
    Type: PAPER
    Subject area: Network
    2015 Volume E98.D Issue 12 Pages 2091-2104
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Research on model-based testing has mainly focused on single-component models such as the FSM and EFSM. For network protocols in which multiple components communicate with messages, the CFSM is a widely accepted solution. However, in some network protocols, parallel and data-sharing components may exist in the same network entity, and it is infeasible to precisely specify such protocols with existing models. In this paper we present a new model, the Parallel Parameterized Extended Finite State Machine (PaP-EFSM). A protocol system can be modeled with a group of PaP-EFSMs, which work in parallel and can read external variables from each other. We present a two-stage test generation approach for the new model. First, we generate test sequences for the internal variables of each machine; these may be non-executable due to external variables. Second, we process the external variables: we make the sequences for internal variables executable and generate further test sequences for the external variables. For validation, we apply this method to conformance testing of real-life protocols. Devices from different vendors were tested and implementation faults were exposed.
    Download PDF (1684K)
  • Xiaoting WANG, Yiwen WANG, Shichao LI, Ping LI
    Type: PAPER
    Subject area: Switching System
    2015 Volume E98.D Issue 12 Pages 2105-2115
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The crossbar-based switch fabric is widely used in today's high-performance switches because it is internally nonblocking and simple to implement. There are two main switching architectures for crossbar-based switch fabrics: the internally bufferless crossbar switch and the crosspoint buffered crossbar switch. As the internally bufferless crossbar switch requires a complex centralized scheduler that limits its scalability to high speeds, the crosspoint buffered crossbar switch has gained more attention because of its simpler distributed scheduling algorithm and better switching performance. However, almost all scheduling algorithms previously proposed for crosspoint buffered crossbar switches show either unsatisfactory scheduling performance under non-uniform traffic patterns or poor service fairness between input traffic flows. To overcome the disadvantages of existing algorithms, in this paper we propose two novel high-performance scheduling algorithms, MCQF_RR and IMCQF_RR, for crosspoint buffered crossbar switches. Both algorithms have a time complexity of O(log N), where N is the number of input/output ports of the switch. MCQF_RR uses combined weight information about the queue length and service waiting time of input queues to perform scheduling. To further reduce the scheduling complexity and make it feasible for high-speed switches, IMCQF_RR uses compressed queue length information instead of the original queue length information to schedule cells in input VOQs. Simulation results show that MCQF_RR and IMCQF_RR deliver delay performance comparable to existing high-performance scheduling algorithms under both uniform and non-uniform traffic patterns, while maintaining good service fairness under severe non-uniform traffic patterns.
    Download PDF (798K)
  • Hon-Chan CHEN, Tzu-Liang KUNG, Yun-Hao ZOU, Hsin-Wei MAO
    Type: PAPER
    Subject area: Switching System
    2015 Volume E98.D Issue 12 Pages 2116-2122
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, we investigate the fault-tolerant Hamiltonian problems of crossed cubes with a faulty path. More precisely, let P denote any path in an n-dimensional crossed cube CQn for n ≥ 5, and let V(P) be the vertex set of P. We show that CQn-V(P) is Hamiltonian if |V(P)| ≤ n and is Hamiltonian connected if |V(P)| ≤ n-1. Compared with the previous results showing that the crossed cube is (n-2)-fault-tolerant Hamiltonian and (n-3)-fault-tolerant Hamiltonian connected for arbitrary faults, the contribution of this paper indicates that the crossed cube can tolerate more faulty vertices if these vertices happen to form some specific types of structures.
    Download PDF (588K)
  • Huan WANG, Hidenori NAKAZATO
    Type: PAPER
    Subject area: Grid System
    2015 Volume E98.D Issue 12 Pages 2123-2131
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Peer-to-peer (P2P)-Grid systems are being investigated as a platform that converges the Grid and P2P networks for constructing large-scale distributed applications. The highly dynamic nature of P2P-Grid systems greatly affects the execution of distributed programs: uncertainty caused by arbitrary node failure and departure significantly reduces the availability of computing resources and system performance. Checkpoint-and-restart is the most common scheme for fault tolerance because it periodically saves the execution progress onto stable storage. In this paper, we suggest a checkpoint-and-restart mechanism as a fault-tolerant method for applications on P2P-Grid systems. A failure detection mechanism is in general a necessary prerequisite for fault tolerance and fault recovery: given the highly dynamic nature of nodes within P2P-Grid systems, failures must be detected to ensure effective task execution. Therefore, we studied failure detection mechanisms as an integral part of P2P-Grid systems, and discussed how the design of various failure detection algorithms affects their average failure detection time. Numerical analysis and an implementation evaluation are also provided to show the average failure detection times of the various algorithms in real systems. The comparison shows that the WP failure detector achieves the shortest average failure detection time, 8.8s. Our lowest mean time to recovery (MTTR) is also shown to have a distinct advantage, reducing time consumption by about 5.5s over its counterparts.
    Download PDF (1666K)
  • Yuto MIYAKOSHI, Shinya YASUDA, Kan WATANABE, Masaru FUKUSHI, Yasuyuki ...
    Type: PAPER
    Subject area: Grid System
    2015 Volume E98.D Issue 12 Pages 2132-2140
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    This paper addresses the problem of job scheduling in volunteer computing (VC) systems, where each computation job is replicated and allocated to multiple participants (workers) so that incorrect results can be removed by a voting mechanism. In VC job scheduling, the number of workers needed to complete a job is an important factor for system performance; however, it cannot be fixed, because some of the workers may secede in real VC. Existing methods have not considered this problem in job scheduling. We propose a dynamic job scheduling method that considers the expected probability of completion (EPC) for each job based on the probability of worker secession. The key idea of the proposed method is to allocate jobs so that the EPC is always greater than a specified value (SPC). By setting the SPC to a reasonable value, the proposed method completes jobs without excess allocation, which leads to higher performance of VC systems. We assume in this paper that the worker secession probability follows a Weibull distribution, which is known to reflect practical situations well. We derive parameters for the distribution from actual trace data and compare the performance of the proposed and previous methods under the Weibull distribution model, as well as the previous constant-probability model. Simulation results show that the performance of the proposed method is up to 5 times higher than that of the existing method, especially when the time for completing jobs is restricted, while keeping the error rate lower than a required value.
    Download PDF (804K)
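The EPC criterion can be illustrated as follows: if each worker independently stays until the deadline with a Weibull survival probability, the chance that at least a voting quorum of the allocated replicas finish is a binomial tail. The parameter values below are assumptions, not the values fitted from the trace data:

```python
import math

def weibull_survival(t, scale, shape):
    """P(a worker has not seceded by time t) under a Weibull model."""
    return math.exp(-((t / scale) ** shape))

def expected_completion(n, quorum, p):
    """P(at least `quorum` of `n` independent workers, each surviving
    with probability p, return a result) -- a binomial tail."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(quorum, n + 1))

p = weibull_survival(t=10.0, scale=20.0, shape=1.5)
epc = expected_completion(n=5, quorum=3, p=p)
```

A scheduler in the spirit of the proposed method would keep allocating replicas until `epc` exceeds the specified value SPC, and no further.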
  • Yoshikazu INAGAKI, Shinya TAKAMAEDA-YAMAZAKI, Jun YAO, Yasuhiko NAKASH ...
    Type: PAPER
    Subject area: Architecture
    2015 Volume E98.D Issue 12 Pages 2141-2149
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The Energy-aware Multi-mode Accelerator eXtension [24],[25] (EMAX) is equipped with distributed single-port local memories and ring-formed interconnections. The accelerator is designed to achieve extremely high throughput for scientific computations, big data, and image processing, as well as low power consumption. However, before mapping algorithms onto the accelerator, application developers require sufficient knowledge of the hardware organization and its specially designed instructions. They also need significant effort to tune the code for execution efficiency when no well-designed compiler or library is available. To address this problem, we focus on library support for stencil (nearest-neighbor) computations, a class of algorithms commonly used in many partial differential equation (PDE) solvers. In this research, we address the following topics: (1) the system configuration, features, and mnemonics of EMAX; (2) instruction mapping techniques that reduce the amount of data to be read from the main memory; (3) a performance evaluation of the library for PDE solvers. Because the library can reuse local data across outer loop iterations and map many instructions by unrolling the outer loops, the amount of data read from the main memory is significantly reduced, to as little as 1/7 of that of a hand-tuned code. In addition, the stencil library reduced the execution time by 23% compared with a general-purpose processor.
    Download PDF (1657K)
  • Shinya TAKAMAEDA-YAMAZAKI, Hiroshi NAKATSUKA, Yuichiro TANAKA, Kenji K ...
    Type: PAPER
    Subject area: Architecture
    2015 Volume E98.D Issue 12 Pages 2150-2158
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Soft processors are widely used in FPGA-based embedded computing systems. For such purposes, efficiency in resource utilization is as important as high performance. This paper proposes Ultrasmall, a new soft processor architecture for FPGAs. Ultrasmall supports a subset of the MIPS-I instruction set architecture and employs an area-efficient microarchitecture to reduce the use of FPGA resources. While supporting the original 32-bit ISA, Ultrasmall uses a 2-bit serial ALU for all of its operations. This approach significantly reduces resource utilization at the cost of some performance overhead. In addition to these device-independent optimizations, we applied several device-dependent optimizations for Xilinx Spartan-3E FPGAs using 4-input lookup tables (LUTs). Optimizations using specific primitives aggressively reduce the number of occupied slices. Our evaluation results show that Ultrasmall occupies only 84% of the slices of a previous small soft processor while achieving 2.9 times higher performance than the previous approach.
    Download PDF (1101K)
  • Takahiro HIROFUCHI, Isaku YAMAHATA, Satoshi ITOH
    Type: PAPER
    Subject area: Operating System
    2015 Volume E98.D Issue 12 Pages 2159-2167
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Postcopy live migration is a promising alternative for virtual machine (VM) migration that transfers memory pages after switching the execution host of a VM. It allows a shorter and more deterministic migration time than precopy migration. There is, however, a possibility that postcopy migration degrades VM performance just after the execution host is switched. In this paper, we propose a performance improvement technique for postcopy migration that extends the para-virtualized page fault mechanism of a virtual machine monitor. When the guest operating system accesses a not-yet-transferred memory page, the proposed mechanism allows the guest kernel to defer the execution of the current process until the page data has been transferred; in parallel with the page transfer, the guest kernel can yield the VCPU to other active processes. We implemented the proposed technique in our postcopy migration mechanism for Qemu/KVM. Through experiments, we confirmed that our technique successfully alleviates the performance degradation of postcopy migration for web server and database benchmarks.
    Download PDF (2222K)
  • Takatsugu ONO, Yotaro KONISHI, Teruo TANIMOTO, Noboru IWAMATSU, Takash ...
    Type: PAPER
    Subject area: Storage System
    2015 Volume E98.D Issue 12 Pages 2168-2177
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Big data analysis and data storing applications require a huge volume of storage and high I/O performance. Applications can achieve a high level of performance and cost efficiency by exploiting the high I/O performance of direct attached storage (DAS) such as internal HDDs. However, with the size of stored data ever increasing, replacing servers becomes difficult because the internal HDDs contain huge amounts of data. Generally, the data is copied via Ethernet when transferring it from the internal HDDs to a new server; as the amount of data continues to increase rapidly, such transfers over Ethernet take a prohibitively long time. A storage area network such as iSCSI can be used to avoid this problem because the data can be shared among servers, but this decreases performance and increases costs. Improving flexibility without incurring I/O performance degradation is therefore required to improve the DAS architecture. In response to this issue, we propose FlexDAS, which improves the flexibility of direct attached storage by using a disk area network (DAN) without degrading I/O performance. A resource manager connects or disconnects computation nodes and HDDs via the FlexDAS switch, which supports the SAS and SATA protocols. This function enables servers to be replaced in a short period of time. We developed a prototype FlexDAS switch and quantitatively evaluated the architecture. Results show that the FlexDAS switch can disconnect and connect an HDD to a server in just 1.16 seconds. We also confirmed that FlexDAS improves the performance of data-intensive applications by up to 2.84 times compared with iSCSI.
    Download PDF (1405K)
  • Shoichi HIRASAWA, Hiroyuki TAKIZAWA, Hiroaki KOBAYASHI
    Type: PAPER
    Subject area: Software
    2015 Volume E98.D Issue 12 Pages 2178-2186
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Automatic performance tuning of a practical application can be time-consuming and sometimes infeasible, because it often needs to evaluate the performance of a large number of code variants to find the best one. Hence, in this paper, a lightweight rollback mechanism is proposed to evaluate each code variant at low cost. In the proposed mechanism, once one code variant of a target code block has been executed, the execution state is rolled back to the state before the block was executed, so that only the block is executed repeatedly to find the best code variant. The mechanism can also terminate a code variant whose execution time exceeds the shortest execution time observed so far. As a result, it avoids executing the whole application many times and thus reduces the timing overhead of the auto-tuning process required to find the best code variant.
    Download PDF (702K)
  • Toshihiro YAMAUCHI, Masahiro TSURUYA, Hideo TANIGUCHI
    Type: LETTER
    Subject area: Operating System
    2015 Volume E98.D Issue 12 Pages 2187-2191
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Microkernel operating systems (OSes) use zero-copy communication to reduce the overhead of copying transfer data, because communication between OS servers occurs frequently in microkernel OSes. However, when the memory management unit manages the translation lookaside buffer (TLB) in software, TLB misses tend to increase the overhead of interprocess communication (IPC) between OS servers running on a microkernel OS. Thus, improving the control method of a software-managed TLB is important for microkernel OSes. This paper proposes a fast control method for a software-managed TLB that manages page attachment in the area used for IPC through TLB entries instead of page tables. Consequently, TLB misses in that area can be avoided, and IPC performance improves. Taking the SH-4 processor as an example of a processor with a software-managed TLB, this paper describes the design and implementation of the proposed method for the AnT operating system, and reports its evaluation results.
    Download PDF (257K)
  • Wei XIONG, Ye WU, Luo CHEN, Ning JING
    Type: LETTER
    Subject area: Storage System
    2015 Volume E98.D Issue 12 Pages 2192-2195
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The challenge of providing a divide-and-conquer strategy for large geospatial raster data input/output (I/O) is longstanding, and solutions need to change with advances in technology and hardware. After analyzing the cause of the problems of the traditional parallel raster I/O mode, we propose a parallel I/O strategy using file views to solve these problems, implemented with Message Passing Interface I/O (MPI-IO). Experimental results show how the file view approach can be effectively married to the General Parallel File System (GPFS): a suitable file view setting provides an efficient solution to parallel geospatial raster data I/O.
    Download PDF (4436K)
  • Yuta MATSUI, Shinji FUKUMA, Shin-ichiro MORI
    Type: LETTER
    Subject area: Software
    2015 Volume E98.D Issue 12 Pages 2196-2198
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, a repeatable hybrid parallel implementation of inverse matrix computation using the SMW formula is proposed. The authors had previously proposed a hybrid parallel algorithm for inverse matrix computation. It is reasonably fast for computing an inverse matrix once, but it is hard to apply repeatedly for consecutive computations, since a relocation of the large matrix is required at the beginning of each iteration. To eliminate the relocation of the large input matrix, which is the output of the inverse matrix computation from the previous time step, the computation algorithm has been redesigned so that the required portion of the input matrix on each node coincides with the output portion of the previously computed matrix. This makes it possible to apply the SMW formula repeatedly and efficiently to compute inverse matrices in a time-series simulation.
    Download PDF (457K)
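The SMW (Sherman-Morrison-Woodbury) formula the letter relies on updates a known inverse instead of recomputing it from scratch; a minimal pure-Python sketch of the rank-1 case:

```python
def sherman_morrison(Ainv, u, v):
    """Given Ainv = A^{-1}, return (A + u v^T)^{-1} via the
    Sherman-Morrison formula (the rank-1 case of SMW):
      (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
    """
    n = len(Ainv)
    Au = [sum(Ainv[i][j] * u[j] for j in range(n)) for i in range(n)]  # A^{-1} u
    vA = [sum(v[i] * Ainv[i][j] for i in range(n)) for j in range(n)]  # v^T A^{-1}
    denom = 1.0 + sum(v[i] * Au[i] for i in range(n))
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

Ainv = [[0.5, 0.0], [0.0, 0.25]]   # inverse of A = [[2, 0], [0, 4]]
u = [1.0, 2.0]
v = [3.0, 1.0]
Binv = sherman_morrison(Ainv, u, v)  # inverse of A + u v^T = [[5, 1], [6, 6]]
```

In the time-series setting of the letter, the inverse produced at one step becomes the input of the next, which is exactly the data layout the redesigned algorithm keeps aligned on each node.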
Regular Section
  • Asahi TAKAOKA, Shingo OKUMA, Satoshi TAYU, Shuichi UENO
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2015 Volume E98.D Issue 12 Pages 2199-2206
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The harmonious coloring of an undirected simple graph is a vertex coloring such that adjacent vertices are assigned different colors and each pair of colors appears together on at most one edge. The harmonious chromatic number of a graph is the least number of colors used in such a coloring. The harmonious chromatic number of a path is known, whereas the problem of finding the harmonious chromatic number is NP-hard even for trees with pathwidth at most 2. Hence, we consider the harmonious coloring of trees with pathwidth 1, which are also known as caterpillars. This paper shows the harmonious chromatic number of a caterpillar with at most one vertex of degree more than 2. We also show an upper bound on the harmonious chromatic number of a 3-regular caterpillar.
    Download PDF (426K)
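The two defining conditions of a harmonious coloring translate directly into a checker; a minimal sketch (the example graph is the path P4, not one of the paper's caterpillars):

```python
def is_harmonious(edges, color):
    """A vertex coloring is harmonious if adjacent vertices get different
    colors and each unordered pair of colors appears on at most one edge."""
    seen_pairs = set()
    for a, b in edges:
        if color[a] == color[b]:
            return False                  # adjacent vertices must differ
        pair = frozenset((color[a], color[b]))
        if pair in seen_pairs:
            return False                  # a color pair reused on two edges
        seen_pairs.add(pair)
    return True

path_edges = [(0, 1), (1, 2), (2, 3)]     # the path P4
```

For P4, three colors suffice (e.g., 0-1-2-0), whereas reusing two colors forces some color pair onto two edges.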
  • Yasin OGE, Masato YOSHIMI, Takefumi MIYOSHI, Hideyuki KAWASHIMA, Hidet ...
    Type: PAPER
    Subject area: Computer System
    2015 Volume E98.D Issue 12 Pages 2207-2217
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, we propose Configurable Query Processing Hardware (CQPH), an FPGA-based accelerator for continuous query processing over data streams. CQPH is a highly optimized and minimal-overhead execution engine designed to deliver real-time response for high-volume data streams. Unlike most of the other FPGA-based approaches, CQPH provides on-the-fly configurability for multiple queries with its own dynamic configuration mechanism. With a dedicated query compiler, SQL-like queries can be easily configured into CQPH at run time. CQPH supports continuous queries including selection, group-by operation and sliding-window aggregation with a large number of overlapping sliding windows. As a proof of concept, a prototype of CQPH is implemented on an FPGA platform for a case study. Evaluation results indicate that a given query can be configured within just a few microseconds, and the prototype implementation of CQPH can process over 150 million tuples per second with a latency of less than a microsecond. Results also indicate that CQPH provides linear scalability to increase its flexibility (i.e., on-the-fly configurability) without sacrificing performance (i.e., maximum allowable clock speed).
    Download PDF (2056K)
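The sliding-window aggregation that CQPH implements in hardware can be sketched in software as follows; the window size and the sum aggregate are arbitrary illustrative choices, not CQPH's fixed configuration:

```python
from collections import deque

def sliding_window_sums(stream, window):
    """Emit the aggregate (here: a sum) of the last `window` tuples for
    every arriving tuple once the first window has filled -- the software
    analogue of the overlapping-window aggregation CQPH accelerates."""
    buf = deque(maxlen=window)
    running = 0
    out = []
    for value in stream:
        if len(buf) == window:
            running -= buf[0]   # the tuple about to slide out of the window
        buf.append(value)
        running += value
        if len(buf) == window:
            out.append(running)
    return out
```

Keeping a running aggregate instead of re-summing each window is what makes the per-tuple work constant, which is the property a hardware pipeline exploits.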
  • Hirohisa AMAN, Sousuke AMASAKI, Takashi SASAKI, Minoru KAWAHARA
    Type: PAPER
    Subject area: Software Engineering
    2015 Volume E98.D Issue 12 Pages 2218-2228
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    This paper focuses on the power of comments to predict fault-prone programs. In general, comments along with executable statements enhance the understandability of programs. However, comments may also be used to mask a lack of readability in the program; hence, well-written comments are referred to as “deodorant to mask code smells” in the field of code refactoring. This paper conducts an empirical analysis to examine whether Lines of Comments (LCM) written inside a method's body is a noteworthy metric for analyzing fault-proneness in Java methods. The empirical results show the following two findings: (1) more-commented methods (methods having more comments than the amount estimated from their size and complexity) are about 1.6-2.8 times more likely to be faulty than the others, and (2) LCM can be a useful factor in fault-prone method prediction models along with the method size and method complexity.
    Download PDF (631K)
  • Kazuaki NAKAMURA, Takuya FUNATOMI, Atsushi HASHIMOTO, Mayumi UEDA, Mic ...
    Type: PAPER
    Subject area: Human-computer Interaction
    2015 Volume E98.D Issue 12 Pages 2229-2241
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The amount of seasoning used during food preparation is important information that helps people cook delicious dishes as well as take care of their health. In this paper, we propose a near real-time automated system for measuring and recording the amount of seasonings used during food preparation. The proposed system is equipped with two devices: electronic scales and a camera. Seasoning bottles are normally placed on the electronic scales, which continually measure the total weight of the bottles placed on them. When a chef uses a certain seasoning, he/she first picks up the bottle containing it from the scales, adds the seasoning to a dish, and then returns the bottle to the scales. In this process, the chef's picking and returning actions are monitored by the camera, and the consumed amount of each seasoning is calculated as the difference in weight before and after it is used. We evaluated the performance of the proposed system in 301 trials of actual food preparation performed by seven participants. The results revealed that our system successfully measured the consumption of seasonings in 60.1% of the trials.
    Download PDF (1357K)
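The measurement rule of the system reduces to a weight difference around each pick-up/return pair; a minimal sketch of that bookkeeping (the event format is an assumption for illustration, not the system's actual interface):

```python
def consumed_amounts(events):
    """Each event is (bottle_id, action, scale_total_weight) with action
    'pickup' or 'return'.  The amount consumed from a bottle is the scale
    reading just before its pickup minus the reading after its return."""
    before = {}   # scale reading recorded at each bottle's pickup
    used = {}     # accumulated consumption per bottle
    for bottle, action, weight in events:
        if action == "pickup":
            before[bottle] = weight
        elif action == "return":
            used[bottle] = used.get(bottle, 0.0) + (before.pop(bottle) - weight)
    return used
```

In the real system the camera supplies the pickup/return events (and which bottle was taken), while the scales supply the weights.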
  • Xiaoli GONG, Yanjun LIU, Yang JIAO, Baoji WANG, Jianchao ZHOU, Haiyang ...
    Type: PAPER
    Subject area: Human-computer Interaction
    2015 Volume E98.D Issue 12 Pages 2242-2249
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    An earthquake is a destructive natural disaster that cannot be predicted accurately and causes devastating damage and losses. In fact, much of the damage could be prevented if people knew what to do during and after earthquakes. Earthquake education is the most important way to raise public awareness and mitigate the damage caused by earthquakes. Generally, earthquake education consists of conducting traditional earthquake drills in schools or communities, or experiencing an earthquake through the use of an earthquake simulator. However, these approaches are unrealistic or expensive to apply, especially in underdeveloped areas where earthquakes occur frequently. In this paper, an earthquake drill simulation system based on virtual reality (VR) technology is proposed. A user is immersed in a 3D virtual earthquake environment through a head-mounted display and controls an avatar in the virtual scene via Kinect to respond to the simulated earthquake environment generated by SIGVerse, a simulation platform. It is a cost-effective solution that is easy to deploy. The design and implementation of this VR system are presented, and a dormitory earthquake simulation is conducted. Results show that powerful earthquakes can be simulated successfully and that VR technology can be applied to earthquake drills.
    Download PDF (945K)
  • Ye AI, Feng MIAO, Qingmao HU, Weifeng LI
    Type: PAPER
    Subject area: Pattern Recognition
    2015 Volume E98.D Issue 12 Pages 2250-2256
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, a novel method for segmenting high-grade brain tumors from multi-sequence magnetic resonance images is presented. Firstly, a Gaussian mixture model (GMM) is introduced to derive an initial posterior probability by fitting the fluid attenuation inversion recovery (FLAIR) histogram. Secondly, grayscale and region properties are extracted from the different sequences. Thirdly, grayscale and region characteristics with different weights are used to adjust the posterior probability. Finally, a cost function based on the posterior probability and neighborhood information is formulated and optimized via graph cut. Experimental results on a public dataset of images from 20 high-grade brain tumor patients show that the proposed method achieves a Dice coefficient of 78%, higher than that of the standard graph cut algorithm without the probability-adjusting step and of other cost function-based methods.
    Download PDF (1544K)
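    The first step of the described method, deriving an initial posterior probability from a fitted GMM, can be illustrated with a minimal one-dimensional sketch; all mixture parameters below are assumed for illustration and are not taken from the paper.

    ```python
    # Minimal sketch (assumed parameters, not fitted to real FLAIR data) of how
    # a two-component 1-D Gaussian mixture yields an initial posterior
    # probability for the tumor class from an intensity value, via Bayes' rule.
    import math

    def gaussian(x: float, mu: float, sigma: float) -> float:
        """Normal density N(x; mu, sigma^2)."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def tumor_posterior(x: float,
                        w_tumor: float = 0.3, mu_tumor: float = 180.0, s_tumor: float = 20.0,
                        w_bg: float = 0.7, mu_bg: float = 100.0, s_bg: float = 30.0) -> float:
        """P(tumor | intensity x) under the two-component mixture."""
        p_t = w_tumor * gaussian(x, mu_tumor, s_tumor)
        p_b = w_bg * gaussian(x, mu_bg, s_bg)
        return p_t / (p_t + p_b)
    ```

    In the paper this posterior is subsequently adjusted with weighted grayscale and region characteristics before entering the graph-cut cost function.
    
    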
  • Guanwen ZHANG, Jien KATO, Yu WANG, Kenji MASE
    Type: PAPER
    Subject area: Pattern Recognition
    2015 Volume E98.D Issue 12 Pages 2257-2270
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, we propose a patch-wise learning based approach to deal with the multiple-shot people re-identification task. In the proposed approach, re-identification is formulated as a patch-wise set-to-set matching problem, with each patch set being matched using a specifically learned Mahalanobis distance metric. The proposed approach has two advantages: (1) a patch-wise representation that moderates the ambiguity of a non-rigid matching problem (of the human body) into an approximately rigid one (of body parts); (2) a patch-wise learning algorithm that enables more constraints to be included in the learning process and results in distance metrics of high quality. We evaluate the proposed approach on popular benchmark datasets and confirm its competitive performance compared with state-of-the-art methods.
    Download PDF (4495K)
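    The learned Mahalanobis metric at the core of the patch-wise matching can be illustrated as follows; the matrix used here is a stand-in, not a metric produced by the paper's learning algorithm.

    ```python
    # Illustrative sketch of the distance underlying set-to-set matching:
    # d_M(x, y) = sqrt((x - y)^T M (x - y)) for a positive semi-definite M.
    # The identity matrix below is a placeholder, not a learned metric.
    import math

    def mahalanobis(x: list, y: list, M: list) -> float:
        """Mahalanobis distance between vectors x and y under matrix M."""
        d = [a - b for a, b in zip(x, y)]
        # quadratic form (x - y)^T M (x - y)
        q = sum(d[i] * M[i][j] * d[j]
                for i in range(len(d)) for j in range(len(d)))
        return math.sqrt(q)

    identity = [[1.0, 0.0], [0.0, 1.0]]  # reduces to Euclidean distance
    print(mahalanobis([1.0, 2.0], [4.0, 6.0], identity))  # → 5.0
    ```

    Metric learning replaces the identity with a matrix estimated from labeled patch pairs, so that distances shrink for same-person patches and grow for different-person ones.
    
    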
  • Xiaoyun WANG, Seiichi YAMAMOTO
    Type: PAPER
    Subject area: Speech and Hearing
    2015 Volume E98.D Issue 12 Pages 2271-2279
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Recognition of second language (L2) speech is still a challenging task even for state-of-the-art automatic speech recognition (ASR) systems, partly because pronunciation by L2 speakers is usually significantly influenced by the speakers' mother tongue. The authors previously proposed using a reduced phoneme set (RPS) instead of the canonical one of L2 when the mother tongue of speakers is known, and demonstrated that this reduced phoneme set improved recognition performance through experiments using English utterances spoken by Japanese speakers. However, the proficiency of L2 speakers varies widely, as does the influence of the mother tongue on their pronunciation. As a result, the effect of the reduced phoneme set differs depending on the speakers' proficiency in L2. In this paper, the authors examine the relation between speakers' proficiency and a reduced phoneme set customized for them. The experimental results are then used as the basis of a novel speech recognition method using a lexicon in which the pronunciation of each lexical item is represented by multiple reduced phoneme sets, and the implementation of a language model most suitable for that lexicon is described. Experimental results demonstrate the high validity of the proposed method.
    Download PDF (1978K)
  • Duy Khanh NINH, Yoichi YAMASHITA
    Type: PAPER
    Subject area: Speech and Hearing
    2015 Volume E98.D Issue 12 Pages 2280-2289
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    A conventional HMM-based speech synthesis system for Hanoi Vietnamese often suffers from hoarse quality due to incomplete F0 parameterization of glottalized tones. Since estimating F0 from glottalized waveform is rather problematic for usual F0 extractors, we propose a pitch marking algorithm where pitch marks are propagated from regular regions of a speech signal to glottalized ones, from which complete F0 contours for the glottalized tones are derived. The proposed F0 parameterization scheme was confirmed to significantly reduce the hoarseness whilst slightly improving the tone naturalness of synthetic speech by both objective and listening tests. The pitch marking algorithm works as a refinement step based on the results of an F0 extractor. Therefore, the proposed scheme can be combined with any F0 extractor.
    Download PDF (960K)
  • Dae-Chul KIM, Wang-Jun KYUNG, Ho-Gun HA, Yeong-Ho HA
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2015 Volume E98.D Issue 12 Pages 2290-2298
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    The role of an optical low-pass filter (OLPF) in a digital still camera is to remove the high spatial frequencies that cause aliasing, thereby enhancing the image quality. However, this also causes some loss of detail. Conversely, when an image is captured without the OLPF, moiré generally appears in the high spatial frequency regions of the image. Accordingly, this paper presents a moiré reduction method that allows omission of the OLPF. Since most digital still cameras use a CCD or CMOS sensor with a Bayer pattern, moiré patterns and color artifacts are simultaneously induced by aliasing at high spatial frequencies. Therefore, in this study, moiré reduction is performed in both the luminance channel, to remove the moiré patterns, and the color channel, to reduce color smearing. To detect the moiré patterns, the spatial frequency response (SFR) of the camera is first analyzed. The moiré regions are identified using patterns related to the SFR of the camera and then analyzed in the frequency domain. The moiré patterns are reduced by removing their frequency components, represented by the inflection point between the high-frequency and DC components in the moiré region. To reduce the color smearing, color-changing regions are detected using the color variation ratios of the RGB channels and then corrected by multiplying by the average surrounding colors. Experiments confirm that the proposed method is able to reduce the moiré in both the luminance and color channels, while also preserving detail.
    Download PDF (1467K)
  • Houari SABIRIN, Hiroshi SANKOH, Sei NAITO
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2015 Volume E98.D Issue 12 Pages 2299-2307
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Identifying moving objects in video recorded by a range sensor camera is difficult because of the limited information available for classifying different objects. On the other hand, the infrared signal from a range sensor camera is more robust to extreme luminance, such as when the monitored area is too bright or too dark. This paper proposes a method for detecting and tracking moving objects in image sequences captured by stationary range sensor cameras. Here, depth information is utilized to correctly identify each detected object. Firstly, camera calibration and background subtraction are performed to separate the background from the moving objects. Next, a 2D projection mapping is performed to obtain the location and contour of the objects in the 2D plane. Based on this information, graph matching is performed using features extracted from the 2D data, namely object position, size, and behavior. By observing changes in the number of objects and in the objects' positions relative to each other, similarity matching is performed to track the objects in the temporal domain. Experimental results show that with similarity matching, objects can be correctly identified even during occlusion.
    Download PDF (1645K)
  • Takatsugu HIRAYAMA, Toshiya OHIRA, Kenji MASE
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2015 Volume E98.D Issue 12 Pages 2308-2316
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Intelligent information systems captivate people's attention. Examples of such systems include driving support vehicles capable of sensing driver state and communication robots capable of interacting with humans. Modeling how people search for visual information is indispensable for designing such systems. In this paper, we focus on human visual attention, which is closely related to visual search behavior. We propose a computational model to estimate human visual attention during a visual target search task. Existing models estimate visual attention using the ratio between a representative value of a visual feature of the target stimulus and that of the distractors or background. These models, however, often perform poorly on difficult search tasks that require a sequential spotlighting process. For such tasks, the linear separability effect of a visual feature distribution should be considered. Hence, we introduce this effect into spatially localized activation. Concretely, our top-down model estimates target-specific visual attention using Fisher's variance ratio between the visual feature distribution of a local region in the field of view and that of the target stimulus. We confirm the effectiveness of our computational model through a visual search experiment.
    Download PDF (2492K)
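    The Fisher's-variance-ratio score described above can be sketched for one-dimensional feature samples; this is a minimal illustration of the statistic, not the authors' implementation.

    ```python
    # Minimal sketch of Fisher's variance ratio between two 1-D feature
    # samples: between-class separation over within-class scatter. A higher
    # ratio means the local region's features separate more easily from the
    # target's, analogous to the top-down attention score described above.
    from statistics import mean, pvariance

    def fisher_ratio(region_features: list, target_features: list) -> float:
        """(difference of means)^2 divided by the sum of the variances."""
        m1, m2 = mean(region_features), mean(target_features)
        v1, v2 = pvariance(region_features), pvariance(target_features)
        return (m1 - m2) ** 2 / (v1 + v2)
    ```

    For example, samples centered at 1 and 11 with small spread yield a large ratio, while overlapping samples yield a ratio near zero.
    
    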
  • Yongsoo JOO, Sangsoo PARK, Hyokyung BAHN
    Type: LETTER
    Subject area: Computer System
    2015 Volume E98.D Issue 12 Pages 2317-2321
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Application prefetchers improve application launch performance on HDDs through either I/O reordering or I/O interleaving, but there has been no proposal to combine the two techniques. We present a new algorithm to combine both approaches, and demonstrate that it reduces cold start launch time by 50%.
    Download PDF (1337K)
  • Xiao XUAN, Xiaoqiong ZHAO, Ye WANG, Shanping LI
    Type: LETTER
    Subject area: Software Engineering
    2015 Volume E98.D Issue 12 Pages 2322-2327
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Bugs in industrial financial systems have not been extensively studied. To address this gap, we conducted an empirical study of bugs in three systems: PMS, β-Analyzer, and OrderPro. Results showed that the three most common types of bugs in industrial financial systems are internal interface (19.00%), algorithm/method (17.67%), and logic (15.00%).
    Download PDF (164K)
  • Masoud REYHANI HAMEDANI, Sang-Wook KIM
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2015 Volume E98.D Issue 12 Pages 2328-2332
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this paper, we propose SimCS (similarity based on contribution scores) to compute the similarity of scientific papers. For similarity computation, we exploit a notion of a contribution score that indicates how much a paper contributes to another paper citing it. We also consider the author dominance of papers in computing contribution scores. We perform extensive experiments with a real-world dataset to show the superiority of SimCS. In comparison with SimCC, the state-of-the-art method, SimCS not only requires no extra parameter tuning but also shows higher accuracy in similarity computation.
    Download PDF (6724K)
  • Jeongyeup PAEK, Byung-Seo KIM
    Type: LETTER
    Subject area: Information Network
    2015 Volume E98.D Issue 12 Pages 2333-2336
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Although the IEEE 802.15.4 standard defines processes for handling the loss of beacon frames in beacon-enabled low-rate wireless personal area networks (LR-WPANs), these processes are neither efficient nor detailed. This letter proposes an enhanced process to improve the throughput of LR-WPANs under beacon frame losses. The key idea of the proposed enhancement is to allow devices that have not received a beacon frame, due to packet loss, to transmit their data in the contention period and even in the inactive period, instead of holding pending frames for the whole superframe period. The proposed protocol is evaluated through mathematical analysis as well as simulations, and the throughput improvement of LR-WPANs is demonstrated.
    Download PDF (374K)
  • Shuoyan LIU, Kai FANG
    Type: LETTER
    Subject area: Pattern Recognition
    2015 Volume E98.D Issue 12 Pages 2337-2340
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Face verification in the presence of age progression is an important problem that has not been widely addressed. Despite appearance changes due to aging, facial images of the same person remain more similar to each other than to images of different individuals. Hence, we design common and adapted vocabularies, where the common vocabulary describes the content of the general population and the adapted vocabulary represents the specific characteristics of one image of a facial pair. The other image is then characterized by a concatenated histogram of common and adapted visual word counts, termed the “age-invariant distinctive representation”. This representation describes whether the image content is better modeled by the common vocabulary or by the corresponding adapted vocabulary, and it is used to perform face verification. The proposed approach is tested on the FG-NET dataset and a collection of real-world facial images from identification cards. The experimental results demonstrate the effectiveness of the proposed method for identity verification at a modest computational cost.
    Download PDF (1131K)
  • Su-Jin CHOI, Jeong-Yong BOO, Ki-Jun KIM, Hochong PARK
    Type: LETTER
    Subject area: Speech and Hearing
    2015 Volume E98.D Issue 12 Pages 2341-2344
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    We propose a method of enhancing the performance of a cross-talk canceller for a four-speaker system with respect to sweet spot size and ringing effect. For the large sweet spot of a cross-talk canceller, the speaker layout needs to be symmetrical to the listener's position. In addition, a ringing effect of the cross-talk canceller is reduced when many speakers are located close to each other. Based on these properties, the proposed method first selects the two speakers in a four-speaker system that are most symmetrical to the target listener's position and then adds the remaining speakers between these two to the final selection. By operating only these selected speakers, the proposed method enlarges the sweet spot size and reduces the ringing effect. We conducted objective and subjective evaluations and verified that the proposed method improves the performance of the cross-talk canceller compared to the conventional method.
    Download PDF (578K)
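    The speaker-selection rule described above, which first picks the pair most symmetrical to the listener's position, can be sketched as follows; representing the layout by azimuth angles and measuring symmetry as the angle sum closest to zero are assumptions for illustration, not the paper's exact criterion.

    ```python
    # Hypothetical sketch of the first selection step: from a four-speaker
    # layout given as azimuth angles (degrees, relative to the listener's
    # facing direction), pick the pair whose angles are most symmetrical,
    # i.e. whose sum is closest to zero.
    from itertools import combinations

    def most_symmetric_pair(azimuths: list) -> tuple:
        """Indices of the speaker pair with the most symmetrical azimuths."""
        return min(combinations(range(len(azimuths)), 2),
                   key=lambda p: abs(azimuths[p[0]] + azimuths[p[1]]))

    # Example layout: speakers at -50, -20, +25, and +60 degrees.
    print(most_symmetric_pair([-50.0, -20.0, 25.0, 60.0]))  # → (1, 2)
    ```

    The method then adds the remaining speakers located between the selected pair, enlarging the sweet spot while keeping the active speakers close together to reduce ringing.
    
    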
  • Shin Jae KANG, Kang Hyun LEE, Nam Soo KIM
    Type: LETTER
    Subject area: Speech and Hearing
    2015 Volume E98.D Issue 12 Pages 2345-2348
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    In this letter, we propose a novel supervised pre-training technique for deep neural network (DNN)-hidden Markov model systems to achieve robust speech recognition in adverse environments. In the proposed approach, our aim is to initialize the DNN parameters such that they yield abstract features robust to acoustic environment variations. In order to achieve this, we first derive the abstract features from an early fine-tuned DNN model which is trained based on a clean speech database. By using the derived abstract features as the target values, the standard error back-propagation algorithm with the stochastic gradient descent method is performed to estimate the initial parameters of the DNN. The performance of the proposed algorithm was evaluated on Aurora-4 DB, and better results were observed compared to a number of conventional pre-training methods.
    Download PDF (82K)
  • Lili PAN, Qiangsen HE, Yali ZHENG, Mei XIE
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2015 Volume E98.D Issue 12 Pages 2349-2352
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Facial age estimation requires accurately capturing the mapping between facial features and the corresponding ages, so as to precisely estimate the ages of new input facial images. Previous works usually use a one-layer regression model to learn this complex mapping, resulting in low estimation accuracy. In this letter, we propose a new gender-specific regression model with a two-layer structure for more accurate age estimation. Unlike recent two-layer models that use a global regressor to calculate cumulative attributes (CA) and then use the CA to estimate age, we use gender-specific regressors to calculate the CA with more flexibility and precision. Extensive experimental results on the FG-NET and Morph 2 datasets demonstrate the superiority of our method over other state-of-the-art age estimation methods.
    Download PDF (456K)
  • MinKyu KIM, SunHo KI, YoungDuke SEO, JinHong PARK, ChuShik JHON
    Type: LETTER
    Subject area: Computer Graphics
    2015 Volume E98.D Issue 12 Pages 2353-2357
    Published: December 01, 2015
    Released: December 01, 2015
    JOURNALS FREE ACCESS
    Recently, the mobile graphics industry has demanded ultra-realistic visual quality at 60 fps under a limited GPU power budget. For graphics-heavy applications running at 30 fps, very noticeable flickering artifacts are easily observed. Furthermore, the workload imposed by high resolutions at high frame rates directly shortens battery life. Unlike recent frame-rate upsampling algorithms, which remedy the flickering but incur significant overhead to reconstruct intermediate frames, we propose dynamic rendering quality scaling (DRQS), which combines dynamic rendering based on resolution changes with quality scaling to increase the frame rate at negligible overhead using a transform matrix. Furthermore, DRQS reduces the workload by up to 32% without perceptible visual changes for graphics-light applications.
    Download PDF (1494K)