IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E100.D , Issue 12
Showing 1-45 articles out of 45 articles from the selected issue
Special Section on Parallel and Distributed Computing and Networking
  • Satoshi FUJITA
    2017 Volume E100.D Issue 12 Pages 2748
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (50K)
  • Muhammad ALFIAN AMRIZAL, Atsuya UNO, Yukinori SATO, Hiroyuki TAKIZAWA, ...
    Type: PAPER
    Subject area: High performance computing
    2017 Volume E100.D Issue 12 Pages 2749-2760
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Coordinated checkpointing is a widely used checkpoint/restart (CPR) protocol for fault tolerance in large-scale HPC systems. However, this protocol involves massive I/O concentration, resulting in considerably high checkpoint overhead and energy consumption. This paper focuses on speculative checkpointing, a CPR mechanism that distributes checkpoints in time to avoid I/O concentration. We propose execution-time and energy models for speculative checkpointing, and investigate its energy-performance characteristics when it is adopted in exascale systems. Using these models, we study the benefit of speculative checkpointing over coordinated checkpointing under various realistic scenarios for exascale HPC systems. We show that, compared to coordinated checkpointing, speculative checkpointing can achieve up to an 11% energy reduction at the cost of a relatively small increase in execution time. In addition, a significant energy-performance trade-off is expected when the system scale exceeds 1.2 million nodes.
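    The overhead side of this trade-off can be illustrated with the classic first-order checkpointing model of Young; this is a generic textbook sketch, not the authors' model, and the checkpoint cost and MTBF values below are made-up assumptions.

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's first-order optimal checkpoint interval: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def expected_overhead(checkpoint_cost, interval, mtbf):
    """First-order expected overhead fraction: time spent writing checkpoints,
    plus expected re-computation (on average half an interval plus one
    checkpoint is lost per failure)."""
    return checkpoint_cost / interval + (interval / 2.0 + checkpoint_cost) / mtbf

C = 60.0        # checkpoint write time [s] (assumed)
MTBF = 3600.0   # system mean time between failures [s] (assumed)
tau = young_interval(C, MTBF)
print(round(tau, 1))                          # optimal interval
print(round(expected_overhead(C, tau, MTBF), 3))
```

    Shrinking the effective checkpoint cost (e.g., by avoiding I/O concentration) lowers both terms, which is the lever speculative checkpointing pulls.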

    Download PDF (1227K)
  • Shunsuke YAGAI, Masato OGUCHI, Miyuki NAKANO, Saneyasu YAMAGUCHI
    Type: PAPER
    Subject area: Database system
    2017 Volume E100.D Issue 12 Pages 2761-2770
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In data centers, large numbers of computers run simultaneously and consume an enormous amount of energy. Several approaches to this issue have been published. An energy-efficient storage management method that cooperates with applications is one effective approach: data and storage devices are managed with application support, and the power consumption of the storage devices is significantly decreased. However, existing studies do not take virtualized environments into account. Recently, many data-intensive applications have been run in virtualized environments, such as cloud computing environments. In this paper, we focus on a virtualized environment wherein multiple virtual machines run on a physical computer and a data-intensive application runs on each virtual machine, and we discuss a method for reducing storage device power consumption using application support. First, we propose two storage management methods using application information. One method optimizes the inter-HDD file layout: it removes frequently accessed files from a certain HDD and switches that HDD to power-off mode. To balance loads and reduce seek distances, this method separates heavily accessed files and consolidates the files of virtual machines with low access frequency. The other method optimizes the intra-HDD file layout in addition to performing inter-HDD optimization, placing frequently accessed files near each other. Second, we present experimental results demonstrating that the proposed methods can create HDD access intervals long enough for power-off mode to be used, thereby reducing the power consumption of storage devices.
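    The inter-HDD idea of packing hot files together so that other disks can be powered off can be sketched as a toy greedy packer. This is a hypothetical illustration, not the paper's algorithm; the capacities, access counts, and idle threshold are invented.

```python
def consolidate(files, disk_capacity, n_disks):
    """Greedy inter-HDD layout: place the hottest files first so they cluster
    on the leading disks, leaving trailing disks cold enough to power off."""
    disks = [{"used": 0, "accesses": 0, "files": []} for _ in range(n_disks)]
    for name, size, freq in sorted(files, key=lambda f: -f[2]):
        for d in disks:                       # first-fit by remaining capacity
            if d["used"] + size <= disk_capacity:
                d["used"] += size
                d["accesses"] += freq
                d["files"].append(name)
                break
    return disks

# (name, size, access frequency) -- invented workload
files = [("a", 40, 900), ("b", 30, 800), ("c", 50, 5), ("d", 20, 2)]
layout = consolidate(files, disk_capacity=100, n_disks=2)
idle = [i for i, d in enumerate(layout) if d["accesses"] < 10]  # power-off candidates
print(idle)
```

    The real method additionally uses application hints and intra-HDD placement, but the payoff is the same: long idle intervals on the cold disks.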

    Download PDF (1702K)
  • Yu-Liang LIU, Ruey-Chyi WU
    Type: PAPER
    Subject area: Interconnection networks
    2017 Volume E100.D Issue 12 Pages 2771-2780
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The exchanged hypercube, denoted by EH(s,t), is a graph obtained by systematically removing edges from the corresponding hypercube while preserving many of the hypercube's attractive properties. Moreover, the ring-connected topology is one of the most promising topologies in Wavelength Division Multiplexing (WDM) optical networks. Let R_n denote a ring-connected topology. In this paper, we address the routing and wavelength assignment problem for implementing the EH(s,t) communication pattern on R_n, where n=s+t+1. We design an embedding scheme, and based on it, we propose a near-optimal wavelength assignment algorithm using 2^(s+t-2)+⌊2^t/3⌋ wavelengths. We also show that the wavelength assignment algorithm uses no more than an additional 25 percent of wavelengths (or ⌊2^(t-1)/3⌋ more), compared to the optimal wavelength assignment algorithm.

    Download PDF (1455K)
  • Takashi YOKOTA, Kanemitsu OOTSU, Takeshi OHKAWA
    Type: PAPER
    Subject area: Interconnection networks
    2017 Volume E100.D Issue 12 Pages 2781-2795
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The interconnection network is an essential component of a parallel computer, since it is responsible for the communication capabilities of the system. It affects system-level performance as well as the physical and logical structure of the system. Although many studies have been reported to advance interconnection network technology, many issues remain to be discussed. One of the most important is congestion management. In an interconnection network, many packets are transferred simultaneously and interfere with each other, and congestion arises as a result of this interference. It spreads quickly, seriously degrades communication performance, and persists for a long time. Thus, the network should be controlled appropriately to suppress congestion and maintain maximum performance. Many studies address the problem and present effective methods; however, the maximal performance achievable in an ideal situation has not been sufficiently clarified. Finding the ideal performance is, in general, an NP-hard problem. This paper introduces the particle swarm optimization (PSO) methodology to overcome this problem. We first formalize an optimization problem suitable for the PSO method and present a simple PSO application as a naive model. Then, we discuss reducing the size of the search space and introduce three practical variations of the PSO computation model: the repetitive, expansion, and coding models. We furthermore introduce some non-PSO methods for comparison. Our evaluation results reveal the high potential of the PSO method: the repetitive and expansion models accelerate collective communication performance by up to 1.72 times compared with the bursty communication condition.
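    For reference, the PSO methodology itself fits in a few lines. The sketch below is the textbook algorithm applied to a toy objective, not the authors' congestion-management formulation; the inertia and attraction weights are typical assumed values.

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization (minimization): each particle's
    velocity is pulled toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and attraction weights (typical values)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

# Toy objective: sphere function, minimum 0 at the origin.
best, val = pso(lambda x: sum(t * t for t in x), dim=2)
print("best value:", val)
```

    In the paper's setting, the "position" would encode a network control configuration and f would be a simulated communication performance, which is what makes the search-space reduction models necessary.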

    Download PDF (2510K)
  • Ryuta KAWANO, Hiroshi NAKAHARA, Ikki FUJIWARA, Hiroki MATSUTANI, Michi ...
    Type: PAPER
    Subject area: Interconnection networks
    2017 Volume E100.D Issue 12 Pages 2796-2807
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    End-to-end network latency has become an important issue for parallel applications on large-scale high-performance computing (HPC) systems. It has been reported that randomly connected inter-switch networks can lower end-to-end network latency. This latency reduction comes at the cost of a large amount of routing information: minimal routing on irregular networks is achieved by using routing tables covering all destinations in the network. In this work, a novel distributed routing method called LOREN (Layout-Oriented Routing with Entries for Neighbors) is proposed to achieve low latency with small routing tables on irregular networks whose link length is limited. The routing tables contain both physically and topologically nearby neighbor nodes to ensure livelock freedom and a small number of hops between nodes. Experimental results show that LOREN reduces average latency by 5.8% and improves network throughput by up to 62% compared with a conventional compact routing method. Moreover, the number of required routing table entries is reduced by up to 91%, which improves scalability and flexibility of implementation.

    Download PDF (1567K)
  • Sho SASAKI, Yuichi MIYAJI, Hideyuki UEHARA
    Type: PAPER
    Subject area: Wireless networks
    2017 Volume E100.D Issue 12 Pages 2808-2817
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    A number of battery-driven sensor nodes are deployed to operate a wireless sensor network, and many routing protocols have been proposed to reduce the energy consumed by data communications in such networks. We have proposed a new routing policy that employs nearest-neighbor forwarding based on hop progress. Our routing method has a topology parameter, the forwarding angle, that determines which node to connect to as the next hop, and we compare it with existing policies to clarify the most energy-efficient topology. In this paper, we also formulate the energy budget of networks under this routing policy by means of a stochastic-geometric analysis of hop-count distributions for random planar networks. The formulation tells us, in a pre-deployment phase, how much energy all nodes in the network require to forward sensed data. Simulation results show that the optimal topology varies with node density: direct communication to the sink is superior for a small network, and multihop routing becomes more effective as the network becomes sparser. Evaluation results also demonstrate that our energy formulation approximates the energy budget well, especially for small networks with a small forwarding angle. The error with a large forwarding angle is then discussed using a geographical metric. Finally, we show that our analytical expressions can obtain the forwarding angle that yields the best energy efficiency for the routing policy when the network is moderately dense.
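    The crossover between direct transmission and multihop forwarding can be illustrated with the common first-order radio energy model; this is a generic sketch, not the paper's stochastic-geometric analysis, and the energy constants below are typical assumed values.

```python
def tx_energy(bits, dist, e_elec=50e-9, e_amp=100e-12, alpha=2):
    """First-order radio model: electronics cost plus amplifier cost ~ d^alpha."""
    return e_elec * bits + e_amp * bits * dist ** alpha

def route_energy(bits, dist, hops, e_elec=50e-9, e_amp=100e-12, alpha=2):
    """Energy of a route of `hops` equal-length hops, counting receive
    electronics at each intermediate relay."""
    per_hop = tx_energy(bits, dist / hops, e_elec, e_amp, alpha)
    rx = e_elec * bits * (hops - 1)
    return hops * per_hop + rx

bits = 1000
# Short source-sink distance: relaying overhead dominates, direct wins.
print(route_energy(bits, 20.0, 1) < route_energy(bits, 20.0, 2))
# Long distance: the d^alpha amplifier term dominates, relaying wins.
print(route_energy(bits, 100.0, 2) < route_energy(bits, 100.0, 1))
```

    The same qualitative effect drives the paper's result that direct communication suits small (dense) networks while multihop pays off as networks grow sparser.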

    Download PDF (2509K)
  • Fumiya TESHIMA, Hiroyasu OBATA, Ryo HAMAMOTO, Kenji ISHIDA
    Type: PAPER
    Subject area: Wireless networks
    2017 Volume E100.D Issue 12 Pages 2818-2827
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Streaming services that use TCP have increased; however, with TCP, throughput is unstable because congestion control is triggered by packet loss. Thus, a TCP control scheme that secures a required transmission rate for streaming communication using Forward Error Correction (FEC) technology, TCP-AFEC, has been proposed. TCP-AFEC can set an appropriate transmission rate according to network conditions using a combination of TCP congestion control and FEC. However, TCP-AFEC was not developed for wireless Local Area Network (LAN) environments; it requires a certain time to set the appropriate redundancy and cannot obtain the required throughput. In this paper, we demonstrate the drawbacks of TCP-AFEC in wireless LAN environments. Then, we propose a redundancy setting method that can secure the required throughput with FEC, called TCP-TFEC. Finally, we show that TCP-TFEC secures more stable throughput than TCP-AFEC.
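    The core of a redundancy-setting decision for block-based FEC can be sketched as a generic erasure-coding calculation under an i.i.d. packet-loss assumption; this is not the TCP-TFEC method itself, and the block size, loss rate, and target are invented.

```python
from math import comb

def block_loss_prob(k, r, p):
    """Probability that an erasure-coded block of k data + r repair packets is
    unrecoverable, i.e. more than r of the k+r packets are lost (i.i.d. loss p)."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1, n + 1))

def min_redundancy(k, p, target=0.01, r_max=64):
    """Smallest repair count r whose residual block-loss probability meets target."""
    for r in range(r_max + 1):
        if block_loss_prob(k, r, p) <= target:
            return r
    return None

r = min_redundancy(k=20, p=0.05, target=0.01)
print(r)
```

    A sender that measures the loss rate can recompute r per block; the difficulty TCP-TFEC addresses is doing this quickly and stably on a wireless LAN.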

    Download PDF (1491K)
  • Yusuke MATSUSHITA, Hayate OKUHARA, Koichiro MASUYAMA, Yu FUJITA, Ryuta ...
    Type: PAPER
    Subject area: Architecture
    2017 Volume E100.D Issue 12 Pages 2828-2836
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Body biasing can control leakage power and performance by changing the threshold voltage of transistors after fabrication. In particular, the Silicon-On-Thin-BOX (SOTB) CMOS process can control this balance over a wide range. When body biasing is applied to a Coarse-Grained Reconfigurable Array (CGRA), leakage power can be greatly reduced by precise bias control over small domains containing only a few PEs. On the other hand, the area overhead of separating power domains and routing the many wires that supply the body bias voltages increases. This paper explores the domain granularity of an energy-efficient CGRA called CMA (Cool Mega Array). Using a genetic-algorithm-based body bias assignment method, we evaluated the leakage reduction for various domain sizes. As a result, domains of 2x1 PEs achieved about 40% power reduction with a 6% area overhead. It appears that a combination of three body bias voltages (zero bias, weak reverse bias, and strong reverse bias) achieves the best balance of leakage reduction and area overhead in most cases.

    Download PDF (1130K)
  • Runzi ZHANG, Jinlin WANG, Yiqiang SHENG, Xiao CHEN, Xiaozhou YE
    Type: PAPER
    Subject area: Architecture
    2017 Volume E100.D Issue 12 Pages 2837-2846
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Cache affinity has been proven to have a great impact on the performance of packet processing applications on multi-core platforms. Flow-based packet scheduling can make the best of data cache affinity for flow-associated data and context structures. However, little work on packet scheduling algorithms has addressed instruction cache (I-Cache) affinity in the modified pipelining (MPL) architecture for multi-core systems. In this paper, we propose a protocol-aware packet scheduling (PAPS) algorithm that maximizes I-Cache affinity at the protocol-dependent stages of the MPL architecture in the multi-protocol processing (MPP) scenario. The characteristics of applications in MPL are analyzed, and a mapping model is introduced to illustrate the procedure of MPP. Besides, a stage processing time model for MPL is presented based on an analysis of the multi-core cache hierarchy. PAPS is a flow-based packet scheduling algorithm that schedules flows considering both the application-level protocol of each flow and load balancing. Experiments demonstrate that PAPS outperforms the Round-Robin algorithm and the HRW-based (HRW) algorithm for MPP applications. In particular, PAPS can eliminate all I-Cache misses at the protocol-dependent stages and reduce the average CPU cycle consumption per packet by more than 10% in comparison with HRW.
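    The HRW baseline refers to Highest-Random-Weight (rendezvous) hashing, which maps each flow to a core deterministically and stably. The sketch below is the generic algorithm; the flow key and core names are hypothetical.

```python
import hashlib

def hrw_pick(flow_key, cores):
    """Highest-Random-Weight (rendezvous) hashing: a flow goes to the core
    with the largest hash of (flow, core). Mappings are deterministic, and
    removing a core only remaps the flows that were on that core."""
    def weight(core):
        h = hashlib.sha256(f"{flow_key}|{core}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return max(cores, key=weight)

cores = ["core0", "core1", "core2", "core3"]
flow = ("10.0.0.1", "10.0.0.2", 443, 51234, "tcp")  # hypothetical 5-tuple
chosen = hrw_pick(flow, cores)
print(chosen in cores, hrw_pick(flow, cores) == chosen)
```

    PAPS keeps HRW's flow stickiness (which preserves data cache affinity) but additionally steers flows of the same application-level protocol to the same cores, which is what recovers I-Cache affinity.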

    Download PDF (711K)
  • Takuma NAKAJIMA, Masato YOSHIMI, Celimuge WU, Tsutomu YOSHINAGA
    Type: PAPER
    Subject area: Information networks
    2017 Volume E100.D Issue 12 Pages 2847-2856
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Cooperative caching is a key technique to reduce the rapidly growing traffic of video-on-demand services by aggregating multiple cache storages. Existing strategies periodically calculate a sub-optimal allocation of content caches in the network. Although such techniques can reduce the traffic generated between servers, they come at the cost of a large computational overhead, which prevents the caches from following rapid changes in the access pattern. In this paper, we propose a lightweight scheme for cooperative caching that groups contents and servers with color tags. In our proposal, we associate servers and caches through color tags, aiming to increase the effective cache capacity by storing different contents on different servers. In addition to the color tags, we propose a novel hybrid caching scheme that divides its storage area into a colored LFU (Least Frequently Used) area and a no-color LRU (Least Recently Used) area. The colored LFU area stores color-matching contents to increase the cache hit rate, while the no-color LRU area follows rapid changes in access patterns by storing popular contents regardless of their tags. On top of the proposed architecture, we also present a new routing algorithm that exploits the color tag information to reduce traffic by fetching cached contents from the nearest server. Evaluation results using a backbone network topology showed that our color-tag-based caching scheme achieves performance close to the sub-optimal allocation obtained with a genetic algorithm, with only a few seconds of computational overhead. Furthermore, the proposed hybrid caching limits the degradation of the hit rate from 13.9% in conventional non-colored LFU to only 2.3%, which proves the capability of our scheme to follow rapid insertions of new popular contents. Finally, the color-based routing scheme reduces traffic by up to 31.9% compared with shortest-path routing.
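    The hybrid colored-LFU/no-color-LRU policy can be illustrated with a toy per-server sketch; the area sizes, tagging rule, and access trace are invented for illustration, and the paper's actual scheme additionally assigns tags network-wide.

```python
from collections import Counter, OrderedDict

class HybridColorCache:
    """Toy hybrid cache: a colored LFU area for contents whose color tag
    matches this server, and a color-agnostic LRU area for everything else."""
    def __init__(self, color, lfu_size, lru_size):
        self.color = color
        self.lfu_size, self.lru_size = lfu_size, lru_size
        self.lfu, self.freq = set(), Counter()
        self.lru = OrderedDict()

    def access(self, item, item_color):
        self.freq[item] += 1
        hit = item in self.lfu or item in self.lru
        if item in self.lru:
            self.lru.move_to_end(item)          # refresh recency
        elif item not in self.lfu:
            if item_color == self.color:        # matching color -> LFU area
                self.lfu.add(item)
                if len(self.lfu) > self.lfu_size:
                    self.lfu.remove(min(self.lfu, key=lambda x: self.freq[x]))
            else:                               # any color -> LRU area
                self.lru[item] = True
                if len(self.lru) > self.lru_size:
                    self.lru.popitem(last=False)
        return hit

cache = HybridColorCache(color="red", lfu_size=2, lru_size=2)
trace = [("A", "red"), ("B", "red"), ("A", "red"), ("C", "blue"),
         ("D", "blue"), ("E", "blue"), ("C", "blue"), ("A", "red")]
hits = [cache.access(i, c) for i, c in trace]
print(hits)
```

    Color-matching contents survive by frequency, while the LRU area absorbs whatever happens to be popular right now, regardless of tag.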

    Download PDF (866K)
  • Toru FUJITA, Koji NAKANO, Yasuaki ITO, Daisuke TAKAFUJI
    Type: PAPER
    Subject area: GPU computing
    2017 Volume E100.D Issue 12 Pages 2857-2865
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The main contribution of this paper is an efficient GPU implementation of bulk computation of CKY parsing, which determines, for each of many input strings, whether a given context-free grammar derives it. Bulk computation executes the same algorithm for many inputs in turn or at the same time; CKY parsing determines whether a context-free grammar derives a given string. We show that the bulk computation of CKY parsing can be implemented efficiently on the GPU using the Bitwise Parallel Bulk Computation (BPBC) technique. We also present a rule minimization technique and a dynamic scheduling method for further acceleration of CKY parsing on the GPU. Experimental results on an NVIDIA TITAN X GPU show that our implementation of bitwise-parallel CKY parsing for strings of length 32 takes 395µs per string with 131072 production rules for 512 non-terminal symbols.
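    For reference, plain (non-bitwise, CPU-side) CKY recognition for a grammar in Chomsky normal form looks as follows; the GPU version in the paper packs such table cells into machine words, which this sketch does not attempt, and the example grammar is hypothetical.

```python
def cky_accepts(grammar, start, s):
    """CKY recognition for a grammar in Chomsky normal form.
    grammar: dict nonterminal -> list of productions, each a 1-tuple
    (terminal) or a 2-tuple (pair of nonterminals)."""
    n = len(s)
    # table[i][j]: set of nonterminals deriving the substring s[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(n):                      # length-1 spans from terminal rules
        for lhs, prods in grammar.items():
            if (s[i],) in prods:
                table[i][i + 1].add(lhs)
    for span in range(2, n + 1):            # longer spans from binary rules
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, prods in grammar.items():
                    for prod in prods:
                        if (len(prod) == 2 and prod[0] in table[i][k]
                                and prod[1] in table[k][j]):
                            table[i][j].add(lhs)
    return start in table[0][n]

# Tiny hypothetical CNF grammar: S -> A B | a, A -> a, B -> b
g = {"S": [("A", "B"), ("a",)], "A": [("a",)], "B": [("b",)]}
print(cky_accepts(g, "S", "ab"), cky_accepts(g, "S", "ba"))
```

    The BPBC idea is that each table cell is a bit vector over nonterminals, so the inner rule checks collapse into word-wide bitwise operations across many strings at once.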

    Download PDF (583K)
  • Xuechun WANG, Yuan JI, Wendong CHEN, Feng RAN, Aiying GUO
    Type: LETTER
    Subject area: Architecture
    2017 Volume E100.D Issue 12 Pages 2866-2870
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Hardware implementations of neural networks usually have high computational complexity that increases exponentially with the size of the circuit, leading to more uncertain and unreliable circuit performance. This letter presents a novel Radial Basis Function (RBF) neural network based on parallel fault-tolerant stochastic computing, in which numbers are converted from the deterministic domain to the probabilistic domain. The Gaussian RBF for the middle-layer neurons is implemented using a stochastic structure that reduces the required hardware resources significantly. Our experimental results from two pattern recognition tests (the Thomas gestures and the MIT faces) show that the stochastic design maintains equivalent performance when the stream length is set to 10Kbits. The stochastic hidden neuron uses only 1.2% of the hardware resources of the CORDIC algorithm. Furthermore, the proposed design offers a flexible trade-off between computing accuracy, power consumption, and chip area.
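    The basic trick of stochastic computing that such designs build on is that multiplication becomes a bitwise AND of unipolar bitstreams; a minimal software sketch (the stream length and seed are arbitrary assumptions, and real hardware would use LFSR-style stream generators):

```python
import random

def to_stream(p, length, rng):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(p, q, length=10000, seed=42):
    """Stochastic multiplication: the AND of two independent unipolar
    streams has a ones-density of approximately p * q."""
    rng = random.Random(seed)
    a = to_stream(p, length, rng)
    b = to_stream(q, length, rng)
    return sum(x & y for x, y in zip(a, b)) / length

est = sc_multiply(0.8, 0.5)
print(round(est, 2))   # close to 0.4; accuracy grows with stream length
```

    The accuracy/length trade-off visible here is exactly the design knob the letter tunes with its 10Kbit streams.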

    Download PDF (627K)
  • Shinwook KIM, Tae-Gyu CHANG
    Type: LETTER
    Subject area: Architecture
    2017 Volume E100.D Issue 12 Pages 2871-2875
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This letter describes the development and implementation of a lane detection system accelerated by neuromorphic hardware. Because the neuromorphic hardware is inherently parallel and has constant output latency regardless of the size of the knowledge base, the proposed lane detection system can recognize various types of lanes quickly and efficiently. Experimental results using road images obtained in actual driving environments showed that white and yellow lanes could be detected with an accuracy of more than 94 percent.

    Download PDF (4343K)
Special Section on Frontiers in Agent-based Technology
  • Takayuki ITO
    2017 Volume E100.D Issue 12 Pages 2876-2877
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS
    Download PDF (64K)
  • Atsushi NOZAKI, Takanobu MIZUTA, Isao YAGI
    Type: PAPER
    Subject area: Information Network
    2017 Volume E100.D Issue 12 Pages 2878-2887
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    As financial products have grown in complexity and risk compounding in recent years, investors have come to find it difficult to assess investment risk. Furthermore, companies managing mutual funds are increasingly expected to perform risk control and thus prevent investors from assuming unforeseen risk. A related revision to the investment fund legal system in Japan established what is known as “the rule for investment diversification” in December 2014, without a clear discussion of its expected effects on market price formation having taken place. In this paper, we therefore used an artificial market to investigate its effects on price formation in financial markets where investors follow the rule at the time of a market crash caused by the collapse of an asset's fundamental price. As a result, we found that when the fundamental price of one asset collapses and its market price also collapses, some other asset market prices may also fall, whereas others may rise, in a market in which investors follow the rule for investment diversification.

    Download PDF (2231K)
  • Naoyuki NIDE, Shiro TAKATA
    Type: PAPER
    Subject area: Information Network
    2017 Volume E100.D Issue 12 Pages 2888-2896
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The Werewolf game is a kind of role-playing game in which players have to guess other players' roles from their speech acts (what they say). In this game, players have to estimate other players' beliefs and intentions, and try to modify others' intentions. The BDI model is a suitable one for this game, because it explicitly has notions of mental states, i.e. beliefs, desires and intentions. On the other hand, in this game, players' beliefs are not completely known. Consequently, in many cases it is difficult for players to choose a unique strategy; in other words, players frequently have to maintain probabilistic intentions. However, the conventional BDI model does not have the notion of probabilistic mental states. In this paper, we propose an extension of BDI logic that can handle probabilistic mental states and use it to model some situations in the Werewolf game. We also show examples of deductions concerning those situations. We expect that this study will serve as a basis for developing a Werewolf game agent based on BDI logic in the future.

    Download PDF (605K)
  • Maxime CLEMENT, Tenda OKIMOTO, Katsumi INOUE
    Type: PAPER
    Subject area: Information Network
    2017 Volume E100.D Issue 12 Pages 2897-2905
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Many real world optimization problems involving sets of agents can be modeled as Distributed Constraint Optimization Problems (DCOPs). A DCOP is defined as a set of variables taking values from finite domains, and a set of constraints that yield costs based on the variables' values. Agents are in charge of the variables and must communicate to find a solution minimizing the sum of costs over all constraints. Many applications of DCOPs include multiple criteria. For example, mobile sensor networks must optimize the quality of the measurements and the quality of communication between the agents. This introduces trade-offs between solutions that are compared using the concept of Pareto dominance. Multi-Objective Distributed Constraint Optimization Problems (MO-DCOPs) are used to model such problems where the goal is to find the set of Pareto optimal solutions. This set being exponential in the number of variables, it is important to consider fast approximation algorithms for MO-DCOPs. The bounded multi-objective max-sum (B-MOMS) algorithm is the first and only existing approximation algorithm for MO-DCOPs and is suited for solving a less-constrained problem. In this paper, we propose a novel approximation MO-DCOP algorithm called Distributed Pareto Local Search (DPLS) that uses a local search approach to find an approximation of the set of Pareto optimal solutions. DPLS provides a distributed version of an existing centralized algorithm by complying with the communication limitations and the privacy concerns of multi-agent systems. Experiments on a multi-objective extension of the graph-coloring problem show that DPLS finds significantly better solutions than B-MOMS for problems with medium to high constraint density while requiring a similar runtime.
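    The Pareto-dominance comparison that defines an MO-DCOP's solution set can be sketched as follows; this is the generic minimization definition with invented cost vectors, not the DPLS algorithm itself.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Invented two-objective cost vectors of candidate assignments
costs = [(1, 9), (2, 7), (3, 8), (4, 3), (6, 2), (7, 5)]
print(pareto_front(costs))
```

    A Pareto local search such as DPLS repeatedly perturbs assignments and keeps an archive filtered by exactly this dominance test, which is why the front can grow exponentially with the number of variables.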

    Download PDF (418K)
  • Ryusuke IMADA, Katsuhide FUJITA
    Type: PAPER
    Subject area: Information Network
    2017 Volume E100.D Issue 12 Pages 2906-2914
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Sponsored search is a mechanism that shows appropriate advertisements (ads) according to search queries. The order and payments of the ads are determined by an auction. However, externalities, which affect the click-through rate (CTR), have not been considered in some existing works because mechanisms with externalities have high computational cost. In addition, although some algorithms have been proposed that calculate an approximate solution considering externalities in polynomial time, they assume that each bidder can propose only a single ad. In this paper, we propose an approximate allocation algorithm in which one bidder can offer many ads, considering externalities. The proposed algorithm employs the concept of a combinatorial auction in order to handle combinational bids, and it finds an approximate allocation by dynamic programming. Moreover, we prove the computational complexity and monotonicity of the proposed mechanism, and demonstrate its computational cost and efficiency ratio while varying the number of ads, slots, and maximum bids. The experimental results show that the proposed algorithm can calculate a 0.7-approximation solution even when a full search cannot find a solution within the time limit.

    Download PDF (1288K)
  • Susel FERNANDEZ, Takayuki ITO
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 12 Pages 2915-2922
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Intelligent transportation systems (ITS) are a set of technological solutions used to improve the performance and safety of road transportation. Since sensors are among the most important information sources in ITS, integrating and sharing sensor data is a big challenge in applying sensor networks to these systems. In order to make full use of the sensor data, it is crucial to convert them into semantic data that can be understood by computers. In this work, we propose to use the SSN ontology to manage sensor information in an intelligent transportation architecture. The system was tested in a traffic light setting application, allowing it to predict and avoid traffic accidents as well as to optimize routing.

    Download PDF (1226K)
  • Naoki YAMADA, Yuji YAMAGATA, Naoki FUKUTA
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 12 Pages 2923-2930
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    On an inference-enabled Linked Open Data (LOD) endpoint, query execution usually takes longer than on an LOD endpoint without an inference engine, due to the processing of reasoning. Although two separate kinds of approaches, query modification and ontology modification, have been investigated in different contexts, how they can be chosen or combined for various settings has remained under discussion. In this paper, to reduce query execution time on an inference-enabled LOD endpoint, we compare these two promising methods, query rewriting and ontology modification, and also try to combine them in a cluster of such systems. We employ an evolutionary approach that rewrites queries and modifies ontologies based on past processed queries and their results. We show how the two approaches work well for implementing an inference-enabled LOD endpoint as a cluster of SPARQL endpoints.

    Download PDF (2065K)
Regular Section
  • Jungkyu HAN, Hayato YAMANA
    Type: SURVEY PAPER
    Subject area: Data Engineering, Web Information Systems
    2017 Volume E100.D Issue 12 Pages 2931-2944
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In recommending to another individual an item that one loves, accuracy is important; however, in most cases, focusing only on accuracy generates less satisfactory recommendations. Studies have repeatedly pointed out that aspects that go beyond accuracy, such as the diversity and novelty of the recommended items, are as important as accuracy in making a satisfactory recommendation. Despite their importance, there is no global consensus about definitions and evaluations regarding beyond-accuracy aspects, as such aspects closely relate to the subjective sensibility of user satisfaction. In addition, devising algorithms for this purpose is difficult, because algorithms must concurrently pursue aspects that are in a trade-off relation (e.g., accuracy vs. novelty). In this situation, for researchers initiating a study in this domain, it is important to obtain a systematically integrated view of the domain. This paper reports the results of a survey of about 70 studies published over the last 15 years, each of which addresses recommendations that consider beyond-accuracy aspects. From this survey, we identify diversity, novelty, and coverage as important aspects in achieving serendipity and popularity unbiasedness, factors that are important to user satisfaction and business profits, respectively. The five major groups of algorithms that tackle the beyond-accuracy aspects are multi-objective, modified collaborative filtering (CF), clustering, graph, and hybrid; we classify and describe algorithms as per this typology. The off-line evaluation metrics and user studies carried out by the studies are also described. Based on the survey results, we assert that there is much room for research in this domain. In particular, personalization and generalization are important issues that should be addressed in future research (e.g., automatic per-user trade-off among the aspects, and properly establishing beyond-accuracy aspects for various types of applications or algorithms).
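    Two commonly used beyond-accuracy metrics of the kind this survey covers, intra-list diversity and catalog coverage, can be sketched as follows; the genre metadata and recommendation lists are invented, and exact definitions vary across the surveyed studies.

```python
def intra_list_diversity(items, dissim):
    """Average pairwise dissimilarity of a recommendation list
    (higher = more diverse)."""
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    return sum(dissim(a, b) for a, b in pairs) / len(pairs)

def catalog_coverage(rec_lists, catalog_size):
    """Fraction of the catalog appearing in at least one user's list."""
    return len({i for lst in rec_lists for i in lst}) / catalog_size

# Toy genre-overlap dissimilarity over hypothetical item metadata.
genres = {"m1": {"action"}, "m2": {"action", "comedy"}, "m3": {"drama"}}
def dissim(a, b):
    ga, gb = genres[a], genres[b]
    return 1 - len(ga & gb) / len(ga | gb)   # 1 - Jaccard similarity

print(round(intra_list_diversity(["m1", "m2", "m3"], dissim), 3))
print(catalog_coverage([["m1", "m2"], ["m2", "m3"]], catalog_size=10))
```

    Metrics like these are what the multi-objective and re-ranking algorithm families in the survey trade off against accuracy.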

    Download PDF (715K)
  • Fumito TAKEUCHI, Masaaki NISHINO, Norihito YASUDA, Takuya AKIBA, Shin- ...
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2017 Volume E100.D Issue 12 Pages 2945-2952
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This paper deals with the constrained DAG shortest path problem (CDSP), which finds the shortest path on a given directed acyclic graph (DAG) under logical constraints posed on the edges taken. A previous work uses binary decision diagrams (BDDs) to represent the logical constraints, and traverses the input DAG and the BDD simultaneously. The time and space complexity of this BDD-based method is derived from the BDD size, and it tends to be fast only when the BDD is small. However, since it does not prioritize the search order, there is considerable room for improvement, particularly for large BDDs. We combine the well-known A* search with the BDD-based method synergistically, and implement several novel heuristic functions. The key insight here is that the ‘shortest path’ in the BDD is a solution of a relaxed problem, just as the shortest path in the DAG is. Experiments, particularly on practical machine learning applications, show that the proposed method decreases search time by up to two orders of magnitude, with the specific result that it is 2,000 times faster than a commercial solver. Moreover, the proposed method can reduce peak memory usage to as little as 1/40 of that of the conventional method.
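    One of the relaxations mentioned above, the unconstrained shortest path on the DAG, is computable in linear time by relaxing edges in topological order and can serve as an admissible A* heuristic for the constrained problem; a generic sketch with an invented graph:

```python
def dag_shortest(succ, weight, order, source, target):
    """Single-source shortest path on a DAG: relax edges in topological order.
    Ignoring the logical constraints, this distance never overestimates the
    constrained optimum, so it is an admissible A* heuristic for CDSP."""
    INF = float("inf")
    dist = {v: INF for v in order}
    dist[source] = 0
    for u in order:
        if dist[u] == INF:
            continue
        for v in succ.get(u, []):
            if dist[u] + weight[(u, v)] < dist[v]:
                dist[v] = dist[u] + weight[(u, v)]
    return dist[target]

# Invented DAG: s -> {a, b} -> t, already in topological order.
succ = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
weight = {("s", "a"): 1, ("s", "b"): 4, ("a", "t"): 5, ("b", "t"): 1}
print(dag_shortest(succ, weight, ["s", "a", "b", "t"], "s", "t"))
```

    The paper's heuristics combine such DAG-side distances with BDD-side relaxations to prioritize the joint search.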

    Download PDF (759K)
  • Kentaro KATO, Somsak CHOOMCHUAY
    Type: PAPER
    Subject area: Computer System
    2017 Volume E100.D Issue 12 Pages 2953-2961
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This paper analyzes a time-domain Reed-Solomon decoder with an FPGA implementation. Data throughput and area are carefully evaluated and compared with those of a typical frequency-domain Reed-Solomon decoder. Three hardware architectures for enhancing data throughput, namely the pipelined architecture, the parallel architecture, and truncated arrays, are also evaluated. The evaluation reveals that the resource consumption of the time-domain RS(255, 239) decoder is about 20% smaller than that of the frequency-domain decoder, although its data throughput is less than 10% of the frequency-domain decoder's. The resource consumption of the pipelined architecture is 28% smaller than that of the parallel architecture at the same data throughput, because the pipelined architecture requires less extra logic. Therefore, to achieve higher data throughput, the pipelined architecture is preferable to the parallel architecture from the viewpoint of consumed resources.

    Download PDF (789K)
  • Kha Cong NGUYEN, Cuong Tuan NGUYEN, Masaki NAKAGAWA
    Type: PAPER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 12 Pages 2962-2972
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This paper presents a method for segmenting single- and multiple-touching characters in offline handwritten Japanese text recognition at practical speed. Distortions due to handwriting, and the mixture of complex Chinese characters with simple phonetic and alphanumeric characters, leave offline handwritten text recognition (OHTR) for Japanese still far from perfect. Segmenting characters that touch their neighbors at multiple points is a serious unsolved problem. We therefore propose a segmentation method consisting of two steps: coarse segmentation and fine segmentation. The coarse segmentation employs vertical projection and stroke-width estimation, while the fine segmentation takes a graph-based approach on thinned text images, employing a new bridge-finding process and Voronoi diagrams with two improvements. Unlike previous methods, it locates character centers and seeks segmentation candidates between them. It explicitly draws vertical lines at the estimated character centers to prevent vertically unconnected components from being left behind in bridge finding. Multiple separation candidates are produced by combinatorially removing touching points, and an SVM is applied to discard improbable segmentation boundaries. Remaining ambiguities are finally resolved by text recognition that employs linguistic and geometric context to recognize the segmented characters. Our experimental results show that the proposed method can segment not only single-touching but also multiple-touching characters, and that each component of the method contributes to the improvement of the segmentation and recognition rates.

    Download PDF (2692K)
  • Hongmin LIU, Lulu CHEN, Zhiheng WANG, Zhanqiang HUO
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 12 Pages 2973-2983
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In this paper, the concept of gradient order is introduced and a novel gradient order curve descriptor (GOCD) for curve matching is proposed. The GOCD is constructed in the following main steps: first, a curve support region independent of the dominant orientation is determined and divided into several sub-regions based on gradient magnitude order; then the gradient order feature (GOF) of each feature point is generated by encoding the local gradient information of the sample points; the descriptor is finally obtained from the description matrix of the GOFs. Since GOCD captures both local and global gradient information, it is more distinctive and robust than existing curve matching methods. Experiments under various changes, such as illumination, viewpoint, image rotation, JPEG compression, and noise, show the strong performance of GOCD. Furthermore, its application to image mosaicking demonstrates that GOCD can be used successfully in practice.

    Download PDF (2962K)
  • Natsuki TAKAYAMA, Hiroki TAKAHASHI
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 12 Pages 2984-2992
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Partial blur segmentation is one of the most interesting topics in computer vision, and it has practical value. The generation of blur maps is a crucial part of partial blur segmentation, because partial blur segmentation involves producing a blur map and then applying a segmentation algorithm to it. In this study, we address two important issues in order to improve the discrimination of blur maps: (1) estimating a local blur feature that is robust to variations in the intensity amplitude and (2) devising a scheme for generating blur maps. We propose the ANGHS (Amplitude-Normalized Gradient Histogram Span) as a local blur feature. ANGHS represents the heavy-tailedness of a gradient distribution, computed from an image gradient normalized by the intensity amplitude. ANGHS is robust to variations in the intensity amplitude and can handle local regions more appropriately than previously proposed local blur features. Blur maps are affected not only by local blur features but also by the contents and sizes of local regions and by the assignment of blur feature values to pixels. Thus, multiple-sized grids and edge-aware interpolation (EAI) are employed in these tasks to improve the discrimination of blur maps. The discrimination of the generated blur maps is evaluated visually and statistically on numerous partial blur images. Comparisons with the results obtained by state-of-the-art methods demonstrate the high discrimination of the blur maps generated by the proposed method.
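    The idea of an amplitude-normalized, heavy-tailedness-based blur feature can be sketched as follows. This is an illustrative proxy in the spirit of ANGHS, not the paper's exact formula; the percentile-span measure and patch contents are assumptions:

```python
import numpy as np

def normalized_gradient_heavy_tailedness(patch, eps=1e-6):
    """Illustrative blur feature: normalize gradients by the patch's
    intensity amplitude, then measure the spread of the gradient-magnitude
    distribution. Sharp patches have heavy-tailed gradient distributions,
    so the span between high and middle percentiles is large."""
    patch = patch.astype(float)
    amplitude = patch.max() - patch.min() + eps      # intensity amplitude
    gy, gx = np.gradient(patch / amplitude)          # amplitude-normalized gradient
    mag = np.hypot(gx, gy).ravel()
    return np.percentile(mag, 99) - np.percentile(mag, 50)

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))                         # high-frequency content
blurred = np.tile(np.linspace(0, 1, 32), (32, 1))    # smooth ramp
print(normalized_gradient_heavy_tailedness(sharp) >
      normalized_gradient_heavy_tailedness(blurred))  # True
```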

    Download PDF (3724K)
  • Qun SHI, Norimichi UKITA, Ming-Hsuan YANG
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 12 Pages 2993-3000
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This paper proposes a natural facial and head behavior recognition method using hybrid dynamical systems. Most existing facial and head behavior recognition methods focus on analyzing deliberately displayed prototypical emotion patterns rather than the complex and spontaneous facial and head behaviors found in natural conversation. We first capture spatio-temporal features on important facial parts via dense feature extraction. Next, we cluster the spatio-temporal features using hybrid dynamical systems and construct a dictionary of motion primitives to cover all possible elemental motion dynamics accounting for facial and head behaviors. With this dictionary, a facial and head behavior can be interpreted as a distribution over motion primitives. This interpretation is robust to the varying rhythms of dynamic patterns in complex and spontaneous facial and head behaviors. We evaluate the proposed approach in natural telecommunication scenarios and achieve promising results. Furthermore, the proposed method performs favorably against state-of-the-art methods on three benchmark databases.

    Download PDF (1126K)
  • Takuma EBISU, Ryutaro ICHISE
    Type: PAPER
    Subject area: Natural Language Processing
    2017 Volume E100.D Issue 12 Pages 3001-3009
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Knowledge graphs have been shown to be useful for many tasks in artificial intelligence. Triples of knowledge graphs are traditionally curated by human editors or extracted from semi-structured information; however, editing is expensive, and semi-structured information is not common. On the other hand, most such information is stored as plain text. Hence, it is necessary to develop methods that extract knowledge from text and then construct or populate a knowledge graph; this has been attempted in various ways. Currently, there are two approaches to constructing a knowledge graph: open information extraction (Open IE) and knowledge graph embedding; however, neither is without problems. Stanford Open IE, the current best such system, requires labeled sentences as training data, and knowledge graph embedding systems require numerous triples. Recently, distributed representations of words have become a hot topic in natural language processing, since they do not require labeled data for training. They require only plain text, and Mikolov et al. showed that they perform well on the word analogy task, answering questions such as "a is to b as c is to __?". This can be regarded as a knowledge extraction task from text: finding the missing entity of a triple. However, the accuracy is not sufficiently high when the method is applied in a straightforward manner to relations in knowledge graphs, since it uses only one triple as a positive example. In this paper, we analyze why distributed representations perform such tasks well, and we propose a new method for extracting knowledge from text that requires much less annotated data. Experiments show that the proposed method achieves considerable improvement over the baseline; in particular, the improvement in HITS@10 was more than twofold for some relations.
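    The word analogy mechanism the abstract refers to can be sketched with vector offsets and a nearest-cosine-neighbor lookup. The toy vectors below are made up for illustration; they are not trained embeddings:

```python
import numpy as np

# Toy embedding table (hand-crafted, not learned).
emb = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "woman": np.array([1.0, 1.0, 0.1]),
    "king":  np.array([1.0, 0.0, 0.9]),
    "queen": np.array([1.0, 1.0, 0.9]),
    "apple": np.array([0.1, 0.2, 0.0]),  # distractor
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the offset vector b - a + c
    and a cosine-similarity nearest-neighbor search (Mikolov-style)."""
    target = emb[b] - emb[a] + emb[c]
    target /= np.linalg.norm(target)
    return max(
        (w for w in emb if w not in (a, b, c)),   # exclude the query words
        key=lambda w: emb[w] @ target / np.linalg.norm(emb[w]),
    )

print(analogy("man", "woman", "king"))  # queen
```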

    Download PDF (431K)
  • Khairun Nisa' MINHAD, Jonathan Shi Khai OOI, Sawal Hamid MD ALI, Mamun ...
    Type: PAPER
    Subject area: Biological Engineering
    2017 Volume E100.D Issue 12 Pages 3010-3017
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Malaysia is one of the countries with the highest car crash fatality rates in Asia. The high implementation cost of in-vehicle driver behavior warning systems and autonomous driving remains a significant challenge. Motivated by the large number of simple yet effective inventions that have benefited many developing countries, this study presents findings on emotion recognition based on the skin conductance response using a low-cost wearable sensor. Emotions were evoked by presenting the proposed display stimulus and a driving simulator. Meaningful power spectral density features were extracted from the filtered signal. Experimental protocols and frameworks were established to reduce the complexity of the emotion elicitation process. The proof of concept in this work demonstrated high accuracy in both two-class and multiclass emotion classification. Significant differences between features were identified using statistical analysis. The proposed protocol and framework are among the easiest to use, yet have high potential to serve as a biomarker in intelligent automobiles, helping prevent accidents and save lives through their simplicity.

    Download PDF (4240K)
  • Jaehwan LEE, Joohwan KIM, Ji Sun SHIN
    Type: LETTER
    Subject area: Computer System
    2017 Volume E100.D Issue 12 Pages 3018-3021
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The ability to efficiently process exponentially increasing amounts of data remains a challenging issue for computing platforms. In legacy platforms, large amounts of data can cause performance bottlenecks at the I/O interfaces between CPUs and storage devices. To overcome this problem, the in-storage computing (ISC) technique has been introduced, which offloads some of the computation from the CPUs to the storage devices. In this paper, we propose DiSC, a distributed in-storage computing platform using cost-effective hardware. First, we designed a general-purpose ISC device, the DiSC endpoint, by combining an inexpensive single-board computer (SBC) with a hard disk. Second, a Mesos-based resource manager was adapted to the DiSC platform to schedule DiSC endpoint tasks. To enable comparison with a general CPU-based platform, a DiSC testbed was constructed and experiments were carried out on essential applications. The experimental results show that DiSC attains cost-efficient performance advantages over a desktop, particularly for searching and filtering workloads.

    Download PDF (881K)
  • Yuan SUN, Xing-she ZHOU, Gang YANG
    Type: LETTER
    Subject area: Software System
    2017 Volume E100.D Issue 12 Pages 3022-3026
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In this letter, we investigate the computation offloading problem in cloud-based multi-robot systems, in which user weights, communication interference, and cloud resource limitations are jointly considered. To minimize the system cost, two offloading selection and resource allocation algorithms are proposed. Numerical results show that both proposed algorithms can greatly reduce the overall system cost, and that the greedy-selection-based algorithm achieves near-optimal performance.

    Download PDF (281K)
  • Zhuo ZHANG, Yan LEI, Qingping TAN, Xiaoguang MAO, Ping ZENG, Xi CHANG
    Type: LETTER
    Subject area: Software Engineering
    2017 Volume E100.D Issue 12 Pages 3027-3031
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Fault localization is essential for addressing software faults. To improve fault localization, this paper proposes a deep-learning-based fault localization approach that incorporates contextual information. Specifically, our approach uses a deep neural network to construct a suspiciousness evaluation model that evaluates how suspicious each statement is of being faulty, and then leverages dynamic backward slicing to extract contextual information. Empirical results show that our approach significantly outperforms the state-of-the-art technique Dstar.

    Download PDF (4592K)
  • Jun WANG, Guoqing WANG, Leida LI
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 12 Pages 3032-3035
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    A quantized index for evaluating the pattern similarity of two different datasets is designed by counting the number of correlated dictionary atoms. Guided by this index, task-specific biometric recognition models transferred from state-of-the-art DNN models are realized for both face and vein recognition.

    Download PDF (584K)
  • Yang LI, Zhuang MIAO, Jiabao WANG, Yafei ZHANG, Hang LI
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 12 Pages 3036-3040
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The latest deep hashing methods learn hash codes and image features simultaneously using pairwise or triplet labels. However, generating all possible pairwise or triplet labels from the training dataset quickly becomes intractable, and the majority of those samples produce small costs, resulting in slow convergence. In this letter, we propose a novel deep discriminative supervised hashing method, called DDSH, which directly learns hash codes based on a new combined loss function. Compared to previous methods, our method takes full advantage of the annotated data in terms of both pairwise similarity and image identities. Extensive experiments on standard benchmarks demonstrate that our method preserves instance-level similarity and outperforms state-of-the-art deep hashing methods in image retrieval. Remarkably, our 16-bit binary representation surpasses the performance of existing 48-bit binary representations, demonstrating that our method can effectively improve both the speed and the precision of large-scale image retrieval systems.
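    The retrieval side of deep hashing, ranking database images by Hamming distance between short binary codes, can be sketched as below. The 16-bit codes are arbitrary integers standing in for a trained network's output:

```python
# Minimal sketch of hash-code retrieval: each image is a 16-bit binary
# code, and results are ranked by Hamming distance to the query code.
def hamming(a, b):
    """Number of differing bits between two integer-encoded codes."""
    return bin(a ^ b).count("1")

database = {"img1": 0b1010101010101010,
            "img2": 0b1010101010101000,   # 1 bit away from img1
            "img3": 0b0101010101010101}   # 16 bits away from img1
query = 0b1010101010101010

ranked = sorted(database, key=lambda k: hamming(database[k], query))
print(ranked)  # ['img1', 'img2', 'img3']
```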

    Download PDF (403K)
  • Seongkyu MUN, Suwon SHON, Wooil KIM, David K. HAN, Hanseok KO
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 12 Pages 3041-3044
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Various types of classifiers and feature extraction methods for acoustic scene classification were recently proposed in the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge Task 1. The final evaluation results, however, showed that even the top-10 ranked teams achieved extremely low accuracy on particular class pairs with similar sounds. Because such sound classes are difficult to distinguish even by human ears, the conventional deep-learning-based feature extraction methods used by most DCASE participants appear to face performance limitations. To address the low performance on similar class pairs, this letter proposes to employ recurrent neural network (RNN) based source separation for each class prior to the classification step. Since the system can effectively extract trained sound components through the RNN structure, the mid-layer of the RNN can be considered to capture discriminative information about the trained class. This letter therefore proposes to use this mid-layer information as a novel discriminative feature. The proposed feature yields an average classification rate improvement of 2.3% over the conventional method, which uses additional classifiers for the similar-class-pair issue.

    Download PDF (844K)
  • Tsubasa MIYAUCHI, Ayato ONO, Hiroki YOSHIMURA, Masashi NISHIYAMA, Yosh ...
    Type: LETTER
    Subject area: Human-computer Interaction
    2017 Volume E100.D Issue 12 Pages 3045-3049
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    We propose a method for embedding awareness and response states in an image-based avatar so that it can smoothly and automatically start an interaction with a user. When these states are not embedded, the image-based avatar can be non-responsive or slow to respond. To study the beginning of an interaction, we observed the behaviors between users and a receptionist in an information center. Our method replays the receptionist's behaviors at appropriate times in each state of the image-based avatar. Experimental results demonstrate that, at the beginning of an interaction, embedding the awareness and response states increased subjective scores more than not embedding them.

    Download PDF (1373K)
  • Ryo YAMAZAKI, Tetsuya WATANABE
    Type: LETTER
    Subject area: Rehabilitation Engineering and Assistive Technology
    2017 Volume E100.D Issue 12 Pages 3050-3053
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The purpose of this study is to investigate the effect of device size on non-visual icon search using a touch interface with voice output. We conducted an experiment in which twelve participants searched for target icons on four different-sized touchscreen devices, and we analyzed search times, search strategies, and subjective evaluations. The mobile device with a 4.7-inch screen yielded the shortest search time and received the highest subjective evaluation among the four devices.

    Download PDF (298K)
  • Fuqiang LI, Tongzhuang ZHANG, Yong LIU, Guoqing WANG
    Type: LETTER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 12 Pages 3054-3058
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    We investigate a previously ignored side effect of contrast enhancement in representative SIFT-based vein recognition models: the introduction of mismatches. To exploit contrast enhancement for increasing keypoint generation while avoiding this side effect, a hierarchical keypoint selection and mismatch removal strategy is designed, obtaining state-of-the-art recognition results.

    Download PDF (991K)
  • Viet-Hang DUONG, Manh-Quan BUI, Jian-Jiun DING, Yuan-Shan LEE, Bach-Tu ...
    Type: LETTER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 12 Pages 3059-3063
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    This work presents a new approach that derives a learned data representation through matrix factorization on the complex domain. In particular, we introduce an encoding matrix, a new representation of the data, that satisfies the simplicial constraint of the projective basis matrix over the field of complex numbers. A complex optimization framework is provided; it employs the gradient descent method and computes the derivative of the cost function using Wirtinger's calculus.

    Download PDF (653K)
  • Ki-Seung LEE
    Type: LETTER
    Subject area: Speech and Hearing
    2017 Volume E100.D Issue 12 Pages 3064-3067
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    One of the problems with voice conversion from a nonparallel corpus is finding the best match or alignment between the source and target vector sequences without linguistic information. In a previous study, alignment was achieved by minimizing the distance between the source vector and the transformed vector. This method, however, yielded a sequence of feature vectors that was not well matched with the underlying speaker model. In this letter, the vectors are instead selected from the candidates by maximizing the overall likelihood of the selected vectors with respect to the target model in an HMM context. Both objective and subjective evaluations were carried out on the CMU ARCTIC database to verify the effectiveness of the proposed method.

    Download PDF (165K)
  • Mingye JU, Zhenfei GU, Dengyin ZHANG, Jian LIU
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 12 Pages 3068-3072
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In this letter, we propose a novel technique for increasing the visibility of hazy images. Building on the atmospheric scattering model and the invariance of scene structure, we formulate structure constraint equations derived from two simulated inputs obtained by applying gamma correction to the input image. Relying on the inherent boundary constraint of the scattering function, the expected scene albedo can be well restored via these constraint equations. Extensive experimental results verify the power of the proposed dehazing technique.
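    The step of deriving two simulated inputs from one hazy image via gamma correction can be sketched as follows; the gamma values and sample pixels are illustrative assumptions, not the letter's parameters:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Pointwise gamma correction on an image with intensities in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

# Two simulated inputs derived from a single hazy image, as required by
# structure-constraint formulations of this kind.
hazy = np.array([[0.25, 0.64],
                 [0.81, 0.49]])
bright = gamma_correct(hazy, 0.5)   # gamma < 1 brightens: 0.25 -> 0.5
dark = gamma_correct(hazy, 2.0)     # gamma > 1 darkens:  0.25 -> 0.0625
print(bright)
print(dark)
```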

    Download PDF (2643K)
  • Can CHEN, Dengyin ZHANG, Jian LIU
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 12 Pages 3073-3076
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    The multi-hypothesis prediction technique, which efficiently exploits inter-frame correlation, is widely used in block-based distributed compressive video sensing. To address the inaccuracy of multi-hypothesis prediction at low sampling rates and to enhance the reconstruction quality of non-key frames, we present a resampling-based hybrid multi-hypothesis scheme for block-based distributed compressive video sensing. The innovations of this paper are: (1) multi-hypothesis reconstruction based on measurement reorganization (MR-MH), which integrates side information into the original measurements; and (2) hybrid multi-hypothesis (H-MH) reconstruction, which adaptively mixes multiple multi-hypothesis reconstructions by resampling each reconstruction. Experimental results show that the proposed scheme outperforms the state-of-the-art technique at the same low sampling rate.

    Download PDF (277K)
  • Xiaoqing YE, Jiamao LI, Han WANG, Xiaolin ZHANG
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 12 Pages 3077-3080
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    Accurate stereo matching remains challenging in weakly textured areas and at discontinuities and occlusions. In this letter, a novel stereo matching method is presented, consisting of a feature ensemble network to compute matching costs, an error detection network to predict outliers, and priority-based occlusion disambiguation for refinement. Experiments on the Middlebury benchmark demonstrate that the proposed method yields competitive results against state-of-the-art algorithms.

    Download PDF (908K)
  • Donghyun YOO, Youngjoong KO, Jungyun SEO
    Type: LETTER
    Subject area: Natural Language Processing
    2017 Volume E100.D Issue 12 Pages 3081-3084
    Published: December 01, 2017
    Released: December 01, 2017
    JOURNALS FREE ACCESS

    In this paper, we propose a deep-learning-based model for classifying speech acts using a convolutional neural network (CNN). The model uses bigram features, including part-of-speech (POS) tag bigrams and dependency-relation bigrams, which represent syntactic structural information in utterances. Previous CNN-based classification approaches have commonly exploited word embeddings of morpheme unigrams. In contrast, the proposed model first extracts two kinds of bigram features that well reflect the syntactic structure of utterances and then represents them as vectors using a word embedding technique. As a result, the proposed model using bigram embeddings achieves an accuracy of 89.05%, a relative improvement of 2.8% over competitive models from previous studies.
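    The bigram feature extraction described above can be sketched as follows; the example POS tags are illustrative, not taken from the paper's corpus:

```python
def bigrams(seq):
    """Adjacent pairs of a sequence, e.g. the POS-tag bigram features
    that would be mapped to embeddings before the CNN."""
    return list(zip(seq, seq[1:]))

# Hypothetical POS tags for an utterance like "I booked a flight".
pos_tags = ["PRON", "VERB", "DET", "NOUN"]
print(bigrams(pos_tags))
# [('PRON', 'VERB'), ('VERB', 'DET'), ('DET', 'NOUN')]
```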

    Download PDF (928K)