IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E101.D , Issue 3
Showing 1-35 articles out of 35 articles from the selected issue
Special Section on Foundations of Computer Science — Frontiers of Theoretical Computer Science —
  • Shuji KIJIMA
    2018 Volume E101.D Issue 3 Pages 573
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS
    Download PDF (121K)
  • Hiroshi ETO, Hiroyuki KAWAHARA, Eiji MIYANO, Natsuki NONOUE
    Type: PAPER
    2018 Volume E101.D Issue 3 Pages 574-581
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    In this paper, we study a variant of the Minimum Dominating Set problem. Given an unweighted undirected graph G=(V,E) of n=|V| vertices, the goal of the Minimum Single Dominating Cycle problem (MinSDC) is to find a single shortest cycle which dominates all vertices, i.e., a cycle C such that for the set V(C) of vertices in C and the set N(V(C)) of neighbor vertices of C, V(G)=V(C)∪N(V(C)) and |V(C)| is minimum over all dominating cycles in G [6], [17], [24]. We then consider the (in)approximability of MinSDC when input graphs are restricted to special classes of graphs. We first show that MinSDC remains NP-hard to approximate even when restricted to planar graphs, bipartite graphs, chordal graphs, or r-regular graphs (r≥3). Then, we show the (ln n+1)-approximability and the (1-ε)ln n-inapproximability of MinSDC on split graphs under P≠NP. Furthermore, we explicitly design a linear-time algorithm to solve MinSDC for graphs of bounded treewidth and estimate the hidden constant factor of its running-time bound.
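
    The dominating-cycle condition V(G)=V(C)∪N(V(C)) can be checked directly. Below is a minimal brute-force sketch (exponential time, for illustration only; names such as min_dominating_cycle are ours, and this is not the paper's algorithm):

```python
# Brute-force search for a minimum dominating cycle in a small graph.
# A cycle C dominates G when every vertex is in C or adjacent to C.
from itertools import permutations

def neighbors(adj, vs):
    """Union of the neighbor sets of the vertices in vs."""
    out = set()
    for v in vs:
        out |= adj[v]
    return out

def is_cycle(adj, order):
    """True if the vertex sequence forms a cycle (including wraparound edge)."""
    k = len(order)
    return k >= 3 and all(order[(i + 1) % k] in adj[order[i]] for i in range(k))

def min_dominating_cycle(adj):
    """Smallest dominating cycle by exhaustive search; None if none exists."""
    vertices = list(adj)
    for k in range(3, len(vertices) + 1):
        for order in permutations(vertices, k):
            c = set(order)
            if is_cycle(adj, order) and c | neighbors(adj, c) == set(vertices):
                return list(order)
    return None

# A 4-cycle 0-1-2-3 with a pendant vertex 4 attached to 2: the 4-cycle
# dominates the whole graph, and no triangle exists.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3, 4}, 3: {0, 2}, 4: {2}}
cycle = min_dominating_cycle(adj)
```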

    Download PDF (468K)
  • Takayoshi SHOUDAI, Yuta YOSHIMURA, Yusuke SUZUKI, Tomoyuki UCHIDA, Tet ...
    Type: PAPER
    2018 Volume E101.D Issue 3 Pages 582-592
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    A cograph (complement-reducible graph) is a graph that can be generated from single-vertex graphs by disjoint union and complement operations. Cographs arise in many areas of computer science and have been studied extensively. With the goal of developing an effective data mining method for graph-structured data, in this paper we introduce a graph pattern expression, called a cograph pattern, which is a special type of cograph having structured variables. Firstly, we show that the problem of deciding whether or not a given cograph pattern g matches a given cograph G is NP-complete. Given this result, we consider the polynomial-time learnability of cograph pattern languages defined by cograph patterns whose variables are labeled with mutually distinct labels, called linear cograph patterns. Secondly, we present a polynomial-time matching algorithm for linear cograph patterns. Next, we give a polynomial-time algorithm for obtaining a minimally generalized linear cograph pattern that explains given positive data. Finally, we show that the class of linear cograph pattern languages is polynomial-time inductively inferable from positive data.
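
    The generative definition of cographs can be sketched in a few lines: starting from single vertices, combine graphs by disjoint union and complement (the "join" of two cographs is then complement-union-complement). The representation and names below are ours, for illustration only:

```python
# Build cographs bottom-up from single vertices, as adjacency dicts.
def single(v):
    return {v: set()}

def disjoint_union(g, h):
    out = {v: set(nbrs) for v, nbrs in g.items()}
    out.update({v: set(nbrs) for v, nbrs in h.items()})
    return out

def complement(g):
    vs = set(g)
    return {v: vs - g[v] - {v} for v in g}

def join(g, h):
    """Join = complement of the union of the complements: adds all edges
    between g and h."""
    return complement(disjoint_union(complement(g), complement(h)))

# The triangle K3 is a cograph: join three single vertices.
k3 = join(single('a'), join(single('b'), single('c')))
```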

    Download PDF (705K)
  • Shouhei FUKUNAGA, Yoshimasa TAKABATAKE, Tomohiro I, Hiroshi SAKAMOTO
    Type: PAPER
    2018 Volume E101.D Issue 3 Pages 593-601
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    A grammar compression is a restricted context-free grammar (CFG) that derives a single string deterministically. The goal of a grammar compression algorithm is to produce a smaller CFG by finding and removing duplicate patterns, which is simply a frequent pattern discovery process. Any frequent pattern can be obtained in linear time; however, a huge working space is required for longer patterns, and the entire string must be preloaded into memory. We propose an online algorithm to address this problem approximately within compressed space. For an input sequence of symbols a1,a2,..., let Gi be a grammar compression for the string a1a2...ai. In this study, an online algorithm is one that can compute Gi+1 from (Gi,ai+1) without explicitly decompressing Gi. Let G be a grammar compression for a string S. We say that a variable X approximates a substring P of S within approximation ratio δ iff for any interval [i,j] with P=S[i,j], the parse tree of G has a node labeled with X that derives S[l,r] for a subinterval [l,r] of [i,j] satisfying |[l,r]|≥δ|[i,j]|. Then, G solves the frequent pattern discovery problem approximately within δ iff for any frequent pattern P of S, there exists a variable that approximates P within δ. Here, δ is called the approximation ratio of G for S. Previously, the best approximation ratio achieved by a polynomial-time algorithm was Ω(1/lg^2|P|). The main contribution of this work is a new lower bound Ω(1/(lg*|S| lg|P|)), which improves on the previous bound when lg*|S|<lg|P|. Experimental results demonstrate that the proposed algorithm extracts sufficiently long frequent patterns and significantly reduces memory consumption compared to the offline algorithm of previous work.
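
    The idea of shrinking a CFG by removing duplicate patterns can be illustrated with a toy offline Re-Pair-style compressor that repeatedly replaces the most frequent adjacent pair with a fresh nonterminal. This is only a sketch of the general technique, not the paper's online algorithm; all names are ours:

```python
# Toy grammar compression: replace the most frequent adjacent symbol pair
# with a new rule until no pair occurs at least twice.
from collections import Counter

def grammar_compress(s):
    seq = list(s)
    rules = {}            # nonterminal -> (left, right)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                      # rewrite the sequence
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Decompress a symbol back to the original substring it derives."""
    if sym not in rules:
        return sym
    l, r = rules[sym]
    return expand(l, rules) + expand(r, rules)

seq, rules = grammar_compress("abababab")
restored = "".join(expand(x, rules) for x in seq)
```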

    Download PDF (794K)
  • Masashi TSUCHIDA, Fukuhito OOSHITA, Michiko INOUE
    Type: PAPER
    2018 Volume E101.D Issue 3 Pages 602-610
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    We propose an algorithm for the gathering problem of mobile agents in arbitrary networks (graphs) with Byzantine agents. Our algorithm makes all correct agents meet at a single node in O(fm) time, where f is the upper bound on the number of Byzantine agents and m is the number of edges, under the assumptions that agents have unique IDs and behave synchronously, that each node is equipped with an authenticated whiteboard, and that f is known to the agents. Here, a whiteboard is a node memory in which agents can leave information. Since the existing algorithm achieves gathering without a whiteboard in Õ(n^9 λ) time, where n is the number of nodes and λ is the length of the longest ID, our algorithm shows that an authenticated whiteboard can significantly reduce the time for the gathering problem in Byzantine environments.

    Download PDF (388K)
  • Mohd SHAHRIZAN OTHMAN, Aleksandar SHURBEVSKI, Hiroshi NAGAMOCHI
    Type: LETTER
    2018 Volume E101.D Issue 3 Pages 611-612
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Given an edge-weighted bipartite digraph G=(A,B;E), the Bipartite Traveling Salesman Problem (BTSP) asks for the minimum cost of a Hamiltonian cycle of G, or to determine that none exists. When |A|=|B|=n, the BTSP can be solved using polynomial space in O*(4^(2n) n^(log n)) time by the divide-and-conquer algorithm of Gurevich and Shelah (SIAM Journal on Computing, 16(3), pp.486-502, 1987). We adapt their algorithm to the bipartite case and show an improved time bound of O*(4^(2n)), saving the n^(log n) factor.

    Download PDF (75K)
  • Inbok LEE, Victor C. VALGENTI, Min S. KIM, Sung-il OH
    Type: LETTER
    2018 Volume E101.D Issue 3 Pages 613-615
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    In this paper we present a simple heuristic, based on suffix sorting, for constructing smaller automata for a set of regular expressions: we find common prefixes and suffixes among the regular expressions and merge them. This is an important problem in network security. We applied our approach to random and real-world regular expressions. Experimental results show that our approach yields up to a 12-fold improvement in throughput.
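
    The benefit of merging common prefixes can be seen with a plain trie over literal strings (regular-expression operators omitted): shared prefixes become shared automaton states. A minimal sketch with our own names, not the paper's construction:

```python
# Merge common prefixes of a pattern set into a trie; shared prefixes
# become shared states instead of one branch per pattern.
def build_trie(patterns):
    root = {}
    for p in patterns:
        node = root
        for ch in p:
            node = node.setdefault(ch, {})
        node["$"] = True        # accepting marker
    return root

def count_states(node):
    """Number of trie nodes (accepting markers excluded)."""
    return 1 + sum(count_states(c) for k, c in node.items() if k != "$")

patterns = ["abcd", "abce", "abxy"]
trie = build_trie(patterns)
# Three separate automata would need 5 states each (start + 4 characters);
# the trie shares the prefixes "ab" and "abc", using 8 states in total.
shared_states = count_states(trie)
```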

    Download PDF (279K)
Regular Section
  • Rui YAO, Ping ZHU, Junjie DU, Meiqun WANG, Zhaihe ZHOU
    Type: PAPER
    Subject area: Computer System
    2018 Volume E101.D Issue 3 Pages 616-626
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Evolvable hardware (EHW) based on field-programmable gate arrays (FPGAs) opens up new possibilities for building efficient adaptive systems. State-of-the-art EHW systems based on virtual reconfiguration and on dynamic partial reconfiguration (DPR) both have their limitations: the former incurs a huge area overhead and circuit delay, while the latter suffers from slow configuration speed and low flexibility. We therefore propose a general, low-cost, fast hybrid reconfiguration architecture that merges the high flexibility of virtual reconfiguration with the low resource cost of DPR. Moreover, bitstream relocation is introduced to save bitstream storage space, and discrepancy configuration is adopted to reduce reconfiguration time. An embedded RAM core stores the bitstreams, which further accelerates reconfiguration. The proposed architecture is evaluated through the online evolution of a digital image filter implemented on the Xilinx Virtex-6 FPGA development board ML605. The experimental results show that our system achieves lower resource overhead, a higher operating frequency, faster reconfiguration, and smaller bitstream storage requirements than previous works.

    Download PDF (1169K)
  • Wenrui DONG, Guangming LIU
    Type: PAPER
    Subject area: Computer System
    2018 Volume E101.D Issue 3 Pages 627-643
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Erasure codes are considered one of the most promising techniques for data reliability enhancement and storage efficiency in modern distributed storage systems. However, erasure codes often suffer from a time-consuming coding process that makes them nearly impractical. The opportunity to solve this problem probably lies in parallelizing erasure-code-based applications on modern multi-/many-core processors to fully exploit the hardware resources of those platforms. However, complicated data allocation and limited I/O throughput pose a great challenge to such parallelization. To address this challenge, we propose a general multi-threaded parallel coding approach consisting of a general multi-threaded parallel coding model, named MTPerasure, and two parallel coding algorithms, named sdaParallel and ddaParallel, adapted to different I/O circumstances. MTPerasure focuses on high-level data allocation; it is applicable to all erasure codes and can be implemented without any modification of the low-level coding algorithms. sdaParallel divides the data into several parts that are allocated to threads statically, eliminating synchronization latency among multiple threads and improving parallel coding performance under the dummy I/O mode. ddaParallel employs two threads to execute I/O reading and writing independently on small pieces, which increases I/O throughput, and the data pieces are assigned to the coding threads dynamically. A special thread scheduling algorithm is also proposed to reduce thread migration latency. To evaluate our proposal, we parallelized the popular open-source library jerasure based on our approach. A detailed performance comparison with the original sequential coding program shows that the proposed parallel approach achieves speedups from 1.4x up to 7x and better utilization of computation and I/O resources.
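
    The static-allocation idea (each thread owns a fixed slice of the data, so the hot path needs no synchronization) can be sketched with a toy XOR parity instead of a real erasure code. This is our illustration, not the jerasure-based implementation; all names are ours:

```python
# Static data allocation: fixed column ranges per thread, each encoded
# independently, results concatenated at the end.
from concurrent.futures import ThreadPoolExecutor

def xor_parity(blocks):
    """Parity of equal-length data blocks (toy stand-in for erasure coding)."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def parallel_parity(blocks, n_threads=2):
    width = len(blocks[0])
    step = (width + n_threads - 1) // n_threads
    # Static allocation decided up front; threads never coordinate after this.
    ranges = [(i, min(i + step, width)) for i in range(0, width, step)]
    def work(lo_hi):
        lo, hi = lo_hi
        return xor_parity([b[lo:hi] for b in blocks])
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        return b"".join(ex.map(work, ranges))

blocks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x11\x22\x33\x44"]
parity = parallel_parity(blocks)
```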

    Download PDF (3127K)
  • Takayuki OMORI, Katsuhisa MARUYAMA
    Type: PAPER
    Subject area: Software Engineering
    2018 Volume E101.D Issue 3 Pages 644-658
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Understanding which refactoring transformations were performed is in demand in modern software development. Traditionally, researchers have tackled the understanding of code changes using history data derived from version control systems. Those studies point out problems of the traditional approach, such as the entanglement of multiple changes. To alleviate these problems, operation histories recorded by IDEs' code editors are now available as a new source of software evolution data. By replaying such histories, we can investigate past code changes at a fine-grained level. However, prior studies did not provide sufficient evidence of their effectiveness for detecting refactoring transformations. This paper describes an experiment in which participants detect refactoring transformations performed by other participants after investigating the code changes with an operation-replay tool and with diff tools. The results show that both approaches have factors that lead to misunderstanding and overlooking of refactoring transformations. Two negative factors, divided operations and generated compound operations, were observed in the operation-based approach, whereas all the negative factors in the difference-based approach resulted from three problems: tangled, shadowed, and out-of-order code changes. This paper also presents seven concrete examples of participants' mistakes in both approaches. These findings provide hints for improving existing tools for understanding code changes and detecting refactoring transformations.

    Download PDF (2414K)
  • Gilseok HONG, Seonghyeon KANG, Chang soo KIM, Jun-Ki MIN
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2018 Volume E101.D Issue 3 Pages 659-667
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    In this paper, we study parallel join processing to improve the performance of the merge phase of sort-merge join by exploiting all of the parallelism provided by mainstream CPUs. Modern CPUs support SIMD instruction sets with wide SIMD registers, which allow multiple data items to be processed by each instruction. We thus devise an efficient parallel join algorithm, called Parallel Merge Join with SIMD instructions (PMJS). In the proposed algorithm, we exploit data parallelism through SIMD instructions, and we further accelerate performance by avoiding conditional branch instructions. To take advantage of multiple cores, the proposed algorithm is also multi-threaded. To distribute the workload evenly across threads, we devise an efficient workload balancing algorithm based on a kernel density estimator, which estimates the workload of each thread accurately.
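
    The merge phase being accelerated can be sketched in scalar form: walk two sorted key arrays in lockstep and emit matching pairs. This is only the baseline that PMJS vectorizes with SIMD, written with our own names:

```python
# Scalar merge phase of sort-merge join over two sorted (key, payload) lists.
def merge_join(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Emit the cross product of the current equal-key run on the right.
            j0 = j
            while j0 < len(right) and right[j0][0] == lk:
                out.append((lk, left[i][1], right[j0][1]))
                j0 += 1
            i += 1
    return out

left = [(1, "a"), (3, "b"), (3, "c"), (7, "d")]
right = [(3, "X"), (5, "Y"), (7, "Z")]
rows = merge_join(left, right)
```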

    Download PDF (1195K)
  • Zhenghang CUI, Issei SATO, Masashi SUGIYAMA
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2018 Volume E101.D Issue 3 Pages 668-677
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    With the emergence and thriving development of social networks, a huge number of short texts are accumulated and need to be processed. Inferring the latent topics of collected short texts is an essential task for understanding their hidden structure and predicting new content. The biterm topic model (BTM) was recently proposed for short texts to overcome the sparseness of document-level word co-occurrences by directly modeling the generation process of word pairs. Stochastic inference algorithms based on collapsed Gibbs sampling (CGS) and collapsed variational inference have been proposed for BTM. However, they either incur high computational complexity or rely on very crude estimation that does not preserve sufficient statistics. In this work, we develop a stochastic divergence minimization (SDM) inference algorithm for BTM that achieves better predictive likelihood in a scalable way. Experiments show that SDM-BTM trained on 30% of the data outperforms the best existing algorithm trained on the full data.
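
    The "biterm" unit that BTM models is simply every unordered word pair co-occurring in one short text, rather than document-word counts. Extraction can be sketched as follows (our toy corpus and names, for illustration only):

```python
# Extract biterms (unordered co-occurring word pairs) from short documents.
from itertools import combinations
from collections import Counter

def biterms(doc_words):
    """All unordered word pairs in one short document."""
    return [tuple(sorted(p)) for p in combinations(doc_words, 2)]

corpus = [["apple", "fruit", "sweet"], ["apple", "pie"]]
counts = Counter(b for doc in corpus for b in biterms(doc))
```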

    Download PDF (621K)
  • MohammadAmin LOTFOLAHI, Cheng-Zen YANG, I-Shyan HWANG, AliAkbar NIKOUK ...
    Type: PAPER
    Subject area: Information Network
    2018 Volume E101.D Issue 3 Pages 678-684
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    The Ethernet passive optical network (EPON) is one of the most energy-efficient access networks. Many studies have pursued maximum energy saving in the EPON; however, there is a trade-off between achieving maximum energy saving and guaranteeing QoS. In this paper, a predictive doze mode mechanism in an enhanced EPON architecture is proposed to achieve energy saving using a logistic regression (LR) model. The optical line terminal (OLT) in the EPON employs an enhanced Doze Manager that applies the LR model to predict the doze periods of the optical network units (ONUs). The doze periods are estimated more accurately from historical high-priority traffic information, and the logistic regression DBA (LR-DBA) performs dynamic bandwidth allocation accordingly. The proposed LR-DBA mechanism is compared with a scheme without energy saving (IPACT) and another scheme with energy saving (GDBA). Simulation results show that LR-DBA effectively reduces the power consumption of ONUs in most cases, with improvements of up to 45%, while guaranteeing QoS metrics such as high-priority traffic delay and jitter.
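
    A logistic-regression doze decision can be sketched as a probability computed from recent traffic features. The features, weights, and function names below are invented for illustration and are not from the paper:

```python
# Logistic model sketch: P(ONU can doze | recent traffic features).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_doze(features, weights, bias):
    """Probability that the ONU can safely enter doze mode."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

# Hypothetical features: normalized high-priority arrival rate, queue length.
weights = [-3.0, -2.0]   # heavy traffic argues against dozing
bias = 2.0
p_idle = predict_doze([0.05, 0.1], weights, bias)   # nearly idle ONU
p_busy = predict_doze([0.9, 0.8], weights, bias)    # busy ONU
```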

    Download PDF (1114K)
  • Ryoya FURUKAWA, Ryoichi ISAWA, Masakatu MORII, Daisuke INOUE, Koji NAK ...
    Type: PAPER
    Subject area: Information Network
    2018 Volume E101.D Issue 3 Pages 685-697
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Malicious software (malware) poses various significant challenges. One is the need to retrieve plain-text messages transmitted between malware and herders through an encrypted network channel. Those messages (e.g., commands for the malware) can be a useful hint for revealing malicious activities. However, retrieving them is challenging even if the malware is executed on an analysis computer. To assist analysts in retrieving the plain text from memory, this paper presents FCReducer (Function Candidate Reducer), which provides a small candidate set of cryptographic functions called by malware. Given this set, an analyst checks the candidates to locate the cryptographic functions. If the decryption function is found, she then obtains its output as the plain text. Although existing systems such as CipherXRay have been proposed to locate cryptographic functions, they rely heavily on fine-grained dynamic taint analysis (DTA). This makes them weak against under-tainting, i.e., failure to track data propagation. To overcome under-tainting, FCReducer conducts coarse-grained DTA and generates a data dependency graph of functions in which the root function accesses an encrypted message; this does not require fine-grained DTA. FCReducer then applies a community detection method such as InfoMap to the graph to detect a community of functions that plays a role in decryption or encryption. The functions in this community are provided as candidates. In experiments using 12 samples, including four malware specimens, we confirmed that FCReducer reduced, for example, the 4830 functions called by the Zeus malware to 0.87% as candidates. We also propose a heuristic to reduce the candidates further.

    Download PDF (1044K)
  • Rongzhen LI, Qingbo WU, Yusong TAN, Junyang ZHANG
    Type: PAPER
    Subject area: Information Network
    2018 Volume E101.D Issue 3 Pages 698-708
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Software-defined networking (SDN) has emerged as a promising approach to enabling network innovation; it can provide network virtualization through a hypervisor plane so that multiple virtual networks share the same cloud datacenter network. However, this attractive approach brings new problems: because of the separation of the control and forwarding planes, the network becomes more susceptible to the failure of network components. The centralized control and the virtual networks sharing the same physical network become fragile and prone to failure if the topology of the virtual network and the control paths are not properly designed. Thus, how to map a virtual network onto the physical datacenter network in virtualized SDN while guaranteeing survivability against the failure of physical components is extremely important, and the many factors that influence virtual network survivability should be fully considered. In this paper, combining virtual networks with SDN, a topology-aware survivable virtual network embedding approach is proposed to improve the survivability of virtual networks through an enhanced virtual controller embedding strategy that optimizes the placement of the virtual network without using any backup resources. The strategy explicitly takes into account the network delay and the number of disjoint paths between the virtual controller and each virtual switch to minimize the expected percentage of control path loss with a survivability factor. Extensive experimental evaluations verify that the proposed technique improves survivability and network delay while keeping the other metrics within reasonable bounds.

    Download PDF (1129K)
  • Shujiao LIAO, Qingxin ZHU, Rui LIANG
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2018 Volume E101.D Issue 3 Pages 709-718
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Rough set theory is an important branch of data mining and granular computing, in which neighborhood rough sets were introduced to deal with numerical and hybrid data. In this paper, we propose a new concept called the inconsistent neighborhood, which extracts the inconsistent objects from a traditional neighborhood. Firstly, a series of interesting properties are obtained for inconsistent neighborhoods. In particular, some of these properties yield new ways to compute the quantities used in neighborhood rough sets. Then, a fast forward attribute reduction algorithm is proposed by applying the obtained properties. Experiments on twelve UCI datasets show that the proposed algorithm obtains the same attribute reduction results as existing algorithms in the neighborhood rough set domain while running much faster. This validates that employing inconsistent neighborhoods is advantageous in applications of neighborhood rough sets. The study provides a new insight into neighborhood rough set theory.
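
    The two notions can be sketched concretely: the δ-neighborhood of a sample is every sample within distance δ of it, and its inconsistent part is the subset whose decision label differs. The toy definitions, metric, and data below are ours, for illustration only:

```python
# delta-neighborhood and its inconsistent subset over labeled samples.
def neighborhood(data, x, delta):
    """Indices of samples within L1 distance delta of sample x."""
    def dist(a, b):
        return sum(abs(p - q) for p, q in zip(a, b))
    return [i for i, (xi, _) in enumerate(data) if dist(xi, data[x][0]) <= delta]

def inconsistent_neighborhood(data, x, delta):
    """Neighbors of x that carry a different decision label."""
    label = data[x][1]
    return [i for i in neighborhood(data, x, delta) if data[i][1] != label]

# Each sample: (feature vector, class label).
data = [([0.0], "A"), ([0.1], "A"), ([0.2], "B"), ([0.9], "B")]
nb = neighborhood(data, 0, 0.25)                 # samples near sample 0
inc = inconsistent_neighborhood(data, 0, 0.25)   # the differently labeled ones
```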

    Download PDF (772K)
  • Daiki SUEHIRO, Kohei HATANO, Eiji TAKIMOTO
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2018 Volume E101.D Issue 3 Pages 719-729
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Finding linear functions that maximize AUC scores is important in ranking research. A typical approach to the ranking problem is to reduce it to a binary classification problem over a new instance space consisting of all pairs of positive and negative instances. Specifically, this approach is formulated as hard- or soft-margin optimization problems over pn pairs of p positive and n negative instances. Solving the optimization problems directly is impractical, since we have to deal with a sample of size pn, which is quadratically larger than the original sample size p+n. In this paper, we reformulate the ranking problem as variants of hard- and soft-margin optimization problems over p+n instances. The resulting classifiers of our methods are guaranteed to achieve a certain level of AUC score.
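
    The pairwise reduction rests on the fact that AUC equals the fraction of (positive, negative) pairs the scorer orders correctly, which is why the naive instance space has size pn. A minimal illustration with made-up scores:

```python
# AUC as the fraction of correctly ordered (positive, negative) pairs.
def auc(scores_pos, scores_neg):
    """Fraction of the p*n pairs ranked correctly (ties count half)."""
    total = len(scores_pos) * len(scores_neg)
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / total

pos = [0.9, 0.8, 0.4]   # scores of p = 3 positive instances
neg = [0.7, 0.3]        # scores of n = 2 negative instances
score = auc(pos, neg)   # 5 of the 6 pairs are ordered correctly
```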

    Download PDF (504K)
  • Xun CAO, Naomi YAMASHITA, Toru ISHIDA
    Type: PAPER
    Subject area: Human-computer Interaction
    2018 Volume E101.D Issue 3 Pages 730-739
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Previous research has shown that transcripts generated by automatic speech recognition (ASR) technologies can improve the listening comprehension of non-native speakers (NNSs). However, we still lack a detailed understanding of how ASR transcripts affect the listening comprehension of NNSs. To explore this issue, we conducted two studies. The first study examined how the current presentation of ASR transcripts impacts NNSs' listening comprehension. Twenty NNSs engaged in two listening tasks, each under a different condition: C1) audio only and C2) audio + ASR transcripts. The participants pressed a button whenever they encountered a comprehension problem and explained each problem in subsequent interviews. From our data analysis, we found that NNSs adopted different strategies when using the ASR transcripts: some followed the transcripts throughout the listening, while others checked them only when necessary. NNSs also appeared to have difficulty following imperfect and slightly delayed transcripts while listening to speech: many reported difficulties concentrating on listening/reading or shifting between the two. The second study explored how different display methods for ASR transcripts affect NNSs' listening experiences. We focused on two display methods: 1) an accuracy-oriented display, which shows transcripts only after the completion of speech input analysis, and 2) a speed-oriented display, which shows the interim analysis results of the speech input. We conducted a laboratory experiment with 22 NNSs who engaged in two listening tasks with ASR transcripts presented via the two display methods. We found that the more the NNSs paid attention to listening to the audio, the more they tended to prefer the speed-oriented transcripts, and vice versa. Mismatched transcripts were found to have negative effects on NNSs' listening comprehension. Our findings have implications for improving the presentation of ASR transcripts to support NNSs more effectively.

    Download PDF (2271K)
  • Razvan BEURAN, Cuong PHAM, Dat TANG, Ken-ichi CHINEN, Yasuo TAN, Yoich ...
    Type: PAPER
    Subject area: Educational Technology
    2018 Volume E101.D Issue 3 Pages 740-749
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Given the worldwide proliferation of cyberattacks, it is imperative that cybersecurity education and training be addressed in a timely manner. These activities typically require trainees to do hands-on practice in order to consolidate and improve their skills, for which purpose training environments called cyber ranges are used. In this paper we present an open-source system named CyRIS (Cyber Range Instantiation System) that supports this endeavor by fully automating the training environment setup, thus making it possible for any organization to conduct more numerous and varied training activities. CyRIS uses a text-based representation in YAML format to describe the characteristics of the training environment, covering both environment setup and security content generation. Based on this description, CyRIS automatically creates the corresponding cyber range instances on a computer and network infrastructure for simultaneous use by multiple trainees. We have evaluated CyRIS in various realistic scenarios, and our results demonstrate its suitability for cybersecurity education and training, both in terms of features and performance, including for large-scale training sessions with hundreds of participants.

    Download PDF (987K)
  • Reiko KUWA, Tsuneo KATO, Seiichi YAMAMOTO
    Type: PAPER
    Subject area: Speech and Hearing
    2018 Volume E101.D Issue 3 Pages 750-757
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This paper proposes a method of classifying second-language-learner utterances for interactive computer-assisted language learning systems. The classification method uses three types of bilingual evaluation understudy (BLEU) scores as features for a classifier. The three BLEU scores are calculated over three subsets of a learner corpus, divided according to the quality of the utterances. To overcome the data-sparseness problem, the method computes the BLEU scores on a mixture of word and part-of-speech (POS) tag sequences, converted from word sequences by a POS-replacement rule under which words in n-grams are replaced with their POS tags. Experiments on classifying English utterances by Japanese learners demonstrated that the proposed method achieved a classification accuracy of 78.2%, which was 12.3 points higher than a baseline using a single BLEU score.
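
    The motivation for mixed word/POS n-grams is that sparse learner data still produces matching n-grams once open-class words are replaced by their tags. A sketch of such a replacement rule; the tagset, replacement choice, and sentences are invented for illustration, not the paper's rule:

```python
# Mixed word/POS n-grams: replace words of selected POS classes with the tag.
def mixed_ngrams(tagged, n, replace_tags=frozenset({"NN", "VB"})):
    """n-grams over tokens, with words of selected POS replaced by their tag."""
    toks = [tag if tag in replace_tags else word for word, tag in tagged]
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

# Two learner utterances that differ only in their nouns:
s1 = [("i", "PRP"), ("like", "VB"), ("dogs", "NN")]
s2 = [("i", "PRP"), ("like", "VB"), ("cats", "NN")]
g1 = mixed_ngrams(s1, 2)
g2 = mixed_ngrams(s2, 2)
overlap = len(set(g1) & set(g2))   # bigrams now match despite sparse words
```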

    Download PDF (677K)
  • Norimichi UKITA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2018 Volume E101.D Issue 3 Pages 758-766
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This paper proposes an iterative scheme between human action classification and pose estimation in still images. Initial action classification is achieved only by global image features that consist of the responses of various object filters. The classification likelihood of each action weights human poses estimated by the pose models of multiple sub-action classes. Such fine-grained action-specific pose models allow us to robustly identify the pose of a target person under the assumption that similar poses are observed in each action. From the estimated pose, pose features are extracted and used with global image features for action re-classification. This iterative scheme can mutually improve action classification and pose estimation. Experimental results with a public dataset demonstrate the effectiveness of the proposed method both for action classification and pose estimation.

    Download PDF (2154K)
  • Ju Yong CHANG, Ji Young PARK
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2018 Volume E101.D Issue 3 Pages 767-777
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    The present study considers an action-based person identification problem, in which an input action sequence consists of 3D skeletal data from multiple frames. Unlike previous approaches, the type of action is not pre-defined in this work, which requires the subject classifier to possess cross-action generalization capabilities. To achieve that, we present a novel pose-based Hough forest framework, in which each per-frame pose feature casts a probabilistic vote to the Hough space. Pose distribution is estimated from training data and then used to compute the reliability of the vote to deal with the unseen poses in the test action sequence. Experimental results with various real datasets demonstrate that the proposed method provides effective person identification results especially for the challenging cross-action person identification setting.

    Download PDF (950K)
  • Jing ZHANG, Degen HUANG, Kaiyu HUANG, Zhuang LIU, Fuji REN
    Type: PAPER
    Subject area: Natural Language Processing
    2018 Volume E101.D Issue 3 Pages 778-785
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Microblog data contains rich information about real-world events and has great commercial value, so microblog-oriented natural language processing (NLP) tasks have attracted considerable attention from researchers. However, the performance of microblog-oriented Chinese Word Segmentation (CWS) based on deep neural networks (DNNs) is still unsatisfactory. One critical reason is that the existing microblog-oriented training corpora are inadequate for training effective weight matrices for DNNs. In this paper, we propose a novel active learning method to extend the scale of the training corpus for DNNs. However, because microblogs contain a large number of partially overlapping sentences, it is difficult to select samples with high annotation value from raw microblogs during the active learning procedure. To select samples with higher annotation value, a parameter λ is introduced to control the number of repeatedly selected samples. Meanwhile, various strategies are adopted to measure the overall annotation value of a sample during the active learning procedure. Experiments on the benchmark datasets of NLPCC 2015 show that our λ-active learning method outperforms the baseline system and the state-of-the-art method. The results also demonstrate that the performance of DNNs trained on the extended corpus is significantly improved.

    Download PDF (1178K)
  • Pilsung KANG
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2018 Volume E101.D Issue 3 Pages 786-790
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    We present a GPU (graphics processing unit)-accelerated stochastic algorithm implementation for simulating biochemical reaction networks on the latest NVidia architecture. To effectively utilize the massive parallelism offered by the NVidia Pascal hardware, we apply a set of performance-tuning methods and guidelines, such as exploiting the architecture's memory hierarchy, in our algorithm implementation. Based on our experimental results as well as a comparative analysis against CPU-based implementations, we report our initial experiences with the performance of modern GPUs in the context of scientific computing.
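    The letter does not reproduce its GPU kernel, but the kind of stochastic simulation it accelerates is typically the Gillespie direct method. The following is a minimal single-trajectory CPU sketch of that method, with a hypothetical decay reaction as input; it illustrates the algorithm, not the letter's CUDA implementation.

    ```python
    import random

    def gillespie(x, reactions, t_end, seed=0):
        """Gillespie direct-method SSA for a reaction network.

        x:         dict species -> molecule count.
        reactions: list of (rate_fn, stoichiometry) where rate_fn(x) gives the
                   propensity and stoichiometry maps species -> count change.
        """
        rng = random.Random(seed)
        t = 0.0
        while t < t_end:
            props = [rate(x) for rate, _ in reactions]
            total = sum(props)
            if total == 0:
                break                                # no reaction can fire
            t += rng.expovariate(total)              # time to next reaction
            r = rng.uniform(0, total)                # pick which reaction
            for p, (_, stoich) in zip(props, reactions):
                r -= p
                if r <= 0:
                    for sp, d in stoich.items():
                        x[sp] += d
                    break
        return x

    # toy first-order decay A -> 0 with propensity 0.5 * [A]
    state = gillespie({"A": 100}, [(lambda s: 0.5 * s["A"], {"A": -1})], t_end=10.0)
    print(0 <= state["A"] <= 100)
    ```

    On a GPU, many independent trajectories of this loop would run in parallel, one per thread, which is where the memory-hierarchy tuning discussed in the letter matters.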

    Download PDF (247K)
  • Jiahui LUO, Zhijian CHEN, Xiaoyan XIANG, Jianyi MENG
    Type: LETTER
    Subject area: Computer System
    2018 Volume E101.D Issue 3 Pages 791-794
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This work presents a low-complexity lossless electrocardiogram (ECG) compression ASIC for wireless sensors. Three linear predictors targeting different signal characteristics are provided, and prediction is based on a history table that records the optimum predictors for recent samples. Unlike traditional methods that use a unified encoder, the prediction error is encoded by a hybrid Golomb encoder combining Exp-Golomb and Golomb-Rice codes, which can adaptively configure the encoding scheme according to the predictor selection. The novel adaptive prediction and encoding scheme achieves a compression rate of 2.77 on the MIT-BIH Arrhythmia database. Implemented in a 40 nm CMOS process, the design takes a small gate count of 1.82 K with 37.6 nW power consumption under a 0.9 V supply voltage.
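    The predict-then-encode pipeline can be sketched as below. This is a simplified software illustration, not the ASIC's scheme: the three predictor orders, the zig-zag error mapping, the fixed Rice parameter k=2, and the "pick the best predictor per sample" criterion are all illustrative assumptions (the actual design selects predictors via a history table and mixes Exp-Golomb with Golomb-Rice).

    ```python
    def rice_encode(n, k):
        """Golomb-Rice code for a non-negative integer with parameter k:
        unary quotient + '0' separator + k-bit binary remainder."""
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    def zigzag(e):
        """Map a signed prediction error to a non-negative integer."""
        return (e << 1) if e >= 0 else (-e << 1) - 1

    # three linear predictors of increasing order (a common choice in
    # adaptive ECG coders; the paper's exact predictors are not given here)
    predictors = [
        lambda s: s[-1],                       # x[n-1]
        lambda s: 2 * s[-1] - s[-2],           # 2x[n-1] - x[n-2]
        lambda s: 3 * s[-1] - 3 * s[-2] + s[-3],
    ]

    samples = [100, 102, 104, 107, 109]
    hist = samples[:3]
    bits = ""
    for x in samples[3:]:
        # toy criterion: encode with the predictor closest to the true sample
        best = min(predictors, key=lambda p: abs(x - p(hist)))
        bits += rice_encode(zigzag(x - best(hist)), k=2)
        hist.append(x)
    print(bits)
    ```

    Small prediction errors map to short codewords, which is where the compression gain comes from on slowly varying ECG segments.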

    Download PDF (1456K)
  • Seongjin CHOI, Hyeong-Cheol OH
    Type: LETTER
    Subject area: Computer System
    2018 Volume E101.D Issue 3 Pages 795-798
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This paper proposes and analyzes a pipelining scheme for a hardware squarer that can square unsigned integers of up to 12 bits. Each stage is designed and adjusted so that the stage delays are well balanced and the critical path delay of the design does not exceed the reference value, which is set based on the analysis. The resulting design has a critical path delay of approximately 3.5 full-adder delays. In an implementation on an Intel Stratix V FPGA, the design operates at an approximately 23% higher frequency than the comparable pipelined squarer provided in the Intel library.

    Download PDF (382K)
  • Kanyakorn JEWMAIDANG, Takashi ISHIO, Akinori IHARA, Kenichi MATSUMOTO, ...
    Type: LETTER
    Subject area: Software Engineering
    2018 Volume E101.D Issue 3 Pages 799-802
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This paper proposes a method to extract and visualize a library update history in a project. The method identifies reused library versions by comparing source code in a product with existing versions of the library so that developers can understand when their own copy of a library has been copied, modified, and updated.
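    Identifying a reused library version by source comparison can be sketched as matching the product's bundled files against the files of each known release. This is a hedged sketch of the general technique, not the paper's tool: the hashing granularity (whole files), the scoring rule (count of identical files), and all file contents below are illustrative assumptions.

    ```python
    import hashlib

    def file_hash(text):
        return hashlib.sha1(text.encode()).hexdigest()

    def identify_version(product_files, releases):
        """Pick the library release whose files best match the product's copy.

        product_files: dict path -> source text found inside the product.
        releases:      dict version -> dict path -> source text of the release.
        Returns (version, matched, total) for the best-matching release.
        """
        best = None
        for version, files in releases.items():
            matched = sum(
                1 for p, t in files.items()
                if p in product_files
                and file_hash(product_files[p]) == file_hash(t)
            )
            if best is None or matched > best[1]:
                best = (version, matched, len(files))
        return best

    releases = {
        "1.0": {"util.c": "int f(){return 1;}", "io.c": "void g(){}"},
        "1.1": {"util.c": "int f(){return 2;}", "io.c": "void g(){}"},
    }
    product = {"util.c": "int f(){return 2;}", "io.c": "void g(){}"}
    print(identify_version(product, releases))
    ```

    Running the match over successive product revisions yields the update history: a jump in the best-matching version marks an update, while a partial match (matched < total) suggests a locally modified copy.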

    Download PDF (252K)
  • SeongHan SHIN, Kazukuni KOBARA
    Type: LETTER
    Subject area: Information Network
    2018 Volume E101.D Issue 3 Pages 803-807
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    The purpose of password-based anonymous authentication schemes is to provide not only password-based authentication but also user anonymity. Yang et al. [19] proposed a password-based anonymous authentication scheme (which we call the YZWB10 scheme) using password-protected credentials. In this paper, we discuss the user anonymity of the YZWB10 scheme [19] against a third-party attacker, who is much weaker than a malicious server. First, we show that a third-party attacker in the YZWB10 scheme can determine which user actually sent a login request to the server. This attack also implies that the attacker can link different login requests sent later by the same user. Second, we give an effective countermeasure against this attack that does not require any security for storing users' password-protected credentials.

    Download PDF (116K)
  • Koichi ISHIDA, Yoshiaki TANIGUCHI, Nobukazu IGUCHI
    Type: LETTER
    Subject area: Information Network
    2018 Volume E101.D Issue 3 Pages 808-811
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    We have proposed a fish-farm monitoring system. In our system, the transmission range of acoustic waves from sensors attached to the undersides of the fish is not omnidirectional because of obstruction by the fish's bodies. In addition, energy-efficient control is highly important in our system to avoid the need to replace batteries. In this letter, we propose a data-gathering method for fish-farm monitoring that does not use control packets, so that energy-efficient control is possible. Instead, our method uses the transmission-range volume, calculated from the location of the sensor node, to determine the timing of packet transmission. Through simulation evaluations, we show that the data-gathering performance of the proposed method is better than that of comparative methods.

    Download PDF (901K)
  • Lihua GUO
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2018 Volume E101.D Issue 3 Pages 812-816
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Self-paced learning (SPL) gradually trains on data from easy to hard, including more data into the training process in a self-paced manner. The advantage of SPL is its ability to avoid bad local minima, which improves the generalization performance of the system. However, SPL requires an expert to judge the complexity of the data at the beginning of training. Generally, such an expert does not exist at the beginning and is instead learned by gradually training on the samples. Based on this consideration, we add uncertainty in the complexity judgment to SPL and propose self-paced learning with an uncertainty prior (SPUP). To efficiently solve our system's optimization function, an iterative optimization and a statistical simulated annealing method are introduced. The experimental results indicate that SPUP is more robust to outliers and achieves higher accuracy and lower error than SPL.
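    The easy-to-hard training loop of plain SPL (the baseline the letter extends) can be sketched as alternating between selecting low-loss samples and refitting. This is a generic SPL sketch under simple assumptions (least-squares model, hard 0/1 sample weights, a geometric pace schedule), not the letter's SPUP system.

    ```python
    import numpy as np

    def self_paced_fit(X, y, steps=5, lam=0.5, grow=2.0):
        """Self-paced least squares: train on 'easy' samples (loss < lam),
        then raise lam so harder samples enter in later rounds."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            loss = (X @ w - y) ** 2
            v = loss < lam                         # easy-sample indicator
            if v.sum() == 0:
                v = loss == loss.min()             # bootstrap with the easiest
            w = np.linalg.lstsq(X[v], y[v], rcond=None)[0]
            lam *= grow                            # self-paced schedule
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))
    y = X @ np.array([1.0, -2.0])
    y[:3] += 10.0                                  # a few gross outliers
    w = self_paced_fit(X, y)
    print(np.allclose(w, [1.0, -2.0], atol=0.2))
    ```

    Because the outliers keep a large loss, they never fall below the pace threshold and the fit stays close to the true weights, illustrating the robustness SPL (and, per the letter, SPUP even more so) aims for.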

    Download PDF (940K)
  • Hongbin LIN, Zheng WU, Dong LEI, Wei WANG, Xiuping PENG
    Type: LETTER
    Subject area: Pattern Recognition
    2018 Volume E101.D Issue 3 Pages 817-820
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This letter presents a novel tensor voting mechanism, analytic tensor voting (ATV), to overcome the difficulties of original tensor voting (OTV), especially its inefficiency. One of its main advantages is its explicit voting formulations, which benefit both the completeness of tensor voting theory and computational efficiency. First, a new decay function is designed following the basic spirit of the decay function in OTV. Second, analytic stick tensor voting (ASTV) is formulated using the new decay function. Third, analytic plate and ball tensor voting (APTV, ABTV) are formulated through controllable stick tensor construction and tensorial integration. As a result, each tensor vote can be computed by several non-iterative matrix operations, remarkably improving the efficiency of tensor voting. Experimental results validate the effectiveness of the proposed method.
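    For context, a single 2D stick vote in the classic OTV formulation (the scheme ATV reformulates) can be sketched as below, using the standard arc-length/curvature decay exp(-(s² + cκ²)/σ²) and a 45° voting cone. This is a simplified sketch of OTV, not the letter's analytic formulation; the constants and the 2D restriction are illustrative assumptions.

    ```python
    import numpy as np

    def stick_vote(voter, receiver, normal, sigma=1.0, c=0.087):
        """One 2D stick tensor vote (classic formulation, simplified).

        voter/receiver: 2D points; normal: unit normal of the voter's stick.
        Returns the 2x2 second-order tensor cast at the receiver.
        """
        v = receiver - voter
        l = np.linalg.norm(v)
        if l == 0:
            return np.outer(normal, normal)
        tangent = np.array([normal[1], -normal[0]])
        theta = np.arctan2(abs(v @ normal), abs(v @ tangent))
        if theta > np.pi / 4:                      # outside the voting cone
            return np.zeros((2, 2))
        s = l if theta == 0 else theta * l / np.sin(theta)   # arc length
        kappa = 2 * np.sin(theta) / l                        # curvature
        decay = np.exp(-(s**2 + c * kappa**2) / sigma**2)
        # cast normal: voter normal rotated by 2*theta toward the receiver
        sign = 1.0 if v @ normal >= 0 else -1.0
        a = sign * 2 * theta
        rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        n2 = rot @ normal
        return decay * np.outer(n2, n2)

    T = stick_vote(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                   np.array([0.0, 1.0]))
    print(T.shape, round(np.trace(T), 3))
    ```

    OTV evaluates such a vote per voter-receiver pair; ATV's contribution is to replace this pointwise procedure with closed-form, non-iterative matrix operations.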

    Download PDF (268K)
  • Ryohei SASAKI, Katsumi KONISHI, Tomohiro TAKAHASHI, Toshihiro FURUKAWA
    Type: LETTER
    Subject area: Speech and Hearing
    2018 Volume E101.D Issue 3 Pages 821-825
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    This letter addresses the audio declipping problem and proposes a multiple-matrix rank minimization approach. We assume that short-time audio signals satisfy an autoregressive (AR) model and formulate the declipping problem as a multiple-matrix rank minimization problem. To solve this problem, an iterative algorithm based on the iterative partial matrix shrinkage (IPMS) algorithm is provided. Numerical examples show its efficiency.
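    The key structural fact behind the formulation is that a signal obeying a low-order AR model yields a low-rank Hankel matrix, so rank minimization is a natural surrogate for "AR-like". The sketch below illustrates that fact plus one singular-value-shrinkage step; it is not the IPMS algorithm itself, and the AR(2) coefficients and matrix sizes are illustrative assumptions.

    ```python
    import numpy as np

    def hankel(x, rows):
        """Hankel matrix of a signal; low-rank when x follows an AR model."""
        cols = len(x) - rows + 1
        return np.array([x[i:i + cols] for i in range(rows)])

    def shrink_rank(M, k):
        """Keep only the k largest singular values (a partial-shrinkage step
        of the kind used inside rank-minimization iterations)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s[k:] = 0.0
        return (U * s) @ Vt

    # AR(2) signal: x[n] = 1.6 x[n-1] - 0.9 x[n-2]  ->  Hankel rank is 2
    x = [1.0, 0.5]
    for _ in range(30):
        x.append(1.6 * x[-1] - 0.9 * x[-2])
    H = hankel(np.array(x), rows=6)
    print(np.linalg.matrix_rank(H))       # low rank despite 6 rows
    ```

    In a declipping iteration, the clipped samples would be treated as free variables and alternately updated to keep such Hankel matrices low rank while agreeing with the unclipped samples.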

    Download PDF (216K)
  • Zhan WANG, Ping-an DU, Jian LIU
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2018 Volume E101.D Issue 3 Pages 826-829
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Irreversible thermal paints, or temperature-sensitive paints, are a special kind of temperature sensor that indicates the temperature gradient through color change; they are widely used for off-line temperature measurement during aero-engine tests. Unfortunately, the hot gas flow within the engine during measurement always degrades the paint color, causing a serious saturation reduction and contrast loss in the paint colors. This phenomenon makes it more difficult to interpret the thermal paint test results. Existing contrast enhancement algorithms can significantly increase image contrast but cannot effectively preserve the hue of the paint images, which causes color shift. In this paper, we propose a color restoration method for thermal paint images. The method utilizes the atmospheric scattering model to restore the lost contrast and saturation information, so that the hue is preserved and the temperature can be precisely interpreted from the image.
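    Inverting the atmospheric scattering model I = J·t + A·(1 − t) recovers the scene radiance J from the degraded observation I, which is the model-based step the abstract refers to. The sketch below assumes a known airlight A and per-pixel transmission t (in practice both must be estimated, which is where the paper's method would differ from this toy inversion).

    ```python
    import numpy as np

    def restore(I, A, t, t_min=0.1):
        """Invert the atmospheric scattering model I = J*t + A*(1-t)
        to recover the scene radiance J (per-pixel transmission t)."""
        t = np.maximum(t, t_min)             # avoid amplifying noise as t -> 0
        return (I - A) / t[..., None] + A

    # toy 1x2 RGB image degraded by a uniform 'airlight' A with t = 0.5
    J_true = np.array([[[0.8, 0.2, 0.1], [0.1, 0.6, 0.9]]])
    A = np.array([0.7, 0.7, 0.7])
    t = np.full((1, 2), 0.5)
    I = J_true * t[..., None] + A * (1 - t[..., None])
    print(np.allclose(restore(I, A, t), J_true))
    ```

    Because the inversion rescales each channel about A rather than stretching channels independently, the hue of the restored colors is preserved, unlike generic contrast stretching.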

    Download PDF (1163K)
  • Shilei CHENG, Song GU, Maoquan YE, Mei XIE
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2018 Volume E101.D Issue 3 Pages 830-834
    Published: March 01, 2018
    Released: March 01, 2018
    JOURNALS FREE ACCESS

    Human action recognition in videos has drawn huge research interest in computer vision. The Bag-of-Words (BoW) model is commonly used to obtain video-level representations; however, it roughly assigns each feature vector to its nearest visual word, and the collection of unordered words ignores the interest points' spatial information, inevitably causing nontrivial quantization errors and limiting improvements in classification rates. To address these drawbacks, we propose an approach for action recognition that encodes spatio-temporal log-Euclidean covariance matrix (ST-LECM) features within a low-rank and sparse representation framework. Motivated by low-rank matrix recovery, local descriptors in a spatio-temporal neighborhood have similar representations and should be approximately low rank. The learned coefficients can not only capture the global data structure but also preserve local consistency. Experimental results show that the proposed approach yields excellent recognition performance on synthetic video datasets and is robust to action variability, view variations, and partial occlusion.

    Download PDF (1390K)
Errata