-
Yan ZHANG, Hongyan MAO
Article type: PAPER
Subject area: Fundamentals of Information Systems
2016 Volume E99.D Issue 9 Pages 2239-2247
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In this paper, the integration of dynamic plant-wide optimization and distributed generalized predictive control (DGPC) is presented for serially connected processes. On the top layer, chance-constrained programming (CCP) is employed in the plant-wide optimization under economic and model uncertainties, so that the constraints containing stochastic parameters are guaranteed to be satisfied with a high probability. Deterministic equivalents are derived for linear and nonlinear individual chance constraints, and an algorithm is developed to search for the solution to the joint probability constrained problem. On the lower layer, a distributed GPC method based on neighborhood optimization with one-step-delay communication is developed for on-line control of the whole system. Simulation studies on the furnace-temperature set-point optimization problem of a walking-beam-type reheating furnace verify the effectiveness and practicality of the proposed scheme.
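As a concrete illustration of the kind of deterministic equivalent involved, here is the standard result for a linear individual chance constraint with a Gaussian coefficient vector; this is a textbook transformation, not necessarily the paper's exact derivation:

```latex
% Linear chance constraint with a ~ N(\bar{a}, \Sigma);
% \Phi^{-1} is the standard normal quantile function.
\Pr\{a^{\mathsf{T}} x \le b\} \ge \alpha
\;\Longleftrightarrow\;
\bar{a}^{\mathsf{T}} x + \Phi^{-1}(\alpha)\,\sqrt{x^{\mathsf{T}} \Sigma\, x} \le b
```

The stochastic constraint thus becomes a deterministic second-order cone constraint, solvable by standard convex optimization when alpha is at least 0.5.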
-
Aseffa DEREJE TEKILU, Chin-Hsien WU
Article type: PAPER
Subject area: Software System
2016 Volume E99.D Issue 9 Pages 2248-2258
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
A map-reduce framework is popular for big data analysis. In the typical map-reduce framework, both the master node and the worker nodes use hard-disk drives (HDDs) as local disks for the map-reduce computation. However, because of the inherent mechanical limitations of HDDs, I/O performance becomes a bottleneck for the map-reduce framework when I/O-intensive applications (e.g., sorting) are performed. Replacing HDDs with solid-state drives (SSDs) is not economical, although SSDs offer better performance than HDDs. In this paper, we propose a virtualization-based hybrid storage system for the map-reduce framework. The objective is to combine the fast access of SSDs with the low cost of HDDs, realizing an economical design that improves the I/O performance of a map-reduce framework in a virtualization environment. We propose three storage combinations: SSD-based, HDD-based, and a hybrid of the two that balances speed, capacity, and lifetime. According to our experiments, the hybrid SSD/HDD storage system offers superior performance and economy.
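A minimal sketch of how such a hybrid placement rule might look; the role names and the rule itself are illustrative assumptions, not the paper's actual policy:

```python
def choose_tier(file_role: str) -> str:
    """Hypothetical tier selection for a hybrid SSD/HDD map-reduce setup:
    latency-critical, short-lived intermediate data goes to the SSD, while
    bulk sequential data stays on the HDD to save cost and SSD lifetime."""
    if file_role in ("shuffle", "spill", "temp"):
        return "ssd"   # random, latency-sensitive I/O benefits most from flash
    return "hdd"       # large sequential reads/writes are cheap on disk

# Example: intermediate sort spills land on the SSD tier.
assert choose_tier("spill") == "ssd"
```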
-
Hiroaki AKUTSU, Kazunori UEDA, Takeru CHIBA, Tomohiro KAWAGUCHI, Norio ...
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2016 Volume E99.D Issue 9 Pages 2259-2268
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding increases, the probability of data loss decreases, but overhead is incurred in normal write operations and in additional storage for coding. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems become more subject to multiple drive failures, and (ii) distributing stripes among many drives can speed up rebuilds but increases the risk of data loss due to multiple drive failures. If data loss occurs because of multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method which we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks starting from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We modeled the dynamic change in the amount of storage at each redundancy level caused by multiple drive failures, and performed reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. We present a failure impact model and a method for localizing failures. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique scaled well, and the probability of data loss decreased by two orders of magnitude for systems with a thousand drives compared to normal RAID. An appropriate setting of the stripe distribution level could localize the failure.
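A minimal sketch of the prioritization idea behind Dynamic Refuging, assuming stripes are tracked as (id, remaining-redundancy) pairs; the actual method's block-selection strategy for reads is not reproduced here:

```python
import heapq

def rebuild_order(stripes):
    """Repair stripes with the least remaining redundancy first,
    since they are closest to data loss under further failures."""
    heap = [(redundancy, stripe_id) for stripe_id, redundancy in stripes]
    heapq.heapify(heap)
    order = []
    while heap:
        redundancy, stripe_id = heapq.heappop(heap)
        if redundancy > 0:            # redundancy 0 means data already lost
            order.append(stripe_id)
    return order

# A stripe that lost two of its three redundant blocks is rebuilt first.
print(rebuild_order([("s1", 3), ("s2", 1), ("s3", 2)]))  # ['s2', 's3', 's1']
```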
-
Xianqiang BAO, Nong XIAO, Yutong LU, Zhiguang CHEN
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2016 Volume E99.D Issue 9 Pages 2269-2282
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
NoSQL systems have become vital components for delivering big data services due to their high horizontal scalability. However, existing NoSQL systems rely on experienced administrators to configure and tune a wide range of configurable parameters for optimized performance. In this work, we present a configuration management framework for NoSQL systems, called xConfig. With xConfig, users can first identify performance-sensitive parameters and capture the tuned parameters for different workloads as configuration policies. Next, based on the tuned policies, xConfig can be implemented as the corresponding configuration optimization system for a specific NoSQL system. It can also be used to analyze the range of configurable parameters that may impact the runtime performance of NoSQL systems. We implement a prototype called HConfig based on HBase; the parameter tuning strategies of HConfig generate tuned policies and enable HBase to run much more efficiently on both individual worker nodes and the entire cluster. The write-intensive evaluation results show that HBase under write-intensive policies outperforms both the default configuration and some existing configurations, offering significantly higher throughput.
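A sketch of what a tuned configuration policy could look like; the parameter names below are real HBase settings, but the values and the policy structure are illustrative assumptions, not HConfig's actual output:

```python
# Hypothetical write-intensive policy: trade read-cache space for
# write throughput. Values are illustrative, not the paper's tuning.
WRITE_INTENSIVE_POLICY = {
    "hbase.regionserver.handler.count": 64,                  # more concurrent RPC handlers
    "hbase.hregion.memstore.flush.size": 256 * 1024 * 1024,  # flush the memstore less often
    "hfile.block.cache.size": 0.2,                           # shrink the read block cache
}

def apply_policy(base_conf: dict, policy: dict) -> dict:
    """Overlay a tuned policy onto a base configuration."""
    tuned = dict(base_conf)
    tuned.update(policy)
    return tuned
```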
-
Khalid MAHMOOD, Mazen ALOBAIDI, Hironao TAKAHASHI
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2016 Volume E99.D Issue 9 Pages 2283-2294
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
The automation of traceability links or traceability matrices is important to many software development paradigms. The efficiency and effectiveness of traceability link recovery in distributed software development are becoming increasingly vital due to the complexity of project development, which includes continuous changes in requirements, geographically dispersed project teams, and the complexity of managing the elements of a project: time, money, scope, and people. Therefore, traceability links among the requirements artifacts that fulfill business objectives are also critical to reducing risk and ensuring a project's success. This paper proposes an Autonomous Decentralized Semantic-based Traceability Link Recovery (AD-STLR) architecture. To the best of our knowledge, this is the first architectural approach that uses an autonomous decentralized concept, the DBpedia knowledge base, and the BabelNet 2.5 multilingual dictionary and semantic network for finding similarity among different project artifacts and automating traceability link recovery.
-
Jun-Li LU, Makoto P. KATO, Takehiro YAMAMOTO, Katsumi TANAKA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2016 Volume E99.D Issue 9 Pages 2295-2305
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
We address the problem of entity identification on microblogs, with special attention to indirect reference cases in which entities are not referred to by name. Most studies on entity identification have dealt with entities referred to by their full or partial names or abbreviations, whereas many entities in microblogs are mentioned only indirectly and are difficult to identify in such short texts. We therefore tackled indirect reference cases by developing features that are particularly important for certain types of indirect references and by modeling dependency among referred entities with a Conditional Random Field (CRF) model. In addition, we model non-sequential order dependency while keeping inference tractable by dynamically building dependency among entities. The experimental results suggest that our features were effective for indirect references, and that our CRF model with adaptive dependency was robust even when there were multiple mentions in a microblog, achieving the same high performance as a fully connected CRF model.
-
Chaiyaporn PANYINDEE, Chuchart PINTAVIROOJ
Article type: PAPER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 9 Pages 2306-2319
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
This paper introduces a reversible watermarking algorithm that exploits a predictor and a sorting parameter adapted to each image and each payload. Our method relies on the well-known prediction-error expansion (PEE) technique. Using small prediction-error (PE) values and a well-matched PE sorting parameter greatly decreases image distortion. To exploit adaptable tools, a Gaussian weight predictor and the expanded variance mean (EVM) are used as parameters in this work. A genetic algorithm is also introduced to optimize all parameters and produce the best possible results. Our results show an improvement in image quality compared with previous conventional methods.
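For reference, a minimal sketch of standard PEE embedding, the textbook scheme the paper builds on; the adaptive Gaussian weight predictor and EVM sorting are not shown:

```python
def pee_embed(pixel: int, predicted: int, bit: int, t: int = 2) -> int:
    """Embed one payload bit into a pixel via prediction-error expansion.
    Small errors are expanded to carry the bit; larger ones are shifted
    so the two ranges stay disjoint and embedding stays reversible."""
    e = pixel - predicted
    if -t <= e < t:
        e_marked = 2 * e + bit        # expandable error: carries the bit
    elif e >= t:
        e_marked = e + t              # shift right, no payload
    else:
        e_marked = e - t              # shift left, no payload
    return predicted + e_marked
```

Extraction reverses the mapping: a marked error in [-2t, 2t) yields the bit as e_marked mod 2 and the original error as e_marked // 2, while errors outside that range are simply shifted back.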
-
Masaya MURATA, Hidehisa NAGANO, Kaoru HIRAMATSU, Kunio KASHINO, Shin'i ...
Article type: PAPER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 9 Pages 2320-2331
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In this paper, we first analyze the discriminative power in the Best Match 25 (BM25) formula and provide its calculation method from a Bayesian point of view. The derived discriminative power is quite similar to the exponential inverse document frequency (EIDF) that we previously proposed [1], but retains more preferable theoretical properties. In our previous paper [1], we proposed the EIDF within the framework of the probabilistic information retrieval (IR) method BM25 to address the instance search task, a search for a specific object in videos using an image query. Although the effectiveness of the EIDF was experimentally demonstrated, we did not consider its theoretical justification and interpretation, nor did we describe the use of region-of-interest (ROI) information, which is supposed to be input to the instance search system together with the original image query showing the instance. Therefore, here, we justify the EIDF by calculating the discriminative power in BM25 from the Bayesian viewpoint. We also investigate the effect of ROI information on instance search accuracy and propose two search methods that incorporate the ROI effect into the BM25 video ranking function. We validated the proposed methods through a series of experiments using the TREC Video Retrieval Evaluation instance search task dataset.
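For reference, the standard BM25 ranking function whose discriminative power is being analyzed; the EIDF of [1] replaces the IDF factor, and its exact form is not reproduced here:

```latex
\mathrm{score}(D, Q) = \sum_{t \in Q} \mathrm{IDF}(t)\,
  \frac{f(t, D)\,(k_1 + 1)}
       {f(t, D) + k_1\!\left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)},
\qquad
\mathrm{IDF}(t) = \log \frac{N - n(t) + 0.5}{n(t) + 0.5}
```

Here f(t, D) is the frequency of term t in document D, |D| the document length, avgdl the average document length, N the collection size, n(t) the number of documents containing t, and k_1 and b are tuning constants.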
-
Chao ZHANG, Haitian SUN, Takuya AKASHI
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2016 Volume E99.D Issue 9 Pages 2332-2340
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In this paper, we address the problem of non-parametric template matching, which does not assume any specific deformation model. In real-world matching scenarios, the deformation between a template and a matching result is usually non-rigid and non-linear. We propose a novel approach called local rigidity constraints (LRC). LRC is built on the assumption that local rigidity, i.e., structural persistence between image patches, can help the algorithm achieve better performance. A spatial-relation test is proposed to weight the rigidity between two image patches. When estimating visual similarity in an unconstrained environment, high-level similarity (e.g., under complex geometric transformations) can then be estimated by counting the number of satisfied LRCs. In the searching step, exhaustive matching is possible because of the simplicity of the algorithm, and the global maximum is returned as the final matching result. To evaluate our method, we carry out a comprehensive comparison on a publicly available benchmark and show that our method outperforms the state-of-the-art method.
-
Chao ZHANG, Takuya AKASHI
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2016 Volume E99.D Issue 9 Pages 2341-2350
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In this paper, we address the problem of projective template matching, which aims to estimate the parameters of a projective transformation. Although homography can be estimated by combining key-point-based local features and RANSAC, it can hardly be solved for feature-less images or images with high outlier rates. Estimating the projective transformation remains difficult due to high dimensionality and strong non-convexity. Our approach is to quantize the parameters of the projective transformation with a binary finite field and search for an appropriate solution over the discrete sampling set. The benefit is that we avoid searching among a huge number of potential candidates. Furthermore, in order to approximate the global optimum more efficiently, we develop a level-wise adaptive sampling (LAS) method within a genetic algorithm framework. With LAS, individuals are uniformly selected from each fitness level, and the elite solution finally converges to the global optimum. In the experiments, we compare our method against a popular projective solution and systematically analyze our method. The results show that our method provides convincing performance and has a wider application scope.
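A sketch of the level-wise selection idea, assuming fitness values are binned into equal-width levels; the paper's exact level construction may differ:

```python
import random

def level_wise_select(population, fitness, n_levels, k):
    """Draw k individuals uniformly across fitness levels, so both strong
    and weak regions of the search space stay represented in the sample."""
    scores = [fitness(ind) for ind in population]
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_levels or 1.0          # guard against zero width
    levels = [[] for _ in range(n_levels)]
    for ind, s in zip(population, scores):
        idx = min(int((s - lo) / width), n_levels - 1)
        levels[idx].append(ind)
    nonempty = [lv for lv in levels if lv]
    selected = []
    while len(selected) < k:
        for lv in nonempty:                      # one draw per level, round-robin
            if len(selected) >= k:
                break
            selected.append(random.choice(lv))
    return selected
```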
-
SeungJong NOH, Moongu JEON
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2016 Volume E99.D Issue 9 Pages 2351-2359
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
As the number of surveillance cameras keeps increasing, the demand for automated traffic-monitoring systems is growing. In this paper, we propose a practical vehicle detection method for such systems. In the last decade, vehicle detection has mainly been performed by employing an image-scan strategy based on sliding windows, whereby a pre-trained appearance model is applied to all image areas. In this approach, because the appearance models are built from vehicle sample images, the normalization of the scales and aspect ratios of samples can significantly influence detection performance, so it is crucial to select the normalization sizes carefully. To address this, we present a novel vehicle detection technique. In contrast to conventional methods that determine the normalization sizes without considering the given scene conditions, our technique first learns local region-specific size models based on scene-contextual clues, and then uses the obtained size models to normalize samples and construct more elaborate appearance models, namely local size-specific classifiers (LSCs). LSCs provide advantages in both accuracy and operational speed, because they ignore unnecessary information about vehicles observable in areas far away from each sliding-window position. We conduct experiments on real highway traffic videos and demonstrate that the proposed method achieves 16% higher detection accuracy with at least 3 times faster operation than a state-of-the-art technique.
-
Motohiro NAKAMURA, Shinnosuke OYA, Takahiro OKABE, Hendrik P. A. LENSC ...
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2016 Volume E99.D Issue 9 Pages 2360-2367
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Self-luminous light sources in the real world often have non-negligible sizes and radiate light inhomogeneously. Acquiring a model of such a light source is highly important for accurate image synthesis and understanding. In this paper, we propose an approach to measuring the 4D light fields of self-luminous extended light sources by using a liquid crystal (LC) panel, i.e., a programmable optical filter, and a diffuse-reflection board. The proposed approach recovers the 4D light field from images of the board illuminated by the light radiated from a source and passing through the LC panel. We make use of the fact that the transmittance of the LC panel can be controlled both spatially and temporally. The approach enables multiplexed and adaptive sensing, and is therefore able to acquire 4D light fields more efficiently and densely than the straightforward method. We implemented a prototype setup and confirmed through a number of experiments that our approach is effective for modeling self-luminous extended light sources in the real world.
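A minimal sketch of the multiplexed-sensing idea, under the assumption that each observation is a linear combination of light-field samples selected by the LC-panel mask; the paper's adaptive mask design is not reproduced:

```python
import numpy as np

def recover_samples(observations, masks):
    """Multiplexed sensing: each observation o_k = masks[k] . x, so the
    unknown light-field samples x solve the stacked linear system A x = o
    (least squares handles noise and over-determined mask sets)."""
    A = np.stack([m.ravel() for m in masks])   # one mask pattern per row
    o = np.asarray(observations, dtype=float)
    x, *_ = np.linalg.lstsq(A, o, rcond=None)
    return x
```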
-
Yoshihide KATO, Shigeki MATSUBARA
Article type: PAPER
Subject area: Natural Language Processing
2016 Volume E99.D Issue 9 Pages 2368-2376
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
This paper proposes a method of incrementally constructing semantic representations. Our method is based on Steedman's Combinatory Categorial Grammar (CCG), which has a transparent correspondence between syntax and semantics. In our method, a derivation for a sentence is constructed in an incremental fashion, and the corresponding semantic representation is derived synchronously. Unlike previous approaches, our method uses normal-form CCG derivations. Previous approaches use the most left-branching derivations, called incremental derivations, but cannot process coordinate structures incrementally. Our method overcomes this problem.
-
Byungnam LIM, Yeeun SHIM, Yon Dohn CHUNG
Article type: LETTER
Subject area: Fundamentals of Information Systems
2016 Volume E99.D Issue 9 Pages 2377-2380
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
For efficient processing of large data in a distributed system, Hadoop MapReduce performs task scheduling such that tasks are distributed with consideration of data locality. Data locality, however, is exploited only to a limited extent, since it is pursued on a one-node-at-a-time basis without considering global optimality. In this paper, we propose a novel task scheduling algorithm that considers data locality globally. Through experiments, we show that our algorithm improves the performance of MapReduce in various situations.
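A sketch of one way to pursue locality globally rather than one node at a time; the greedy rule below is an assumption for illustration, not the letter's actual algorithm:

```python
def schedule(tasks, free_slots, block_locations):
    """Assign tasks to nodes, placing the tasks with the fewest data-local
    candidates first, so that a flexible task does not greedily consume
    another task's only local slot (the pitfall of per-node scheduling)."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: len(block_locations[t])):
        local = [n for n in block_locations[task] if free_slots.get(n, 0) > 0]
        any_free = [n for n, c in free_slots.items() if c > 0]
        node = local[0] if local else (any_free[0] if any_free else None)
        if node is None:
            break                      # no remaining capacity
        assignment[task] = node
        free_slots[node] -= 1
    return assignment
```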
-
Yong ZHANG, Wanqiu ZHANG, Dunwei GONG, Yinan GUO, Leida LI
Article type: LETTER
Subject area: Fundamentals of Information Systems
2016 Volume E99.D Issue 9 Pages 2381-2384
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Considering uncertain multi-objective optimization systems with interval coefficients, this letter proposes an interval multi-objective particle swarm optimization algorithm. To improve its performance, a crowding-distance measure based on the distance and the overlap degree of intervals, and a method of updating the archive based on the decision-maker's acceptance coefficient, are employed. Finally, results show that our algorithm is capable of generating an excellent approximation of the true Pareto front.
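One standard way to quantify the overlap degree of two intervals A = [a-, a+] and B = [b-, b+], shown only to illustrate the ingredient the letter uses; the letter's exact definition may differ:

```latex
O(A, B) = \frac{\min(a^{+}, b^{+}) - \max(a^{-}, b^{-})}
               {\min(a^{+} - a^{-},\; b^{+} - b^{-})}
```

Positive values indicate overlapping intervals (1 when the narrower interval is fully contained), while values at or below zero indicate disjoint intervals.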
-
Hongzhe LI, Jaesang OH, Heejo LEE
Article type: LETTER
Subject area: Software System
2016 Volume E99.D Issue 9 Pages 2385-2389
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Finding software vulnerabilities in source code before a program is deployed is crucial to ensuring software quality. Existing source code auditing tools for vulnerability detection generate too many false positives, and only limited types of vulnerabilities can be detected automatically. In this paper, we propose an extendable mechanism that reveals vulnerabilities in source code with few false positives by specifying security requirements and detecting requirement violations at potentially vulnerable sinks. The experimental results show that the proposed mechanism can detect vulnerabilities with zero false positives and indicate its extendability to cover more types of vulnerabilities.
-
Dae Hyun YUM
Article type: LETTER
Subject area: Information Network
2016 Volume E99.D Issue 9 Pages 2390-2394
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Key infection is a lightweight key-distribution protocol for partially compromised wireless sensor networks, where sensor nodes send cryptographic keys in the clear. As the adversary is assumed to be only partially present at the deployment stage, some keys are eavesdropped but others remain secret. To enhance the security of key infection, secrecy amplification combines keys propagated along different paths. Two neighbor nodes W1 and W2 can use another node W3 to update their key: if W3 is outside the adversary's eavesdropping region, the updated key is guaranteed to be secure. To date, the effectiveness of secrecy amplification has been demonstrated only by simulation. In this article, we present the first mathematical analysis of secrecy amplification. Our result shows that the effectiveness of secrecy amplification increases as the distance between the two neighbor nodes decreases.
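A minimal sketch of the key-update step, assuming a fresh secret is relayed through W3 and combined with the old key by a hash; the protocol's exact message flow is not reproduced:

```python
import hashlib

def amplify(current_key: bytes, secret_via_w3: bytes) -> bytes:
    """Secrecy amplification for a W1-W2 pair: hash the existing pairwise
    key with a fresh secret relayed through W3. If W3's links lie outside
    the adversary's eavesdropping region, the updated key is secret even
    if the original key was overheard at deployment."""
    return hashlib.sha256(current_key + secret_via_w3).digest()
```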
-
Xiaofan CHEN, Shunzheng YU
Article type: LETTER
Subject area: Information Network
2016 Volume E99.D Issue 9 Pages 2395-2399
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
DDoS remains a major threat to software-defined networks (SDN). To keep SDN secure, effective DDoS detection techniques are indispensable. Most newly proposed schemes for detecting such attacks on SDN make the SDN controller act as the IDS or as the central server of a collaborative IDS; the controller consequently becomes a target of the attacks and a heavily loaded traffic-collection point. In this paper, a collaborative intrusion detection system is proposed that does not require the controller to play a central role. It is deployed as a modified artificial neural network distributed over the entire substrate of the SDN, dispersing its computation power over the network so that every participating switch performs like a neuron. The system is robust because it presents no individual target, and it obtains a global view of a large-scale distributed attack without aggregating traffic over the network. Emulation results demonstrate its effectiveness.
-
Gyuyeong KIM, Wonjun LEE
Article type: LETTER
Subject area: Information Network
2016 Volume E99.D Issue 9 Pages 2400-2403
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Query response times are critical for cluster computing applications in data centers. In this letter, we argue that to optimize network performance, we should consider the latency of flows that suffer packet loss, which we call tardy flows. We propose two tardy-flow scheduling algorithms and show through performance analysis and simulations that our work offers significant performance gains.
-
Woo Hyun AHN, Sanghyeon PARK, Jaewon OH, Seung-Ho LIM
Article type: LETTER
Subject area: Dependable Computing
2016 Volume E99.D Issue 9 Pages 2404-2409
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In Android OS, we discover that a file-change notification service called inotify constitutes a new side channel that allows malware to identify the file accesses associated with the display of a security-relevant UI screen. This paper proposes a phishing attack that detects victim UI screens in applications by their file accesses and steals private information.
-
Yong-Jo AHN, Xiangjian WU, Donggyu SIM, Woo-Jin HAN
Article type: LETTER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 9 Pages 2410-2412
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
In this letter, fast intra-mode decision algorithms for HEVC Screen Content Coding (SCC) are proposed. HEVC SCC has been developed to efficiently code mixed content consisting of natural video, graphics, and text. Compared to HEVC version 1, SCC encoding complexity increases significantly due to the newly added intra block copy mode. To reduce this heavy encoding complexity, the evaluation order of the multiple intra modes is rearranged, and several early-termination schemes based on intermediate coding information are developed. Based on our evaluation, the proposed method achieves an encoding time reduction of 13-30% with marginal coding gain or loss, compared with HEVC SCC test model 2.0 in the all-intra (AI) case.
-
Mahmoud EMAM, Qi HAN, Liyang YU, Hongli ZHANG
Article type: LETTER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 9 Pages 2413-2416
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
Copy-move or region-duplication forgery is a very common type of image manipulation, in which a region of an image is copied and pasted elsewhere in the same image in order to hide some detail. In this paper, a keypoint-based method for copy-move forgery detection is proposed. First, feature points are detected in the image using the Förstner operator. Second, the algorithm extracts features with the MROGH descriptor and matches them. Finally, the affine transformation parameters are estimated using the RANSAC algorithm. Experimental results confirm that the proposed method effectively locates altered regions under geometric transformation (rotation and scaling).
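A minimal sketch of this detect-describe-match-verify pipeline, with ORB standing in for the Förstner operator and the MROGH descriptor (neither is available in OpenCV); the thresholds are illustrative:

```python
import cv2
import numpy as np

def detect_copy_move(path, min_shift=10, max_desc_dist=40):
    """Match an image's keypoints against themselves and fit an affine
    model with RANSAC; a strong inlier set suggests a duplicated region."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kps, des = cv2.ORB_create(nfeatures=2000).detectAndCompute(img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des, des, k=2)
    src, dst = [], []
    for pair in matches:
        if len(pair) < 2:
            continue
        m = pair[1]                    # pair[0] is the keypoint matched to itself
        p, q = kps[m.queryIdx].pt, kps[m.trainIdx].pt
        # keep close descriptors that are spatially well separated
        if m.distance < max_desc_dist and np.hypot(p[0]-q[0], p[1]-q[1]) > min_shift:
            src.append(p)
            dst.append(q)
    if len(src) < 3:
        return None                    # too few candidate duplicated points
    M, inliers = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                      method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return M, (int(inliers.sum()) if inliers is not None else 0)
```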
-
Katsuto NAKAJIMA, Azusa MAMA, Yuki MORIMOTO
Article type: LETTER
Subject area: Computer Graphics
2016 Volume E99.D Issue 9 Pages 2417-2421
Published: September 01, 2016
Released on J-STAGE: September 01, 2016
We propose a system named ETIS (Energy-based Tree Illustration System) for automatically generating tree illustrations with characteristic two-dimensional features such as exaggerated branch curves, leaves, and flowers. The growth behavior of the trees can be controlled by adjusting the energy, while the canopy shape and the regions to fill with leaves and flowers are controlled by hand-drawn guide lines.