IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E98.D, Issue 6
Displaying 1-17 of the 17 articles in this issue
Special Section on Formal Approach
  • Kozo OKANO
    2015 Volume E98.D Issue 6 Pages 1120
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
  • Masahiko SAKAI, Hidetomo NABESHIMA
    Article type: PAPER
    Subject area: Foundation
    2015 Volume E98.D Issue 6 Pages 1121-1127
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Pseudo-Boolean (PB) problems are Integer Linear Programming problems restricted to 0-1 variables. This paper discusses acceleration techniques for PB solvers that employ SAT solving of combined CNFs, each of which is produced from a PB constraint via a binary decision diagram (BDD). Specifically, we show (i) an efficient construction of a reduced ordered BDD (ROBDD) from a constraint in the band form l ≤ <Linear term> ≤ h, (ii) a CNF coding that produces two clauses for some nodes in an ROBDD obtained by (i), and (iii) incremental SAT solving of the binary/alternative search for minimizing the value of a given goal function. We implemented the proposed constructions and report experimental results.
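    As a minimal illustration of the problem setting only (not the paper's ROBDD construction), the following sketch enumerates the 0-1 solutions of a band-form PB constraint by brute force; the coefficients and bounds are hypothetical:

```python
from itertools import product

def pb_band_solutions(coeffs, l, h):
    """Enumerate all 0-1 assignments x with l <= sum(c_i * x_i) <= h.

    Brute force over 2^n assignments; illustrative only. The paper's
    ROBDD construction avoids this exponential enumeration."""
    sols = []
    for bits in product((0, 1), repeat=len(coeffs)):
        total = sum(c * b for c, b in zip(coeffs, bits))
        if l <= total <= h:
            sols.append(bits)
    return sols

# Band constraint 2 <= 2*x1 + 3*x2 + x3 <= 4
print(pb_band_solutions([2, 3, 1], 2, 4))
# → [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
```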
  • Mohd Anuaruddin BIN AHMADON, Shingo YAMAGUCHI
    Article type: PAPER
    Subject area: Petri net
    2015 Volume E98.D Issue 6 Pages 1128-1136
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    The number of states is a very important matter for the model-checking approach to Petri net analysis. We first gave a formal definition of the state number calculation problem: for a Petri net with an initial state (marking), how many states does it have? Next we showed that the problem cannot be solved in polynomial time for a popular subclass of Petri nets, known as free-choice workflow nets, if P≠NP. Then we proposed a polynomial-time algorithm to solve the problem by utilizing a representational bias called the process tree. We also showed the effectiveness of the algorithm through an application example.
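    The state number calculation problem can be illustrated by naive explicit-state enumeration, the exponential baseline that the paper's process-tree algorithm is designed to avoid; the toy workflow net below is hypothetical:

```python
from collections import deque

def count_states(places, transitions, initial):
    """Count reachable markings of a Petri net by explicit BFS.

    transitions: list of (consume, produce) pairs, each a dict
    mapping a place name to a token count. Explicit enumeration can
    blow up exponentially; this is illustration, not the paper's method."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        marking = dict(zip(places, queue.popleft()))
        for consume, produce in transitions:
            # A transition is enabled if every input place has enough tokens.
            if all(marking.get(p, 0) >= k for p, k in consume.items()):
                nxt = dict(marking)
                for p, k in consume.items():
                    nxt[p] -= k
                for p, k in produce.items():
                    nxt[p] = nxt.get(p, 0) + k
                t = tuple(nxt[p] for p in places)
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return len(seen)

# Toy workflow net: input place i, intermediate p, output o
places = ("i", "p", "o")
transitions = [
    ({"i": 1}, {"p": 1}),  # start task
    ({"p": 1}, {"o": 1}),  # finish task
]
print(count_states(places, transitions, (1, 0, 0)))  # → 3
```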
  • Dieu-Huong VU, Yuki CHIBA, Kenro YATAKE, Toshiaki AOKI
    Article type: PAPER
    Subject area: Formal Verification
    2015 Volume E98.D Issue 6 Pages 1137-1149
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Verification of a design against its requirement specification is important to prevent errors before an actual implementation is constructed. Existing work focuses on verification where the specifications are described using temporal logics or the same languages as those used to describe the designs. Our work considers cases where the specifications and the designs are described in different languages. To verify such cases, we propose a framework that checks whether a design conforms to its specification based on a simulation relation. Specifically, we define the semantics of both the specifications and the designs as labelled transition systems (LTSs). We adopt LTSs because they can represent the states of the system, the actions the system may perform, and the effects of those actions. Then, we check whether a design conforms to its specification based on the simulation relation of their LTSs. In this paper, we present our framework for the verification of reactive systems, for the case where the specifications and the designs are described in Event-B and Promela/Spin, respectively. We also present two case studies, with the results of several experiments, to illustrate the applicability of our framework to practical systems.
  • Hideto OGAWA, Makoto ICHII, Tomoyuki MYOJIN, Masaki CHIKAHISA, Yuichir ...
    Article type: PAPER
    Subject area: Model Checking
    2015 Volume E98.D Issue 6 Pages 1150-1160
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    A practical model-checking (MC) approach for fault analysis, which is one of the most cost-effective tasks in software development, is proposed. The proposed approach is based on a technique named “Program-oriented Modeling” (POM) for extracting a model from source code. The framework of model extraction by POM provides configurable abstraction based on user-defined transformation rules, and it supports trial-and-error model extraction. An environment for MC called POM/MC was also built. POM/MC analyzes C source code to extract Promela models for the SPIN model checker. It was applied to an industrial software system to evaluate the efficiency of POM's configurable model extraction for fault analysis. Moreover, it was shown that the proposed MC approach can reduce the effort involved in analyzing software faults by MC.
  • Yasunori ISHIHARA, Yasuhiro USHIROZAKO, Kengo MORI, Jun FURUKAWA
    Article type: LETTER
    Subject area: Database Security
    2015 Volume E98.D Issue 6 Pages 1161-1165
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    In this letter, we propose a secrecy criterion for outsourcing encrypted databases. In encrypted databases, encryption schemes that reveal some information are often used in order to manipulate the encrypted data efficiently. The proposed criterion is based on inference analysis for databases: we simulate the attacker's inference about specified secret information with and without the information revealed by the encrypted database. When the two inference results are the same, the secrecy of the specified information is preserved against outsourcing the encrypted database. We also show that the proposed criterion is decidable in a practical setting.
Regular Section
  • Nguyen Ngoc BINH, Pham Van HUONG, Bui Ngoc HAI
    Article type: PAPER
    Subject area: Computer System
    2015 Volume E98.D Issue 6 Pages 1166-1175
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Optimizing embedded software is a problem of both scientific and practical significance. It can be done in different phases of the software life cycle under different optimality conditions. Most studies of embedded software optimization address forward engineering, and they have not given an overall model for the optimization problem of embedded software covering both forward engineering and reverse engineering. Therefore, in this paper, we propose a new approach to embedded software optimization based on reverse engineering. First, we construct an overall model for embedded software optimization in both forward engineering and reverse engineering and present a process for embedded software optimization in reverse engineering. The main idea of this approach is to decompile executable code to source code, convert the source code to models, and optimize the embedded software at different levels such as source code and model; the optimized source code is then recompiled. To develop this approach, we present two optimization techniques: optimizing the power consumption of assembly programs based on instruction scheduling, and optimizing performance based on alternating equivalent expressions.
  • Kento AIDA, Omar ABDUL-RAHMAN, Eisaku SAKANE, Kazutaka MOTOYAMA
    Article type: PAPER
    Subject area: Computer System
    2015 Volume E98.D Issue 6 Pages 1176-1184
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Cloud computing is a widely used computing platform in business and academic communities. Performance is an important issue when a user runs an application in the cloud. The user may want to estimate the application-execution time beforehand to guarantee the application performance or to choose the most suitable cloud. Moreover, cloud system architects and designers need to understand application performance characteristics, such as the scalability or the utilization of cloud platforms, to improve performance. However, because application performance in clouds sometimes fluctuates, estimating it is difficult. In this paper, we discuss the performance fluctuation of Hadoop jobs in both a public cloud and a community cloud over one to three months. The experimental results reveal phenomena that cannot be seen without long-term experiments, as well as phenomena inherent in Hadoop. The results suggest better ways to estimate the performance of Hadoop applications in clouds. For example, we should be aware of application characteristics (CPU intensive or communication intensive), datacenter characteristics (busy or not), and time frame (time of day and day of the week) to estimate the performance fluctuation due to workload congestion in cloud platforms. Furthermore, we should be aware of performance degradation due to task re-execution in Hadoop applications.
  • Tetsuya KANDA, Takashi ISHIO, Katsuro INOUE
    Article type: PAPER
    Subject area: Software Engineering
    2015 Volume E98.D Issue 6 Pages 1185-1193
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Once a software product has been released, a large number of software products may be derived from the original single product. Management and maintenance of product variants are important, but they are rarely taken care of, because developers do not invest effort in future maintainability in the initial phase of software development. As a result, the history of the products is lost in typical cases, and in the worst case developers have only the source code of the products. In this paper, we approximate the evolution history of software products using only their source code. Our key idea is that two successive products are the most similar pair of products in the evolution history and have many similar source files. We conducted an experiment to compare the analysis result with the actual evolution history. The result shows that 78% (on average) of the edges in the extracted trees are consistent with the actual evolution history of the products.
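    A minimal sketch of the key idea, assuming Jaccard similarity over each product's source-file set and a maximum spanning tree as the extracted history (the paper's actual similarity measure and tree construction may differ); the three products are hypothetical:

```python
from itertools import combinations

def similarity(files_a, files_b):
    """Jaccard similarity over source-file sets; a stand-in for the
    paper's file-level similarity measure."""
    a, b = set(files_a), set(files_b)
    return len(a & b) / len(a | b)

def evolution_tree(products):
    """Approximate the evolution history as a maximum spanning tree:
    the most similar product pairs are assumed to be successive versions.
    Kruskal's algorithm on edges sorted by descending similarity."""
    edges = sorted(
        ((similarity(products[u], products[v]), u, v)
         for u, v in combinations(products, 2)),
        reverse=True)
    parent = {p: p for p in products}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for _, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # adding this edge does not form a cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

products = {
    "v1": ["a.c", "b.c"],
    "v2": ["a.c", "b.c", "c.c"],
    "v3": ["a.c", "c.c", "d.c"],
}
print(evolution_tree(products))  # → [('v1', 'v2'), ('v2', 'v3')]
```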
  • Yu KASHIMA, Takashi ISHIO, Shogo ETSUDA, Katsuro INOUE
    Article type: PAPER
    Subject area: Software Engineering
    2015 Volume E98.D Issue 6 Pages 1194-1205
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    To understand the behavior of a program, developers often need to read source code fragments in various modules. System-dependence-graph (SDG) based program slicing is a good candidate for supporting the investigation of data-flow paths among modules, as an SDG can show the data dependences of the program elements of interest. However, this technique has two problems. First, constructing an SDG requires heavyweight analysis, so SDGs are not suitable for daily use. Second, the results of SDG-based program slicing are difficult to visualize, as they contain many vertices. In this research, we propose variable data-flow graphs (VDFGs) for use in program slicing techniques. In contrast to an SDG, a VDFG is created by lightweight analysis because several approximations are used. Furthermore, we propose using the fractal value to visualize VDFG-based program slices in order to reduce graph complexity for visualization purposes. We performed three experiments that demonstrate the accuracy of VDFG program slicing with the fractal value, the size of a visualized program slice, and the effectiveness of our tool for source code reading.
  • Kenji MIKAWA, Ken TANAKA
    Article type: PAPER
    Subject area: Information Network
    2015 Volume E98.D Issue 6 Pages 1206-1212
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Packet classification is a fundamental task in the control of network traffic and protection from cyber threats. Most layer 3 and higher network devices have a packet classification capability that determines whether to permit or discard incoming packets by comparing their headers with a set of rules. Although linear search is an intuitive implementation of packet classification, it is very inefficient. Srinivasan et al. proposed a novel lookup scheme using a hierarchical trie instead of linear search, which realizes faster packet classification with time complexity proportional to rule length rather than to the number of rules. However, the hierarchical trie and its various improved algorithms can process only single-prefix rules. Since layer 4 and higher packet classification must deal with arbitrary bitmask rules, we propose a run-based trie that is based on the hierarchical trie but extended to handle arbitrary bitmask rules. Our proposed algorithm achieves O((dW)^2) query time and O(NdW) space complexity with N rules of length dW. The query time of our novel algorithm does not depend on the number of rules, which solves the latency problem caused by the increase in the number of rules in firewalls.
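    For contrast with the proposed trie, the naive linear search over arbitrary bitmask rules that the paper aims to replace can be sketched as follows (the 8-bit rule set is hypothetical):

```python
def matches(header, value, mask):
    """A bitmask rule matches when the masked header bits equal the
    masked rule value. Unlike a prefix rule, the set bits of the mask
    may appear at arbitrary positions."""
    return header & mask == value & mask

def classify(header, rules):
    """Linear search: the first matching rule wins. O(N) per packet
    for N rules, which is the inefficiency the run-based trie avoids."""
    for value, mask, action in rules:
        if matches(header, value, mask):
            return action
    return "default-deny"

rules = [
    (0b10100000, 0b11110000, "permit"),  # prefix-like rule: 1010****
    (0b00000001, 0b00000001, "deny"),    # arbitrary bitmask: lowest bit set
]
print(classify(0b10100111, rules))  # → permit
print(classify(0b00010001, rules))  # → deny
```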
  • Chengsong WANG, Xiaoguang MAO, Yan LEI, Peng ZHANG
    Article type: PAPER
    Subject area: Dependable Computing
    2015 Volume E98.D Issue 6 Pages 1213-1227
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    In recent years, hybrid typestate analysis has been proposed to eliminate unnecessary monitoring instrumentation for runtime monitors at compile time. Nop-shadows Analysis (NSA) is one of these hybrid typestate analyses. Before generating residual monitors, NSA performs a data-flow analysis, which is intra-procedurally flow-sensitive and partially context-sensitive, to improve runtime performance. Although NSA is precise, there are some cases on which it has little effect. In this paper, we propose three optimizations to further improve the precision of NSA. The first two optimizations filter out interfering states of objects when determining whether a monitoring instrumentation is necessary. The third optimization refines the inter-procedural data-flow analysis induced by method invocations. We have integrated our optimizations into Clara and conducted extensive experiments on the DaCapo benchmark. The experimental results demonstrate that our first two optimizations can remove unnecessary instrumentation beyond the original NSA in more than half of the cases, without significant overhead. In addition, all instrumentation can be removed in two cases, which implies that these programs satisfy the typestate property and are free of runtime monitoring. It came as a surprise to us that the third optimization was effective in only 8.7% of the cases. Finally, we analyze the experimental results and discuss the reasons why our optimizations fail to further eliminate unnecessary instrumentation in some special situations.
  • Xia MAO, Yiping PENG, Yuli XUE, Na LUO, Alberto ROVETTA
    Article type: PAPER
    Subject area: Human-computer Interaction
    2015 Volume E98.D Issue 6 Pages 1228-1237
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    In recent years, researchers have tried to create unhindered human-computer interaction by giving virtual agents human-like conversational skills. Predicting backchannel feedback for agent listeners has become a novel research hot spot. The main goal of this paper is to identify appropriate features and methods for backchannel prediction in Mandarin conversations. First, multimodal Mandarin conversations are recorded for the analysis of backchannel behaviors. In order to eliminate individual differences in the original face-to-face conversations, additional backchannels from different listeners are gathered. These data confirm that backchannels occurring in the speakers' pauses form a vast majority in Mandarin conversations. Both prosodic and visual features are used in backchannel prediction. Four types of models based on the speakers' pauses are built using support vector machine classifiers. An evaluation of the pause-based prediction models shows relatively high accuracy, considering the optional nature of backchannel feedback. Finally, the results of the subjective evaluation validate that conversations between humans and virtual listeners using backchannels predicted by the proposed models are more unhindered compared with other backchannel prediction methods.
  • Gee-Sern HSU, Hsiao-Chia PENG, Ding-Yu LIN, Chyi-Yeu LIN
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2015 Volume E98.D Issue 6 Pages 1238-1246
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    Face recognition across pose is generally tackled by either 2D-based or 3D-based approaches. The 2D-based approaches often require a training set from which the cross-pose multi-view relationship can be learned and applied for recognition. The 3D-based approaches mostly consist of 3D surface reconstruction of each gallery face, synthesis of 2D images of novel views using the reconstructed model, and matching of the synthesized images to the probes. The depth information provides crucial information for arbitrary poses, but more methods are yet to be developed. Extending a recent face reconstruction method that uses a single 3D reference model and a frontal registered face, this study focuses on using the reconstructed 3D face for recognition. The recognition performance varies with pose: the closer to frontal, the better. Several ways to improve the performance are attempted, including different numbers of fiducial points for alignment, multiple reference models considered in the reconstruction phase, and both frontal and profile poses available in the gallery. These attempts make this approach competitive with state-of-the-art methods.
  • Kaoru KATAYAMA, Takumi SATO
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2015 Volume E98.D Issue 6 Pages 1247-1250
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    We present a matching method for 3D CAD assembly models consisting of multiple components. Here we need to distinguish the layouts and the materials of the components in addition to their shapes. A set of the feature quantities of an assembly model is extracted using projections from various angles. We show the effectiveness of our method experimentally for 3D CAD assembly models.
  • Ji-Sang BAE, Jong-Ok KIM
    Article type: LETTER
    Subject area: Human-computer Interaction
    2015 Volume E98.D Issue 6 Pages 1251-1255
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    The Support Vector Machine (SVM) is one of the most widely used classifiers for categorizing observations. This classifier deterministically selects the class with the largest score as the classification output. In this letter, we propose a multiclass probabilistic classification method that reflects the degree of confidence. We apply the proposed method to age group classification and verify its performance.
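    The letter's exact calibration scheme is not given here; one common, illustrative way to turn deterministic per-class SVM decision scores into confidence values is a softmax, sketched below with hypothetical age-group scores:

```python
import math

def softmax(scores):
    """Convert per-class decision scores into a confidence distribution.

    Illustrative only; the letter's actual probabilistic method may
    differ. Subtracting the max score keeps exp() numerically stable."""
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# Hypothetical one-vs-rest SVM scores for three age groups
scores = {"child": -1.2, "adult": 0.8, "senior": 0.1}
probs = softmax(scores)
# The deterministic argmax still picks "adult", but now with a
# confidence value attached rather than a bare class label.
print(max(probs, key=probs.get))  # → adult
```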
  • Hang LI, Yafei ZHANG, Jiabao WANG, Yulong XU, Yang LI, Zhisong PAN
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2015 Volume E98.D Issue 6 Pages 1256-1259
    Published: 2015/06/01
    Released: 2015/06/01
    JOURNAL FREE ACCESS
    State-of-the-art background subtraction and foreground detection methods still face a variety of challenges, including illumination changes, camouflage, dynamic backgrounds, shadows, and intermittent object motion. Foreground detection via the robust principal component analysis (RPCA) method and its extensions based on low-rank and sparse structures has achieved good performance in many scenes of datasets such as Changedetection.net (CDnet); however, the conventional RPCA method does not handle shadows well. To address this issue, we propose an approach that models the observed video data as the sum of three parts: a low-rank background, sparse moving objects, and moving shadows. Next, we impose inequality constraints on the basic RPCA model and use an alternating direction method of multipliers (ADMM) framework combined with Rockafellar multipliers to derive a closed-form solution of the shadow matrix sub-problem. Our experiments demonstrate that our method works effectively on challenging datasets that contain shadows.