IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E100.D, Issue 6
Displaying 1-29 of 29 articles from this issue
Special Section on Formal Approach
  • Masaki NAKAMURA
    2017 Volume E100.D Issue 6 Pages 1157
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (79K)
  • Masashi MIZOGUCHI, Toshimitsu USHIO
    Article type: PAPER
    Subject area: Formal techniques
    2017 Volume E100.D Issue 6 Pages 1158-1165
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In this paper, we consider a networked control system in which bounded network delays and packet dropouts occur. The physical plant is abstracted by a transition system whose states are quantized states of the plant measured by a sensor, and a control specification for the abstracted plant is given by a transition system under the assumption that no network disturbance occurs. We then design a prediction-based controller that determines a control input by predicting the set of all feasible abstracted states at the time when the actuator receives the delayed input. It is proved that the prediction-based controller suppresses the effects of network delays and packet dropouts and that the controlled plant still achieves the specification in spite of them.
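
    The prediction step can be illustrated with a minimal sketch on a toy quantized plant (hypothetical states, inputs and transition relation; not the authors' construction): the set of feasible abstracted states is propagated through the transition system for every input that is still in flight, and the controller then chooses an input that keeps every state of this set within the specification.

        # Minimal sketch: propagate the set of feasible abstracted states of a toy
        # quantized plant through the inputs still delayed in the network.
        # States, inputs and the transition relation below are hypothetical.

        def successors(trans, states, u):
            """One-step successor set of `states` under input `u`."""
            nxt = set()
            for x in states:
                nxt |= trans.get((x, u), set())
            return nxt

        def predict_states(trans, x0, pending_inputs):
            """States the plant may be in once all delayed inputs are applied."""
            feasible = {x0}
            for u in pending_inputs:          # oldest pending input first
                feasible = successors(trans, feasible, u)
            return feasible

        # (state, input) -> set of possible next abstracted states
        trans = {
            ("q0", "a"): {"q1"}, ("q0", "b"): {"q0", "q1"},
            ("q1", "a"): {"q2"}, ("q1", "b"): {"q1"},
            ("q2", "a"): {"q2"}, ("q2", "b"): {"q0"},
        }
        print(predict_states(trans, "q0", ["a", "b"]))   # {'q1'}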

    Download PDF (457K)
  • Sasinee PRUEKPRASERT, Toshimitsu USHIO
    Article type: PAPER
    Subject area: Formal techniques
    2017 Volume E100.D Issue 6 Pages 1166-1171
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    This paper studies the supervisory control of partially observed quantitative discrete event systems (DESs) under the fixed-initial-credit energy objective. A quantitative DES is modeled by a weighted automaton whose event set is partitioned into a controllable event set and an uncontrollable event set. Partial observation is modeled by a mapping from each event and state of the DES to the corresponding masked event and masked state that are observed by a supervisor. The supervisor controls the DES by disabling or enabling any controllable event for the current state of the DES, based on the observed sequences of masked states and masked events. We model the control process as a two-player game played between the supervisor and the DES. The DES aims to execute the events so that its energy level drops below zero, while the supervisor aims to maintain the energy level above zero. We show that the proposed problem is reducible to finding a winning strategy in a turn-based reachability game.
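
    The final reduction targets a turn-based reachability game. As a rough illustration (a standard attractor computation on a hypothetical game graph, not the paper's specific construction), the set of states from which the reaching player can force a visit to the target can be computed iteratively:

        # Minimal sketch: attractor of the target set in a turn-based reachability
        # game.  Player 0 is the player trying to reach the target; player 1 tries
        # to avoid it.  The game graph below is hypothetical.

        def attractor(states, edges, owner, target):
            """states: list of game states; edges: dict state -> set of successors;
            owner: dict state -> 0 (reaching player) or 1 (opponent); target: set."""
            attr = set(target)
            changed = True
            while changed:
                changed = False
                for s in states:
                    if s in attr:
                        continue
                    succ = edges.get(s, set())
                    if owner[s] == 0 and succ & attr:              # player 0 can move into attr
                        attr.add(s); changed = True
                    elif owner[s] == 1 and succ and succ <= attr:  # player 1 cannot avoid attr
                        attr.add(s); changed = True
            return attr

        edges = {"s0": {"s1", "s2"}, "s1": {"s3"}, "s2": {"s2"}, "s3": set()}
        owner = {"s0": 0, "s1": 1, "s2": 1, "s3": 0}
        # s2 is excluded: the opponent can loop there forever and avoid the target.
        print(attractor(list(edges), edges, owner, {"s3"}))        # {'s0', 's1', 's3'}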

    Download PDF (586K)
  • Reona MINODA, Shin-ichi MINATO
    Article type: PAPER
    Subject area: Formal techniques
    2017 Volume E100.D Issue 6 Pages 1172-1181
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    This paper proposes a formal approach to verifying ubiquitous computing application scenarios. Such scenarios assume that there are many devices and physical things with computation and communication capabilities, called smart objects, which interact with each other. Each interaction among smart objects is called a “federation”, and these federations form a ubiquitous computing application scenario. Previously, Yuzuru Tanaka proposed “a proximity-based federation model among smart objects”, which is intended to liberate ubiquitous computing from stereotyped application scenarios. However, establishing a verification method for this model remains a challenge. This paper proposes a verification method for the model based on model checking. Model checking is one of the most popular formal verification approaches and is often used in various fields of industry. It is conducted on a Kripke structure, which is a formal state transition model. We introduce a context catalytic reaction network (CCRN) to handle the federation model as a formal state transition model. We also give an algorithm that transforms a CCRN into a Kripke structure, and we conduct a case study of ubiquitous computing scenario verification using this algorithm and model checking. Finally, we discuss the advantages of our formal approach by showing experimentally the difficulties of our target problem.
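
    As a small illustration of model checking over a Kripke structure (hypothetical states and labels; this is not the CCRN-to-Kripke translation of the paper), an invariant can be checked by searching for a reachable state that violates it:

        # Minimal sketch: breadth-first search over a Kripke structure for a state
        # labelled with a "bad" proposition; returns a counterexample trace or None.

        from collections import deque

        def violates_invariant(init, trans, label, bad_prop):
            queue = deque([[init]])
            visited = {init}
            while queue:
                path = queue.popleft()
                s = path[-1]
                if bad_prop in label[s]:
                    return path                       # counterexample trace
                for t in trans.get(s, ()):
                    if t not in visited:
                        visited.add(t)
                        queue.append(path + [t])
            return None

        trans = {"s0": ["s1"], "s1": ["s2", "s0"], "s2": ["s2"]}
        label = {"s0": {"idle"}, "s1": {"federating"}, "s2": {"conflict"}}
        print(violates_invariant("s0", trans, label, "conflict"))  # ['s0', 's1', 's2']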

    Download PDF (1606K)
  • Hiroshi IWATA, Nanami KATAYAMA, Ken'ichi YAMAGUCHI
    Article type: PAPER
    Subject area: Formal techniques
    2017 Volume E100.D Issue 6 Pages 1182-1189
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In accordance with Moore's law, recent design issues include shortening time-to-market and detecting delay faults. Several studies based on formal techniques have addressed the former issue: using equivalence checking, it is possible to decide whether large circuits are equivalent within a practical time frame. With respect to the latter issue, it is difficult to achieve 100% fault efficiency even for transition faults in full-scan designs. This study proposes a redundant transition fault identification method using equivalence checking. The main idea of the proposed algorithm is to combine two known techniques: 1. modeling a transition fault as a stuck-at fault with temporal expansion, and 2. detecting a stuck-at fault using equivalence checking tools. The experimental results indicate that the proposed redundant-fault identification method using a formal approach achieved 100% fault efficiency for all benchmark circuits in practical time, even when a commercial ATPG tool was unable to achieve 100% fault efficiency for several circuits.
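
    The underlying idea can be sketched on a toy combinational circuit (a minimal illustration, not the paper's tool flow; for a transition fault, two time-expanded copies of the circuit would be chained before the same check): a fault is redundant exactly when the faulty circuit is equivalent to the fault-free one.

        # Minimal sketch: a stuck-at fault is redundant if no input pattern
        # distinguishes the faulty circuit from the fault-free one.  The toy
        # circuit and the injected fault below are hypothetical.

        from itertools import product

        def good(a, b):
            return a or (a and b)          # reference circuit: out = a OR (a AND b)

        def faulty(a, b):
            branch = False                 # stuck-at-0 injected on the AND branch
            return a or branch

        def equivalent(f, g, n_inputs):
            return all(f(*x) == g(*x) for x in product([False, True], repeat=n_inputs))

        # No input distinguishes the two circuits, so the fault is redundant.
        print(equivalent(good, faulty, 2))   # True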

    Download PDF (668K)
  • Somsak VANIT-ANUNCHAI
    Article type: PAPER
    Subject area: Formal techniques
    2017 Volume E100.D Issue 6 Pages 1190-1199
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    This paper presents a formal analysis of the feature negotiation and connection management procedures of the Datagram Congestion Control Protocol (DCCP). Using state space analysis, we discover an error in the DCCP specification that results in the two ends of a connection having different agreed feature values. The error occurs when the client ignores an unexpected Response packet in the OPEN state that carries a valid Confirm option. This provides evidence that the connection management and feature negotiation procedures interact. We also propose solutions to rectify the problem.

    Download PDF (1098K)
  • Toshiyuki MIYAMOTO
    Article type: PAPER
    Subject area: Formal tools
    2017 Volume E100.D Issue 6 Pages 1200-1209
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    For a system based on a service-oriented architecture, the problem of synthesizing a concrete model, i.e., a behavioral model, for each service composing the system from an abstract specification, referred to as a choreography, is known as the choreography realization problem. In this paper, we assume that the choreography is given by an acyclic relation. We have already shown that the condition on the behavioral model is given by lower and upper bounds of acyclic relations. This increases the degree of freedom for behavioral models and makes it possible to develop algorithms that synthesize models intelligible to users. In this paper, we introduce several metrics for the intelligibility of state machines and study an algorithm for synthesizing Pareto-efficient state machines.
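
    Pareto efficiency over the intelligibility metrics can be illustrated with a minimal sketch (hypothetical metric values to be minimized; the paper's metrics and synthesis algorithm are not reproduced here):

        # Minimal sketch: keep only the candidates that are not dominated by any
        # other candidate on all metrics (smaller metric values are better).

        def pareto_front(candidates):
            """candidates: dict name -> tuple of metric values (smaller is better)."""
            def dominates(a, b):
                return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
            return [n for n, m in candidates.items()
                    if not any(dominates(o, m) for k, o in candidates.items() if k != n)]

        machines = {"M1": (4, 10), "M2": (6, 7), "M3": (5, 12), "M4": (8, 8)}
        print(pareto_front(machines))    # ['M1', 'M2']: nothing beats them on both metrics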

    Download PDF (1292K)
  • Tomohiro ODA, Keijiro ARAKI, Peter GORM LARSEN
    Article type: PAPER
    Subject area: Formal tools
    2017 Volume E100.D Issue 6 Pages 1210-1217
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    The software development process is front-loaded when formal specification is deployed, and as a consequence more problems are identified and solved at an earlier point in time. This places extra importance on the quality and efficiency of the different formal specification tasks. We use the term “exploratory modeling” to denote the modeling conducted during the early stages of software development, before the requirements are clearly understood. We believe that tools supporting construction of specifications that is both rigorous and flexible are helpful in such exploratory modeling phases. This paper presents a web-based IDE named VDMPad that demonstrates the concept of exploratory modeling. VDMPad has been evaluated by experienced professional VDM engineers from industry, and the positive evaluations from these industrial users are presented. It is believed that flexible and rigorous tools for exploratory modeling will help improve the productivity of industrial software development by making the formal specification phase more efficient.

    Download PDF (933K)
Regular Section
  • Wenhao FU, Huiqun YU, Guisheng FAN, Xiang JI
    Article type: PAPER
    Subject area: Software Engineering
    2017 Volume E100.D Issue 6 Pages 1218-1230
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Regression testing is essential for assuring the quality of a software product. Because rerunning all test cases in regression testing may be impractical under limited resources, test case prioritization is a feasible solution that optimizes regression testing by reordering test cases for the current version under test. In this paper, we propose a novel test case prioritization approach that combines a clustering algorithm and a scheduling algorithm to improve the effectiveness of regression testing. The clustering algorithm merges test cases with the same or similar properties into clusters, and the scheduling algorithm allocates an execution priority to each test case by combining fault detection rates with the waiting time of test cases in the candidate set. We have conducted several experiments on 12 C programs to validate the effectiveness of our proposed approach. Experimental results show that our approach is more effective than some well-studied test case prioritization techniques in terms of average percentage of faults detected (APFD).
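
    A minimal sketch of the overall idea is shown below (a toy grouping step stands in for a real clustering algorithm, and the scheduler's priority simply adds the fault detection rate and the waiting time; the paper's concrete algorithms may differ):

        # Minimal sketch: group similar test cases, then repeatedly pick, across
        # groups, the candidate with the largest (detection rate + waiting time).
        # Test cases, features and rates below are hypothetical.

        def prioritize(test_cases, rate, n_clusters=2):
            # 1. toy grouping by rounded feature sum (stand-in for real clustering)
            clusters = {}
            for t, feat in test_cases.items():
                clusters.setdefault(round(sum(feat)) % n_clusters, []).append(t)

            # 2. waiting-time-aware scheduling across the clusters
            order, waiting = [], {t: 0 for t in test_cases}
            candidates = {c: sorted(ts, key=rate.get, reverse=True)
                          for c, ts in clusters.items()}
            while any(candidates.values()):
                best = max((ts[0] for ts in candidates.values() if ts),
                           key=lambda t: rate[t] + waiting[t])
                order.append(best)
                for ts in candidates.values():
                    if ts and ts[0] == best:
                        ts.pop(0)
                for t in waiting:
                    if t not in order:
                        waiting[t] += 1
            return order

        tests = {"t1": (0.1, 0.2), "t2": (0.6, 0.8), "t3": (0.2, 0.1)}
        rates = {"t1": 0.5, "t2": 0.9, "t3": 0.3}
        print(prioritize(tests, rates))     # ['t2', 't1', 't3']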

    Download PDF (1852K)
  • Sailan WANG, Zhenzhi YANG, Jin YANG, Hongjun WANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1231-1241
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In general, semi-supervised clustering can outperform unsupervised clustering. Since 2001, pairwise constraints have been an important paradigm for semi-supervised clustering. In this paper, we show that pairwise constraints can adversely affect the performance of clustering in certain situations and analyze the reasons for this in detail. To overcome these disadvantages, we first introduce exemplars constraints (ECs). Based on these constraints, we then describe a semi-supervised clustering framework and design an exemplars-constraints expectation-maximization algorithm. Finally, standard datasets are selected for experiments, and the experimental results show that the exemplars constraints outperform the corresponding unsupervised clustering and pairwise-constraints-based semi-supervised algorithms.
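
    As a rough, simplified illustration of clustering guided by exemplars (a hard-assignment k-means-style variant rather than the paper's expectation-maximization algorithm; the data and exemplars below are hypothetical), a few labelled exemplars can seed the clusters and stay pinned to them:

        # Minimal sketch: exemplar-seeded, exemplar-constrained k-means-style clustering.

        import numpy as np

        def exemplar_constrained_kmeans(X, exemplars, iters=20):
            """X: (n, d) data; exemplars: dict cluster_id -> list of row indices
            that must stay in that cluster."""
            k = len(exemplars)
            centers = np.array([X[idx].mean(axis=0) for idx in exemplars.values()])
            labels = np.zeros(len(X), dtype=int)
            for _ in range(iters):
                d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(axis=1)                      # E-step: assign points
                for c, idx in enumerate(exemplars.values()):
                    labels[list(idx)] = c                      # enforce the constraints
                for c in range(k):
                    centers[c] = X[labels == c].mean(axis=0)   # M-step: update centers
            return labels

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
        print(exemplar_constrained_kmeans(X, {0: [0, 1], 1: [20, 21]}))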

    Download PDF (1233K)
  • Yu ZHAO, Sheng GAO, Patrick GALLINARI, Jun GUO
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1242-1250
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    The information overload problem inevitably arises with the increasing amount of data available on e-commerce websites. Most existing approaches recommend personally significant and interesting items to users on e-commerce websites by estimating the unknown rating that a user would give to an unrated item, i.e., rating prediction. However, these approaches are unable to perform user prediction and item prediction, since they treat ratings merely as real numbers and learn nothing about rating embeddings in the training process. In this paper, motivated by relation prediction in multi-relational graphs, we propose a novel embedding model, named RPEM, that solves the tasks of rating prediction, user prediction and item prediction simultaneously for recommender systems by learning latent semantic representations of users, items and ratings. In addition, we apply the proposed model to cross-domain recommendation, which enables recommendation generation in multiple domains. Empirical comparison on several real datasets validates the effectiveness of the proposed model. The data is available at https://github.com/yuzhaour/da.
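
    The idea of treating ratings as relations between user and item embeddings can be sketched in the spirit of translation-based models such as TransE (untrained random embeddings and hypothetical names; the actual RPEM model and its training objective may differ):

        # Minimal sketch: score a (user, rating, item) triple by how well the
        # rating embedding "translates" the user embedding onto the item embedding.

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 8
        users   = {u: rng.normal(size=dim) for u in ["u1", "u2"]}
        items   = {i: rng.normal(size=dim) for i in ["i1", "i2"]}
        ratings = {r: rng.normal(size=dim) for r in [1, 2, 3, 4, 5]}

        def score(u, r, i):
            """Smaller distance = more plausible triple (user, rating, item)."""
            return np.linalg.norm(users[u] + ratings[r] - items[i])

        def predict_rating(u, i):
            return min(ratings, key=lambda r: score(u, r, i))

        def predict_item(u, r):
            return min(items, key=lambda i: score(u, r, i))

        print(predict_rating("u1", "i1"), predict_item("u1", 5))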

    Download PDF (1890K)
  • Ting WANG, Tiansheng XU, Zheng TANG, Yuki TODO
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1251-1261
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Schema-level Linked Open Data (LOD) and knowledge described in Chinese are an important part of the LOD project. Previous work generally ignored word-order sensitivity and polysemy in Chinese, or could not deal with the out-of-vocabulary (OOV) mapping task, and there is still no efficient system for large-scale Chinese ontology mapping. To solve this problem, this study proposes a novel TongYiCiCiLin (TYCCL) and sequence-alignment-based Chinese ontology mapping model, called TongSACOM, to evaluate Chinese concept similarity in an LOD environment. Firstly, an improved TYCCL-based similarity algorithm is proposed to compute the similarity between atomic Chinese concepts that are included in TYCCL. Secondly, a combined algorithm based on global sequence alignment and the improved TYCCL is proposed to evaluate the similarity between Chinese OOV concepts. Finally, TongSACOM is compared with other typical similarity computing algorithms, and the results show that it has higher overall performance and usability. This study may have important practical significance for promoting Chinese knowledge sharing, reuse and interoperation, and it can be widely applied in related areas of Chinese information processing.
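
    The global sequence alignment component can be illustrated with a standard Needleman-Wunsch score between the character sequences of two concept names (a minimal sketch with illustrative terms; the TYCCL-based word similarity that the paper combines with the alignment is omitted):

        # Minimal sketch: Needleman-Wunsch global alignment score between two strings.

        def global_align(a, b, match=1, mismatch=-1, gap=-1):
            n, m = len(a), len(b)
            dp = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                dp[i][0] = i * gap
            for j in range(1, m + 1):
                dp[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    dp[i][j] = max(dp[i - 1][j - 1] + s,
                                   dp[i - 1][j] + gap,
                                   dp[i][j - 1] + gap)
            return dp[n][m]

        print(global_align("智能手机", "智能电话"))   # 0: two matches, two mismatches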

    Download PDF (2492K)
  • Liangliang ZHANG, Longqi YANG, Yong GONG, Zhisong PAN, Yanyan ZHANG, G ...
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1262-1270
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In the field of multi-view social networks, we propose a flexible Nonnegative Matrix Factorization (NMF) based framework that integrates multi-view relation data and feature data for community discovery. Benefiting from a relaxed pairwise regularization and a novel orthogonal regularization, it outperforms the state-of-the-art algorithms on five real-world datasets in terms of accuracy and NMI.
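
    For reference, plain NMF with multiplicative updates looks as follows (a minimal sketch; the paper's framework adds multi-view fusion and the pairwise and orthogonal regularizers, which are not reproduced here):

        # Minimal sketch: Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0.

        import numpy as np

        def nmf(X, k, iters=200, eps=1e-9):
            n, m = X.shape
            rng = np.random.default_rng(0)
            W = rng.random((n, k)); H = rng.random((k, m))
            for _ in range(iters):
                H *= (W.T @ X) / (W.T @ W @ H + eps)
                W *= (X @ H.T) / (W @ H @ H.T + eps)
            return W, H

        X = np.random.default_rng(1).random((6, 5))
        W, H = nmf(X, 2)
        communities = W.argmax(axis=1)     # assign each node to its strongest factor
        print(communities, np.linalg.norm(X - W @ H))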

    Download PDF (777K)
  • Makio ISHIHARA, Yukio ISHIHARA
    Article type: PAPER
    Subject area: Human-computer Interaction
    2017 Volume E100.D Issue 6 Pages 1271-1279
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    This paper discusses the use of a common computer mouse as a pointing interface for tabletop displays. When a common computer mouse is used with a tabletop display, there may be an angular distance between the screen coordinates and the mouse control coordinates. To align these coordinate systems, this paper introduces a screen coordinate calibration technique using a shadow cursor. A shadow cursor is a mouse cursor manipulated without any visual feedback, and it plays an important role in obtaining the angular distance between the two coordinate systems. It enables the user to perform a simple mouse manipulation so that screen coordinate calibration is completed in less than a second.
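
    A minimal sketch of the calibration idea, under the assumption that the user drags the invisible shadow cursor in a known direction (say, straight "up" on the table): the angle between the intended direction and the reported mouse displacement gives the rotation between the two coordinate systems.

        # Minimal sketch: recover the angular offset from one calibration stroke
        # and use it to rotate subsequent mouse displacements onto screen axes.

        import math

        def angular_offset(dx, dy, intended=(0.0, 1.0)):
            """Angle (radians) to rotate mouse coordinates onto screen coordinates."""
            measured = math.atan2(dy, dx)
            target = math.atan2(intended[1], intended[0])
            return target - measured

        def rotate(x, y, theta):
            return (x * math.cos(theta) - y * math.sin(theta),
                    x * math.sin(theta) + y * math.cos(theta))

        theta = angular_offset(1.0, 1.0)          # stroke reported at 45 degrees
        print(math.degrees(theta))                 # 45.0
        print(rotate(1.0, 1.0, theta))             # about (0.0, 1.414): now points "up"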

    Download PDF (1604K)
  • Ryo NAGATA, Edward WHITTAKER
    Article type: PAPER
    Subject area: Educational Technology
    2017 Volume E100.D Issue 6 Pages 1280-1289
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    This paper presents a novel framework called error case frames for correcting preposition errors. Error case frames are case frames specially designed for describing and correcting preposition errors. Their most distinctive advantage is that they can correct errors together with feedback messages explaining why the preposition is erroneous. This paper proposes a method for generating them automatically by comparing learner and native corpora. Experiments show that (i) automatically generated error case frames achieve performance comparable to that of previous methods; (ii) error case frames are intuitively interpretable and can be manually modified for improvement; (iii) the feedback messages provided by error case frames are effective in language learning assistance. Considering these advantages and the fact that it has been difficult to provide feedback messages with automatically generated rules, error case frames are likely to become one of the major approaches to preposition error correction.

    Download PDF (1329K)
  • Zhiqiang HU, Dongju LI, Tsuyoshi ISSHIKI, Hiroaki KUNIEDA
    Article type: PAPER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 6 Pages 1290-1302
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Narrow swipe sensors have been widely used in embedded systems such as smartphones. However, the captured images are much smaller than those obtained by traditional area sensors, so the limited template coverage is the performance bottleneck of such systems. Aiming to increase the geometric coverage of templates, a novel fingerprint template feature synthesis scheme is proposed in the present study. This method synthesizes multiple input fingerprints into a wider template by clustering minutiae descriptors. The proposed method consists of two modules. Firstly, a user-behavior-based Registration Pattern Inspection (RPI) algorithm is proposed to select qualified candidates. Secondly, an iterative clustering algorithm, Modified Fuzzy C-Means (MFCM), is proposed to process the large number of minutiae descriptors and generate the final template. Experiments conducted on a swipe fingerprint database validate that this method significantly reduces the FRR (False Reject Rate) and EER (Equal Error Rate).

    Download PDF (3657K)
  • Yong DING, Xinyu ZHAO, Zhi ZHANG, Hang DAI
    Article type: PAPER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 6 Pages 1303-1315
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Image quality assessment (IQA) plays an important role in quality monitoring, evaluation and optimization for image processing systems. However, current quality-aware feature extraction methods for IQA can hardly balance accuracy and complexity. This paper introduces multi-order local description into image quality assessment for feature extraction. The first-order structure derivative and higher-order discriminative information are integrated into a local pattern representation to serve as the quality-aware features. Joint distributions of the local pattern representation are then modeled by a spatially enhanced histogram. Finally, the image quality degradation is estimated by quantifying the divergence between such distributions of the reference image and those of the distorted image. Experimental results demonstrate that the proposed method outperforms other state-of-the-art approaches in terms of not only accuracy, which is consistent with human subjective evaluation, but also robustness and stability across different distortion types and various public databases. It provides a promising choice for image quality assessment development.
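
    A much simplified sketch in the same spirit (a plain local-binary-pattern histogram compared with a chi-square distance, not the paper's multi-order descriptor or spatially enhanced histogram):

        # Minimal sketch: LBP histograms of a reference and a distorted image,
        # compared with a chi-square distance as a crude degradation score.

        import numpy as np

        def lbp_histogram(img):
            """8-neighbour LBP codes of interior pixels, as a normalized histogram."""
            center = img[1:-1, 1:-1]
            codes = np.zeros(center.shape, dtype=np.uint8)
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dy, dx) in enumerate(offsets):
                neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                codes |= ((neigh >= center).astype(np.uint8) << bit)
            hist = np.bincount(codes.ravel(), minlength=256).astype(float)
            return hist / hist.sum()

        def chi_square(h1, h2, eps=1e-12):
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        dist = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255).astype(np.uint8)
        print(chi_square(lbp_histogram(ref), lbp_histogram(dist)))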

    Download PDF (1680K)
  • Dongchen ZHU, Ziran XING, Jiamao LI, Yuzhang GU, Xiaolin ZHANG
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1316-1324
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Effective indoor localization is an essential part of VR (Virtual Reality) and AR (Augmented Reality) technologies. Tracking with an RGB-D camera has become popular since it captures relatively accurate color and depth information at the same time. With the recovered colorful point cloud, the traditional ICP (Iterative Closest Point) algorithm can be used to estimate the camera poses and reconstruct the scene. However, many works focus on improving ICP for general scenes and ignore the practical value of effective initialization under specific conditions, such as indoor scenes for VR or AR. In this work, a novel indoor-prior-based initialization method is proposed to estimate the initial motion for the ICP algorithm. We first introduce the generation process of the colorful point cloud and then describe the camera rotation initialization method for ICP in detail. A fast region-growing-based method is used to detect planes in an indoor frame. After merging small planes and picking the two largest non-parallel ones in each frame, a novel rotation estimation method can be applied to adjacent frames. We evaluate the effectiveness of our method by qualitative observation of the reconstruction results, because ground truth is unavailable. Experimental results show that our method not only fixes failure cases but also reduces the number of ICP iterations significantly.
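
    The rotation between adjacent frames can be estimated from two detected plane normals, for example with the Kabsch/SVD method (a minimal sketch with hypothetical normals; plane detection and the paper's exact estimator are omitted):

        # Minimal sketch: best-fit rotation aligning corresponding plane normals.

        import numpy as np

        def rotation_from_normals(src, dst):
            """src, dst: (k, 3) arrays of corresponding unit normals (k >= 2)."""
            H = src.T @ dst
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            D = np.diag([1.0, 1.0, d])               # guard against reflections
            return Vt.T @ D @ U.T

        # two unparallel plane normals (e.g. floor and wall) seen in frame A ...
        src = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
        # ... and the same planes seen in frame B, rotated 30 degrees about z
        theta = np.radians(30)
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
        dst = src @ Rz.T
        print(np.allclose(rotation_from_normals(src, dst), Rz))   # True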

    Download PDF (2294K)
  • Jieyan LIU, Ao MA, Jingjing LI, Ke LU
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1325-1338
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Subspace representation models form an important subset of visual tracking algorithms. Compared with models operating on the original data space, subspace representation models can effectively reduce the computational complexity and filter out high-dimensional noise. However, in some complicated situations, e.g., dramatic illumination changes, large areas of occlusion and abrupt object drifting, traditional subspace representation models may fail to handle the visual tracking task. In this paper, we propose a novel subspace representation algorithm for robust visual tracking that uses low-rank representation with graph constraints (LRGC). Low-rank representation is well known for its ability to handle corrupted samples, and graph constraints are flexible for characterizing sample relationships. We aim to exploit the benefits of both low-rank representation and graph constraints and deploy them to handle challenging visual tracking problems. Specifically, we first propose a novel graph structure that characterizes the relationship of the target object in different observation states. We then learn a subspace by jointly optimizing low-rank representation and graph embedding in a unified framework. Finally, the learned subspace is embedded into a Bayesian inference framework using the dynamical model and the observation model. Experiments on several video benchmarks demonstrate that our algorithm performs better than traditional ones, especially in dynamically changing and drifting situations.

    Download PDF (3236K)
  • Hironori TAKIMOTO, Syuhei HITOMI, Hitoshi YAMAUCHI, Mitsuyoshi KISHIHA ...
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1339-1349
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    It is estimated that 80% of the information entering the human brain is obtained through the eyes. It is therefore commonly believed that drawing human attention to particular objects is effective in assisting human activities. In this paper, we propose a novel image modification method for guiding user attention to specific regions of interest, using a novel saliency map model based on spatial frequency components. We modify the frequency components on the basis of the obtained saliency map so as to decrease the visual saliency outside the specified region. By applying our modification method to an image, human attention can be guided to the specified region because the saliency inside the region becomes higher than that outside it. Using gaze measurements, we show that the proposed saliency map matches the distribution of actual human attention well. Moreover, we evaluate the effectiveness of the proposed modification method using an eye tracking system.

    Download PDF (2231K)
  • Tadashi MATSUO, Nobutaka SHIMADA
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1350-1359
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Appearance-based generic object recognition is a challenging problem because all possible appearances of objects cannot be registered, especially as new objects are produced every day. The functions of objects, however, have a comparatively small number of prototypes, so function-based classification of new objects could be a valuable tool for generic object recognition. Object functions are closely related to hand-object interactions during the handling of a functional object, i.e., how the hand approaches the object, which parts of the object contact the hand, and the shape of the hand during the interaction. Hand-object interactions are therefore helpful for modeling object functions. However, it is difficult to assign discrete labels to interactions because object shapes and grasping hand postures intrinsically have continuous variations. To describe these interactions, we propose an interaction descriptor space, which is acquired from unlabeled appearances of human hand-object interactions. Using interaction descriptors, we can numerically describe the relation between an object's appearance and its possible interactions with the hand. The model infers the quantitative state of the interaction from the object image alone, and it also identifies the parts of objects designed for hand interactions, such as grips and handles. We demonstrate that the proposed method can generate, without supervision, interaction descriptors that form clusters corresponding to interaction types, and that the model can infer possible hand-object interactions.

    Download PDF (1852K)
  • Qingfu FAN, Lei ZHANG, Wen LI
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2017 Volume E100.D Issue 6 Pages 1360-1363
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Existing noise inference algorithms neglect the smoothness characteristics of noise data, which makes noise inference slow. To address this problem, we present a noise inference algorithm based on fast context-aware tensor decomposition (F-CATD). F-CATD improves the noise inference algorithm based on context-aware tensor decomposition by combining a smoothness constraint with context-aware tensor decomposition to speed up the decomposition process. Experiments with New York City 311 noise data show that the proposed method accelerates noise inference: compared with the existing method, F-CATD reduces the time consumption by a factor of 4-5 while keeping the results effective.

    Download PDF (208K)
  • Masahiro SUZUKI, Piyarat SILAPASUPHAKORNWONG, Youichi TAKASHIMA, Hidey ...
    Article type: LETTER
    Subject area: Information Network
    2017 Volume E100.D Issue 6 Pages 1364-1367
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    We evaluated a technique for protecting the copyright of digital data for 3-D printing. To embed copyright information, the inside of a 3-D printed object is constructed from fine domains whose physical characteristics differ from those of the object's main body surrounding them; to read out the embedded information, these fine domains inside the object are detected using nondestructive inspection such as X-ray photography or thermography. In the evaluation, copyright information embedded inside the 3-D printed object was expressed by the depth of fine cavities inside the object, and X-ray photography was used to read it out. The test sample was a cuboid 46mm wide, 42mm long, and 20mm deep, and the cavities were 2mm wide and 2mm long. The difference in the depths of the cavities appeared as a difference in luminance in the X-ray photographs, and 21 levels of depth could be distinguished on the basis of the luminance difference. These results indicate that, under the conditions of the experiment, each cavity expressed 4 to 5 bits of information with its depth (log2 21 ≈ 4.4 bits). We demonstrated that the proposed technique has the potential to embed a volume of information sufficient for expressing copyright information by using the depths of cavities.

    Download PDF (792K)
  • Jing LIU, Yuan WANG, Pei Dai XIE, Yong Jun WANG
    Article type: LETTER
    Subject area: Information Network
    2017 Volume E100.D Issue 6 Pages 1368-1371
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Malware phylogeny refers to inferring the evolutionary relationships among instances of a malware family and plays an important role in malware forensics. Previous works mainly focused on tree-based models. However, trees cannot represent reticulate events, such as inheriting code fragments from different parents, which are common in variant generation. Therefore, phylogenetic networks have been put forward as a more accurate and general model. In this paper, we propose a novel malware phylogenetic network construction method based on splits graphs, taking advantage of the one-to-one correspondence between reticulate events and netted components in a splits graph. We evaluate our algorithm on three malware families and two benign families whose ground truth is known and compare it with competing algorithms. Experiments demonstrate that our method achieves a higher mean accuracy of 64.8%.

    Download PDF (408K)
  • Dengchao HE, Hongjun ZHANG, Wenning HAO, Rui ZHANG, Huan HAO
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1372-1375
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    The purpose of document modeling is to learn accurate low-dimensional semantic representations of text for natural language processing tasks. In this paper, we propose a novel attention-based hybrid neural network model that extracts semantic features of text hierarchically. Concretely, our model adopts a bidirectional LSTM module with word-level attention to extract semantic information for each sentence in the text and subsequently learns high-level features via a dynamic convolutional neural network module. Experimental results demonstrate that our proposed approach is effective and achieves better performance than conventional methods.
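
    Word-level attention pooling over a sentence can be sketched with NumPy alone (a minimal illustration; the paper's bidirectional LSTM and dynamic convolution modules are not reproduced here):

        # Minimal sketch: attention-weighted pooling of per-word hidden states.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def attention_pool(word_states, context):
            """word_states: (T, d) hidden states; context: (d,) learned query vector.
            Returns a (d,) sentence vector that weights informative words more."""
            scores = word_states @ context          # one relevance score per word
            alpha = softmax(scores)                 # attention weights sum to 1
            return alpha @ word_states              # weighted average of the states

        rng = np.random.default_rng(0)
        H = rng.normal(size=(6, 16))                # 6 words, 16-dim hidden states
        u = rng.normal(size=16)                     # hypothetical context vector
        print(attention_pool(H, u).shape)           # (16,)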

    Download PDF (433K)
  • Xiaoguang TU, Feng YANG, Mei XIE, Zheng MA
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 6 Pages 1376-1379
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Numerous methods have been developed to handle lighting variations in the preprocessing step of face recognition. However, most of them use only the high-frequency information (edges, lines, corners, etc.) for recognition, since pixels lying in these areas have higher local variance values and are thus insensitive to illumination variations. In this case, low-frequency information may be discarded and some features that are helpful for recognition may be ignored. In this paper, we present a new and efficient method for illumination normalization using an energy minimization framework. The proposed method aims to remove the illumination field of the observed face images while simultaneously preserving the intrinsic facial features. The normalized face image and the illumination field are obtained by a reciprocal iteration scheme. Experiments on the CMU-PIE and Extended Yale B databases show that the proposed method preserves very good visual quality even for images with deep shadows and high-brightness regions, and obtains promising illumination normalization results for better face recognition performance.

    Download PDF (3176K)
  • YingJiang WU, BenYong LIU
    Article type: LETTER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 6 Pages 1380-1383
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    Recently, a high-dimensional classification framework has been proposed that introduces spatial structure information into the classical single-kernel support vector machine optimization scheme for brain image analysis. However, during the construction of the spatial kernel in this framework, a huge adjacency matrix is adopted to determine the adjacency relation between each pair of voxels, which leads to very high computational complexity in the spatial kernel calculation. The method is improved in this letter by a new construction of the tensorial kernel, wherein a third-order tensor is adopted to preserve the adjacency relation so that calculation of the huge matrix is avoided and the computational complexity is significantly reduced. The improvement is verified by experimental results on the classification of Alzheimer patients and cognitively normal controls.

    Download PDF (357K)
  • Daehun KIM, Bonhwa KU, David K. HAN, Hanseok KO
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1384-1387
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In this paper, an algorithm is proposed for license plate recognition (LPR) in video traffic surveillance applications. In an LPR system, the primary steps are license plate detection and character segmentation. In practice, however, false alarms often occur due to vehicle parts that are similar in appearance to a license plate, and detection rates degrade due to local illumination changes. To alleviate these difficulties, the proposed license plate segmentation employs adaptive binarization using a superpixel-based local contrast measurement. From the binarization result, we apply a set of rules to a sequence of characters in a sub-image region to determine whether it is part of a license plate. This process is effective in reducing false alarms and improving detection rates. Our experimental results demonstrate a significant improvement over conventional methods.

    Download PDF (735K)
  • Zhaoyang GUO, Xin'an WANG, Bo WANG, Zheng XIE
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 6 Pages 1388-1392
    Published: June 01, 2017
    Released on J-STAGE: June 01, 2017
    JOURNAL FREE ACCESS

    In the field of action recognition, features based on Spatio-Temporal Interest Points (STIPs) have shown high efficiency and robustness. However, most state-of-the-art work on describing STIPs focuses on 2-dimensional (2D) images, ignoring information in the 3D spatio-temporal space. In addition, compact representation of the descriptors should be considered because of the costs in storage and computational time. In this paper, a novel local descriptor named 3D Gradient LBP is proposed, which extends the traditional Local Binary Patterns (LBP) descriptor into the 3D spatio-temporal space. The proposed descriptor takes advantage of the neighbourhood information of cuboids in three dimensions, which accounts for its excellent ability to describe the distribution of grey levels. Experiments on three challenging datasets (KTH, Weizmann and UT Interaction) validate the effectiveness of our approach for the recognition of human actions.

    Download PDF (1032K)