IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E96.D, Issue 6
Special Section on Formal Approach
  • Kazuhiro OGATA
    2013 Volume E96.D Issue 6 Pages 1257
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Download PDF (81K)
  • Nikolaos TRIANTAFYLLOU, Petros STEFANEAS, Panayiotis FRANGOS
    Article type: PAPER
    Subject area: Formal Methods
    2013 Volume E96.D Issue 6 Pages 1258-1267
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    The Open Mobile Alliance (OMA) Order of Rights Object Evaluation algorithm causes the loss of rights on contents under certain circumstances. By identifying the cases that cause this loss, we suggest an algebraic characterization, as well as an ordering, of OMA licenses. These allow us to redesign the algorithm so as to minimize the losses, in a way suitable for the low computational power of mobile devices. In addition, we provide a formal proof that the proposed algorithm fulfills its intent. The proof is conducted using the OTS/CafeOBJ method for verifying invariant properties.
    Download PDF (284K)
  • Chittaphone PHONHARATH, Kenji HASHIMOTO, Hiroyuki SEKI
    Article type: PAPER
    Subject area: Static Analysis
    2013 Volume E96.D Issue 6 Pages 1268-1277
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    We study a static analysis problem on k-secrecy, which is a metric for security against inference attacks on XML databases. Intuitively, k-secrecy means that the number of candidates for the sensitive data of a given database instance, or for the result of an unauthorized query, cannot be narrowed down to k-1 by using available information such as authorized queries and their results. In this paper, we investigate the decidability of the schema k-secrecy problem, defined as follows: for a given XML database schema, an authorized query, and an unauthorized query, decide whether every database instance conforming to the given schema is k-secret. We first show that the schema k-secrecy problem is undecidable for any finite k>1 even when queries are represented by a simple subclass of linear deterministic top-down tree transducers (LDTT). We next show that the schema ∞-secrecy problem is decidable for queries represented by LDTT. We give an algorithm for deciding the schema ∞-secrecy problem, analyze its time complexity, and show that the problem is EXPTIME-complete for LDTT. Moreover, we show similar results for LDTT with regular look-ahead.
    Download PDF (673K)
  • Seikoh NISHITA
    Article type: PAPER
    Subject area: Static Analysis
    2013 Volume E96.D Issue 6 Pages 1278-1285
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    String analysis is a static analysis of the strings dynamically generated in a target program; it is applied to check well-formed string construction in web applications. String analysis constructs a finite state automaton that approximates the set of possible strings generated for a particular string variable at a program location at runtime. A drawback of string analysis is imprecision in the analysis result, which leads to false positives in well-formedness checkers. To address this imprecision, this paper proposes an improvement technique that makes string analysis more precise with respect to input validation in web applications. The improvement is realized by annotations that represent the screening of the set of possible strings, and it is evaluated empirically through experiments with the improved analyzer on real-world web applications.
    Download PDF (659K)
Regular Section
  • Etsuji TOMITA, Yoichi SUTANI, Takanori HIGASHI, Mitsuo WAKATSUKI
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 6 Pages 1286-1298
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Many problems can be formulated as maximum clique problems. Hence, it is highly important to develop algorithms that can find a maximum clique very fast in practice. We propose new approximate coloring and other related techniques which markedly improve the run time of the branch-and-bound algorithm MCR (J. Global Optim., 37, pp.95-111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Extensive computational experiments confirm the superiority of MCS over MCR and other existing algorithms. It is faster than the other algorithms by orders of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graph. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs. This paper demonstrates in detail the effectiveness of each of the new techniques in MCS, as well as their overall contribution.
    Download PDF (811K)
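    The sketch below illustrates the general idea the abstract builds on: a branch-and-bound maximum-clique search whose pruning bound comes from a greedy approximate coloring of the candidate set. It is a minimal textbook version, not the authors' MCS algorithm; the function names and the toy graph are illustrative assumptions.
    ```python
    # Minimal branch-and-bound maximum clique with a greedy-coloring bound.
    # Illustrative sketch only, in the spirit of MCR/MCS; not the authors' code.

    def greedy_color_bound(graph, candidates):
        """Greedily partition candidates into independent color classes.
        The color of a vertex upper-bounds the clique size reachable through it."""
        order, colors, classes = [], [], []
        for v in candidates:
            for k, cls in enumerate(classes):
                if all(u not in graph[v] for u in cls):   # v independent of class
                    cls.add(v)
                    order.append(v); colors.append(k + 1)
                    break
            else:
                classes.append({v})
                order.append(v); colors.append(len(classes))
        return order, colors

    def max_clique(graph):
        """graph: dict mapping vertex -> set of adjacent vertices."""
        best = []

        def expand(clique, candidates):
            nonlocal best
            order, colors = greedy_color_bound(graph, list(candidates))
            # process candidates in decreasing color order so the bound can
            # prune the whole remaining suffix at once
            for v, c in sorted(zip(order, colors), key=lambda t: -t[1]):
                if len(clique) + c <= len(best):
                    return            # no larger clique possible in this subtree
                new_cands = candidates & graph[v]
                if not new_cands:
                    if len(clique) + 1 > len(best):
                        best = clique + [v]
                else:
                    expand(clique + [v], new_cands)
                candidates = candidates - {v}

        expand([], set(graph))
        return best

    # Toy usage: edges 1-2, 1-3, 1-4, 2-3, 3-4; the maximum clique has size 3.
    g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
    print(max_clique(g))   # a maximum clique of size 3, e.g. [3, 2, 1]
    ```
    The coloring bound is valid because a clique drawn from vertices using at most c color classes can contain at most c vertices, so subtrees that cannot beat the incumbent are skipped.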
  • Amila AKAGIC, Hideharu AMANO
    Article type: PAPER
    Subject area: Computer System
    2013 Volume E96.D Issue 6 Pages 1299-1308
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Cyclic Redundancy Check (CRC) is a well-known error detection scheme used to detect corruption of digital content in digital networks and storage devices. Since it is a compute-intensive process that adversely affects performance, hardware acceleration using FPGAs has been tried and satisfactory performance has been achieved. However, the recent extended usage of networks and storage systems requires various correction capabilities for various CRC standards. Traditional hardware designs based on the LFSR (Linear Feedback Shift Register) tend to have a fixed structure without such flexibility. Here, a fully-adaptable CRC accelerator based on a table-based algorithm is proposed. The table-based algorithm is a flexible method commonly used in software implementations. It has rarely been implemented in hardware, since its operational speed is believed to be insufficient. However, by using a pipelined structure and efficient use of the memory modules in FPGAs, table-based fixed CRC accelerators achieved better performance than traditional implementations. Based on this implementation, a fully-adaptable CRC accelerator that eliminates the need for many non-adaptable CRC implementations is proposed. The accelerator can process an arbitrary number of input data and generates the CRC for any known CRC standard, with generator polynomials of up to 65 bits, at run time. Further, we modify the table generation algorithm in order to decrease its space complexity from O(nm) to O(n). On a Xilinx Virtex-6 LX550T board, the fully-adaptable accelerators occupy between 1% and 2% of the area and produce a maximum of 289.8 Gbps at 283.1 MHz if BRAM is deployed, or between 1.6% and 14% of the area for 418 Gbps at 408.9 MHz if the tables are implemented in logic. The proposed architecture enables further expansion of the throughput by increasing the number of input bits M processed at a time.
    Download PDF (1243K)
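    For readers unfamiliar with it, the table-based CRC algorithm named above (the software method that the accelerator realizes in hardware) can be sketched as follows. The parameters below follow the common reflected CRC-32 (IEEE 802.3) convention; they are an illustrative choice, not the paper's configuration.
    ```python
    # Generic table-driven CRC in software: precompute a 256-entry table, then
    # process one byte per lookup instead of one bit per shift.
    import zlib

    def make_crc32_table(poly=0xEDB88320):
        """Build the lookup table for the reflected CRC-32 polynomial."""
        table = []
        for byte in range(256):
            crc = byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
            table.append(crc)
        return table

    _TABLE = make_crc32_table()

    def crc32(data: bytes, crc: int = 0xFFFFFFFF) -> int:
        """One table lookup per input byte; final XOR per the CRC-32 standard."""
        for b in data:
            crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
        return crc ^ 0xFFFFFFFF

    assert crc32(b"123456789") == zlib.crc32(b"123456789")   # both 0xCBF43926
    print(hex(crc32(b"123456789")))
    ```
    Trading a small table for per-byte processing is the memory-for-speed exchange that makes a pipelined hardware table implementation attractive.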
  • Zhen ZHANG, Shanping LI, Junzan ZHOU
    Article type: PAPER
    Subject area: Software Engineering
    2013 Volume E96.D Issue 6 Pages 1309-1322
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Online resource management of a software system can take advantage of a performance model to predict the effect of proposed changes. However, the prediction accuracy may degrade if the performance model does not adapt to the changes in the system. This work considers the problem of using Kalman filters to track changes in both performance model parameters and system behavior. We propose a method based on the multiple-model Kalman filter. The method runs a set of Kalman filters, each of which models different system behavior, and adaptively fuses the output of those filters for overall estimates. We conducted case studies to demonstrate how to use the method to track changes in various system behaviors: performance modeling, process modeling, and measurement noise. The experiments show that the method can detect changes in system behavior promptly and significantly improve the tracking and prediction accuracy over the single-model Kalman filter. The influence of model design parameters and mode-model mismatch is evaluated. The results support the usefulness of the multiple-model Kalman filter for tracking performance model parameters in systems with time-varying behavior.
    Download PDF (783K)
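    A minimal sketch of the multiple-model idea described above, using scalar random-walk Kalman filters whose outputs are fused by likelihood-proportional weights. It is not the authors' estimator: the process-noise levels, the measurement model, and the absence of explicit mode mixing are simplifying assumptions.
    ```python
    # Bank of scalar Kalman filters with different process noise, fused by
    # Bayesian weights proportional to each filter's measurement likelihood.
    import math

    class ScalarKF:
        """One-dimensional Kalman filter for a random-walk state model."""
        def __init__(self, q, r, x0=0.0, p0=1.0):
            self.q, self.r, self.x, self.p = q, r, x0, p0

        def step(self, z):
            self.p += self.q                 # predict (random walk)
            s = self.p + self.r              # innovation variance
            k = self.p / s                   # Kalman gain
            innov = z - self.x
            self.x += k * innov              # update state
            self.p *= (1.0 - k)              # update variance
            # Gaussian likelihood of the measurement under this filter's model
            return math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2 * math.pi * s)

    def multi_model_estimate(measurements, q_levels=(1e-4, 1e-2, 1.0), r=0.1):
        """Run one filter per process-noise level and fuse them by weight."""
        filters = [ScalarKF(q, r) for q in q_levels]
        weights = [1.0 / len(filters)] * len(filters)
        fused = []
        for z in measurements:
            likes = [f.step(z) for f in filters]
            weights = [w * l for w, l in zip(weights, likes)]
            total = sum(weights) or 1.0      # guard against numerical collapse
            weights = [w / total for w in weights]
            fused.append(sum(w * f.x for w, f in zip(weights, filters)))
        # practical multiple-model filters also mix or floor the weights so no
        # model is excluded permanently; that step is omitted here
        return fused

    est = multi_model_estimate([0.1, 0.12, 0.9, 1.1, 1.05, 1.0, 0.98, 1.02])
    print([round(e, 2) for e in est])  # estimates climb toward the new level after the jump
    ```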
  • Yoshinobu HIGAMI, Hiroshi TAKAHASHI, Shin-ya KOBAYASHI, Kewal K. SALUJA
    Article type: PAPER
    Subject area: Dependable Computing
    2013 Volume E96.D Issue 6 Pages 1323-1331
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    This paper deals with delay faults on clock lines, assuming the launch-on-capture test. In this realistic fault model, the amount of delay at the FFs driven by the faulty clock line is such that the scan shift operation performs correctly even in the presence of the fault; during system clock operation, however, the capture of functional values at the faulty FFs, i.e., the FFs driven by the delayed clock, is delayed and the correct values may not be captured. We developed a fault simulator that can handle such faults, and using this simulator we investigate the relation between the duration of the delay and the difficulty of detecting clock delay faults in the launch-on-capture test. Next, we propose test generation methods for detecting clock delay faults that affect a single FF or two FFs. Experimental results for benchmark circuits are given in order to establish the effectiveness of the proposed methods.
    Download PDF (610K)
  • Dong Phuong DINH, Fumiko HARADA, Hiromitsu SHIMAKAWA
    Article type: PAPER
    Subject area: Educational Technology
    2013 Volume E96.D Issue 6 Pages 1332-1343
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    The paper proposes the PMD method for designing an introductory programming practice course plan that is inclusive of all learners and stable throughout a course. To achieve such a course plan, the method utilizes personas, each of which represents learners having similar motivation to study programming. The learning of the personas is directed toward the course goal with an enforcement resulting from the discipline, which is an integration of effective learning strategies with the affective components of the personas. Under this enforcement, services to facilitate and promote the learning of each persona can be decided, based on the motivation components of each persona, the motivational effects of the services, and the cycle of self-efficacy. The application of the method to about 500 freshmen in a C programming practice course has shown that this is a successful approach to course design.
    Download PDF (978K)
  • Byeoung-su KIM, Cho-il LEE, Seong-hwan JU, Whoi-Yul KIM
    Article type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 6 Pages 1344-1350
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    3D display systems without glasses are preferred because of the inconvenience of wearing special glasses while viewing 3D content. In general, non-glass type 3D displays work by sending the left and right views of the content to the corresponding eyes, depending on the user's position with respect to the display. Accurate user position estimation is therefore a very important task for non-glass type 3D displays, yet most existing systems require additional hardware or suffer from low accuracy. In this paper, an accurate user position estimation method using a single camera for non-glass type 3D displays is proposed. As the inter-pupillary distance is utilized for the estimation, the face is first detected and then tracked using an Active Appearance Model. The pose of the face is then estimated to compensate for pose variations. To estimate the user position, a simple perspective mapping function is applied which uses the average inter-pupillary distance. For higher accuracy, the personal inter-pupillary distance can also be used. Experimental results have shown that the proposed method successfully estimates the user position using a single camera, and the average position estimation error is small enough for viewing 3D content.
    Download PDF (1357K)
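    In the simplest pinhole-camera reading, the perspective mapping mentioned in the abstract reduces to the viewer distance being proportional to the real inter-pupillary distance divided by its size in pixels. The sketch below shows only that relation; the focal length and average IPD values are assumed numbers, not the paper's calibration.
    ```python
    def viewer_distance_mm(ipd_pixels, focal_length_px=1400.0, ipd_mm=63.0):
        """Pinhole model: distance = focal length * real IPD / IPD in pixels.
        focal_length_px and ipd_mm are example values, not calibrated ones."""
        return focal_length_px * ipd_mm / ipd_pixels

    print(viewer_distance_mm(150.0))  # 588.0 -> a viewer roughly 0.6 m from the camera
    ```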
  • Kota AOKI, Hiroshi NAGAHASHI
    Article type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 6 Pages 1351-1358
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    In this paper we aim to group visual correspondences in order to detect objects or parts of objects commonly appearing in a pair of images. We first extract visual keypoints from the images and establish initial point correspondences between the two images by comparing their descriptors. Our method is based on two types of graphs, named relational graphs and correspondence graphs. The relational graph of a point is constructed by thresholding geometric and topological distances between the point and its neighboring points. The threshold value of a geometric distance is determined according to the scale of each keypoint, and a topological distance is defined as the shortest path on a Delaunay triangulation built from the keypoints. We also construct a correspondence graph whose nodes represent pairs of matched points, i.e. correspondences, and whose edges connect consistent correspondences. Two correspondences are consistent with each other if they meet the local consistency induced by their relational graphs. The consistent neighborhoods should represent an object or a part of an object contained in the pair of images. The enumeration of the maximal cliques of a correspondence graph results in groups of keypoint pairs which therefore involve common objects or parts of objects. We apply our method to common visual pattern detection, object detection, and object recognition. Quantitative experimental results demonstrate that our method is comparable to or better than other methods.
    Download PDF (2069K)
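    A small sketch of the correspondence-graph step: nodes are keypoint matches, edges join mutually consistent matches, and maximal cliques give the groups. The pairwise consistency test used here (similar inter-point distances in both images) is a stand-in assumption for the paper's relational-graph criterion, and the example requires the networkx package.
    ```python
    # Build a correspondence graph over keypoint matches and enumerate its
    # maximal cliques as candidate common objects or object parts.
    import itertools, math
    import networkx as nx

    def consistent(m1, m2, tol=0.2):
        """Two matches are consistent if their endpoints are separated by
        similar distances in both images (a simple stand-in criterion)."""
        (p1, q1), (p2, q2) = m1, m2
        d_p, d_q = math.dist(p1, p2), math.dist(q1, q2)
        return d_p > 0 and d_q > 0 and abs(d_p - d_q) / max(d_p, d_q) < tol

    def group_correspondences(matches):
        """matches: list of ((x, y) in image A, (x, y) in image B) pairs."""
        g = nx.Graph()
        g.add_nodes_from(range(len(matches)))
        for i, j in itertools.combinations(range(len(matches)), 2):
            if consistent(matches[i], matches[j]):
                g.add_edge(i, j)
        # each maximal clique is a mutually consistent group of matches
        return [c for c in nx.find_cliques(g) if len(c) >= 3]

    matches = [((0, 0), (10, 10)), ((1, 0), (11, 10)), ((0, 1), (10, 11)),
               ((5, 5), (40, 2))]   # the last match is an outlier
    print(group_correspondences(matches))  # -> [[0, 1, 2]] (order may vary)
    ```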
  • Shinsuke SAKAI, Tatsuya KAWAHARA
    Article type: PAPER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 6 Pages 1359-1367
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Corpus-based concatenative speech synthesis has been widely investigated and deployed in recent years since it provides highly natural synthesized speech quality. The amount of computation required at run time, however, can often be quite large. In this paper, we propose early stopping schemes for the Viterbi beam search in unit selection, with which we can stop early in the local Viterbi minimization for each unit as well as in the exploration of candidate units for a given target. They take advantage of the fact that the space of the acoustic parameters of the database units is fixed and certain lower bounds on the concatenation costs can be precomputed. The proposed early stopping is admissible in that it does not change the result of the Viterbi beam search. Experiments using probability-based concatenation costs as well as distance-based costs show that the proposed admissible stopping methods effectively reduce the amount of computation required in the Viterbi beam search while keeping its result unchanged. Furthermore, the reduction in computation turns out to be much larger when the available lower bound on the concatenation costs is tighter.
    Download PDF (578K)
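    A minimal sketch of admissible early stopping in a unit-selection Viterbi search: when predecessors are scanned in increasing cumulative cost, the scan can stop as soon as a precomputed lower bound on the concatenation cost can no longer beat the best transition found, without changing the result. The cost functions and the trivial zero lower bound below are toy assumptions, not the paper's costs.
    ```python
    # Unit-selection Viterbi with early stopping in the inner minimisation over
    # predecessor units, using a global lower bound on the concatenation cost.

    def viterbi_unit_selection(targets, candidates, target_cost, concat_cost,
                               concat_lower_bound):
        """candidates[t] is the list of database units considered for target t."""
        prev = {u: target_cost(targets[0], u) for u in candidates[0]}
        for t in range(1, len(targets)):
            cur = {}
            # scan predecessors in increasing cumulative cost
            ordered = sorted(prev.items(), key=lambda kv: kv[1])
            for u in candidates[t]:
                best = float("inf")
                for u_prev, d in ordered:
                    if d + concat_lower_bound >= best:
                        break  # admissible: every remaining predecessor is worse
                    best = min(best, d + concat_cost(u_prev, u))
                cur[u] = best + target_cost(targets[t], u)
            prev = cur
        return min(prev.values())

    # Toy usage: units and targets are numbers; costs are absolute differences.
    targets = [1.0, 2.0, 3.0]
    cands = [[0.9, 1.5], [1.8, 2.6], [2.9, 3.4]]
    cost = viterbi_unit_selection(
        targets, cands,
        target_cost=lambda tgt, u: abs(tgt - u),
        concat_cost=lambda a, b: abs(b - a - 1.0),   # prefer unit-sized steps
        concat_lower_bound=0.0)                      # trivially admissible bound
    print(round(cost, 2))   # 0.6 for this toy example
    ```
    The stop is admissible because the predecessors are sorted by cumulative cost and the lower bound never exceeds the true concatenation cost, so every skipped transition is provably no better than the one already found.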
  • Toshihiko YAMASAKI, Kiyoharu AIZAWA
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2013 Volume E96.D Issue 6 Pages 1368-1375
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    This paper presents a non-blind watermarking technique that is robust to non-linear geometric distortion attacks. This is one of the most challenging problems for the copyright protection of digital content because it is difficult to estimate the distortion parameters for the embedded blocks. In our proposed scheme, the locations of the blocks are recorded as the translation parameters from multiple Scale Invariant Feature Transform (SIFT) feature points. This method is based on two assumptions: SIFT features are robust to non-linear geometric distortion, and even such non-linear distortion can be regarded as “linear” distortion in local regions. We conducted experiments using 149,800 images (7 standard images and 100 images downloaded from Flickr, 10 different messages, 10 different embedding block patterns, and 14 attacks). The results show that the watermark detection performance is drastically improved, while the baseline method achieves only chance-level accuracy.
    Download PDF (1426K)
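    A hypothetical illustration of the block-anchoring idea only, under the assumption that a block centre is stored as translation vectors from several feature points and re-located by a robust vote after distortion. Real SIFT keypoints (e.g. from OpenCV) would replace the plain coordinates used here; this is not the paper's embedding or detection procedure.
    ```python
    # Store a block centre as offsets from feature points; recover it after a
    # (locally near-linear) distortion by a median vote over keypoint + offset.
    import numpy as np

    def record_offsets(block_center, keypoints):
        """Translation vectors from each keypoint to the block centre."""
        return [np.subtract(block_center, kp) for kp in keypoints]

    def recover_center(keypoints_after, offsets):
        votes = [np.add(kp, off) for kp, off in zip(keypoints_after, offsets)]
        return np.median(votes, axis=0)   # robust to a few mismatched keypoints

    kps = np.array([[10., 20.], [50., 25.], [32., 60.]])
    offsets = record_offsets([40., 40.], kps)
    kps_shifted = kps + [3., -2.]          # a small local translation "attack"
    print(recover_center(kps_shifted, offsets))   # ~[43., 38.]
    ```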
  • Chunsheng HUA, Yasushi MAKIHARA, Yasushi YAGI
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 6 Pages 1376-1386
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we propose a pedestrian detection algorithm based on both appearance and motion features to achieve high detection accuracy when applied to complex scenes. Here, a pedestrian's appearance is described by a histogram of oriented spatial gradients, and his/her motion is represented by another histogram of temporal gradients computed from successive frames. Since pedestrians typically exhibit not only their human shapes but also unique human movements generated by their arms and legs, the proposed algorithm is particularly powerful in discriminating a pedestrian from a cluttered situation, where some background regions may appear to have human shapes, but their motion differs from human movement. Unlike the algorithm based on a co-occurrence feature descriptor where significant generalization errors may arise owing to the lack of extensive training samples to cover feature variations, the proposed algorithm describes the shape and motion as unique features. These features enable us to train a pedestrian detector in the form of a spatio-temporal histogram of oriented gradients using the AdaBoost algorithm with a relatively small training dataset, while still achieving excellent detection performance. We have confirmed the effectiveness of the proposed algorithm through experiments on several public datasets.
    Download PDF (9178K)
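    The sketch below computes one cell of a simple spatio-temporal gradient histogram from two grayscale frames: an orientation histogram of spatial gradients plus a histogram of the frame difference. It only illustrates the kind of feature the abstract describes; the bin counts, normalisation, and cell layout are assumptions, and the AdaBoost training stage is not shown.
    ```python
    # One cell of a spatio-temporal histogram-of-gradients feature.
    import numpy as np

    def st_hog_cell(frame_prev, frame_cur, n_bins=8):
        gy, gx = np.gradient(frame_cur.astype(float))        # spatial gradients
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % np.pi                      # unsigned orientation
        spatial_hist, _ = np.histogram(ang, bins=n_bins,
                                       range=(0, np.pi), weights=mag)

        gt = frame_cur.astype(float) - frame_prev.astype(float)  # temporal gradient
        temporal_hist, _ = np.histogram(gt, bins=n_bins, range=(-255, 255))

        feat = np.concatenate([spatial_hist, temporal_hist]).astype(float)
        return feat / (np.linalg.norm(feat) + 1e-9)           # L2-normalised cell

    a = np.zeros((16, 16)); b = np.zeros((16, 16)); b[4:12, 6:10] = 255.0
    print(st_hog_cell(a, b).shape)   # (16,) = 8 spatial + 8 temporal bins
    ```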
  • Shizue NAGAHARA, Takenori OIDA, Tetsuo KOBAYASHI
    Article type: PAPER
    Subject area: Biological Engineering
    2013 Volume E96.D Issue 6 Pages 1387-1393
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Diffusion-weighted functional magnetic resonance imaging (DW-fMRI) is a recently reported technique for measuring neural activities by using diffusion-weighted imaging (DWI). DW-fMRI is based on the property that cortical cells swell when the brain is activated, so the approach can be used to observe changes in water diffusion around cortical cells. The spatial and temporal resolutions of DW-fMRI are superior to those of blood-oxygenation-level-dependent (BOLD) fMRI. To investigate how the DWI signal intensity changes in DW-fMRI measurement, we carried out Monte Carlo simulations to evaluate the intensity before and after cell swelling. In the simulations, we modeled cortical cells as two compartments by considering the differences between the intracellular and extracellular regions. The simulation results suggest that the DWI signal intensity increases after cell swelling because of the increase in the intracellular volume ratio, and that the change in the DWI signal intensity depends on the ratio of the intracellular and extracellular volumes. We also investigated how the percent signal change in DW-fMRI depends on the MPG parameters, namely the b-value and the separation time, and obtained results useful for DW-fMRI measurements.
    Download PDF (548K)
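    The direction of the reported effect can be seen even in a closed-form two-compartment model, which is a simplification of the paper's Monte Carlo random-walk simulation: with a lower intracellular than extracellular diffusivity, raising the intracellular volume fraction raises the diffusion-weighted signal. The b-value and diffusivities below are typical literature-style numbers, not the paper's settings.
    ```python
    # No-exchange two-compartment DWI signal: a volume-weighted sum of two
    # mono-exponential decays with different apparent diffusivities.
    import math

    def dwi_signal(f_intra, b=1.8e3, d_intra=0.3e-3, d_extra=1.2e-3):
        """Normalised DWI signal for intracellular volume fraction f_intra.
        b in s/mm^2, diffusivities in mm^2/s (assumed example values)."""
        return (f_intra * math.exp(-b * d_intra)
                + (1 - f_intra) * math.exp(-b * d_extra))

    before, after = dwi_signal(0.60), dwi_signal(0.66)   # swelling raises f_intra
    print(f"{before:.3f} -> {after:.3f}")   # the signal increases with cell swelling
    ```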
  • Dai-Kyung HYUN, Dae-Jin JUNG, Hae-Yeoun LEE, Heung-Kyu LEE
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 6 Pages 1394-1397
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    In this paper, we propose a novel camera identification method based on photo-response non-uniformity (PRNU) that performs well even on rotated videos. One of the disadvantages of PRNU-based camera identification methods is that they are very sensitive to de-synchronization: if a video under investigation is slightly rotated, the identification process fails without synchronization. The proposed method solves this kind of out-of-sync problem by achieving rotation tolerance with an Optimal Tradeoff Circular Harmonic Function (OTCHF) correlation filter. The experimental results show that the proposed method identifies the source device of rotated videos with high accuracy.
    Download PDF (559K)
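    A bare-bones sketch of the PRNU pipeline for orientation: average noise residuals into a camera fingerprint, then compare a test residual by normalised correlation. The Gaussian high-pass residual, the additive noise model, and the synthetic data are simplifying assumptions, and the rotation-tolerant OTCHF correlation filtering that is the letter's contribution is not reproduced.
    ```python
    # Noise residuals -> camera fingerprint -> normalised correlation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def residual(frame):
        frame = frame.astype(float)
        return frame - gaussian_filter(frame, sigma=2)   # high-frequency noise part

    def fingerprint(frames):
        return np.mean([residual(f) for f in frames], axis=0)

    def ncc(a, b):
        a = a - a.mean(); b = b - b.mean()
        return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    rng = np.random.default_rng(0)
    prnu = rng.normal(0, 1, (64, 64))                    # synthetic sensor pattern
    cam_frames  = [rng.normal(128, 5, (64, 64)) + prnu for _ in range(20)]
    test_frame  = rng.normal(128, 5, (64, 64)) + prnu    # same "camera"
    other_frame = rng.normal(128, 5, (64, 64))           # different "camera"

    fp = fingerprint(cam_frames)
    print(ncc(fp, residual(test_frame)), ncc(fp, residual(other_frame)))
    # the first correlation should be clearly higher than the second
    ```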
  • Hye-Yeon JEONG, Hyoung-Kyu SONG
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 6 Pages 1398-1401
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    In this letter, a novel adaptive detector that combines DFE and QRD-M is proposed for MIMO-OFDM systems. QR decomposition (QRD) is commonly used in many MIMO detection algorithms. In particular, sorted QR decomposition (SQRD) is an advanced algorithm that improves MIMO detection performance. The proposed detector uses SQRD to achieve better performance. To reduce the computational complexity, the received layers of each subcarrier are ordered by using the post SNR and are detected by the DFE and QRD-M detectors according to that order. Therefore, the structure of the proposed detector varies according to the channel state. In other words, the proposed detector achieves a good tradeoff between complexity and performance. A simulation confirms the substantial performance improvements of the proposed adaptive detector with only slightly greater complexity than the conventional detector.
    Download PDF (374K)
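    For context, the sketch below shows plain QR-decomposition-based successive detection for a small MIMO system with QPSK symbols, which is the building block the letter starts from. The adaptive DFE/QRD-M switching, the SQRD ordering, and per-subcarrier OFDM processing are not reproduced; the channel size, noise level, and constellation are illustrative assumptions.
    ```python
    # QRD-based successive interference cancellation for y = Hx + n.
    import numpy as np

    QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

    def qrd_sic_detect(H, y):
        """Detect x layer by layer, starting from the last row of R."""
        Q, R = np.linalg.qr(H)
        z = Q.conj().T @ y
        n = H.shape[1]
        x_hat = np.zeros(n, dtype=complex)
        for i in range(n - 1, -1, -1):
            # cancel already-detected layers, then slice to the nearest symbol
            u = (z[i] - R[i, i+1:] @ x_hat[i+1:]) / R[i, i]
            x_hat[i] = QPSK[np.argmin(np.abs(QPSK - u))]
        return x_hat

    rng = np.random.default_rng(1)
    H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
    x = QPSK[rng.integers(0, 4, 4)]
    y = H @ x + 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    print(np.allclose(qrd_sic_detect(H, y), x))   # expected: True at this high SNR
    ```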
  • Yongwon JEONG
    Article type: LETTER
    Subject area: Speech and Hearing
    2013 Volume E96.D Issue 6 Pages 1402-1405
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    I propose an acoustic model adaptation method using bases constructed through the sparse principal component analysis (SPCA) of acoustic models trained in a clean environment. I perform experiments on adaptation to a new speaker and noise. The SPCA-based method outperforms the PCA-based method in the presence of babble noise.
    Download PDF (408K)
  • Tao WANG, Zhongying HU, Kiichi URAHAMA
    Article type: LETTER
    Subject area: Computer Graphics
    2013 Volume E96.D Issue 6 Pages 1406-1409
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    A non-photorealistic rendering technique is presented for generating images such as stippling images and paper mosaic images with various shapes of paper pieces. Paper pieces are spatially arranged by using anisotropic Lp Poisson disk sampling. The shape of the paper pieces is adaptively varied by changing the value of p. We demonstrate with experiments that edges and details in an input image are preserved by the pieces according to the anisotropy of their shape.
    Download PDF (2969K)
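    A minimal sketch of isotropic dart-throwing Poisson disk sampling, the baseline the letter generalises: samples are accepted only if they keep a minimum distance to all previously accepted samples. The anisotropic Lp disk whose shape adapts to image content is not shown; the domain size, radius, and attempt count are arbitrary.
    ```python
    # Dart-throwing Poisson disk sampling with a fixed Euclidean radius.
    import random

    def poisson_disk(width, height, radius, attempts=3000, seed=0):
        random.seed(seed)
        points = []
        for _ in range(attempts):
            x, y = random.uniform(0, width), random.uniform(0, height)
            if all((x - px) ** 2 + (y - py) ** 2 >= radius ** 2 for px, py in points):
                points.append((x, y))   # keep only samples far from all others
        return points

    pts = poisson_disk(100, 100, radius=10)
    print(len(pts))   # a few dozen mutually separated stipple/paper-piece centres
    ```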
  • Yoonjae CHOI, Pum-Mo RYU, Hyunki KIM, Changki LEE
    Article type: LETTER
    Subject area: Natural Language Processing
    2013 Volume E96.D Issue 6 Pages 1410-1414
    Published: June 01, 2013
    Released on J-STAGE: June 01, 2013
    JOURNAL FREE ACCESS
    Event extraction is vital to social media monitoring and social event prediction. In this paper, we propose a method for social event extraction from web documents by identifying binary relations between named entities. There have been many studies on relation extraction, but their aims were mostly academic. For practical application, we try to identify 130 relation types that comprise 31 predefined event types addressing business and public issues. We use the structured Support Vector Machine, a state-of-the-art classifier, to capture relations. We apply our method to news, blogs, and tweets collected from the Internet and discuss the results.
    Download PDF (558K)