IPSJ Online Transactions
Online ISSN : 1882-6660
ISSN-L : 1882-6660
Volume 6
Showing 1-15 articles out of 15 articles from the selected issue
  • Tetsuo Kamina, Tomoyuki Aotani, Hidehiko Masuhara
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 1-8
    Published: 2013
    Released: January 29, 2013
    JOURNALS FREE ACCESS
    Context-oriented programming (COP) languages provide a modularization mechanism called a layer, which modularizes behaviors that are executable under specific contexts, and a way to dynamically switch behaviors. However, the correspondence between real-world contexts and units of behavioral variation is not simple. Thus, in existing COP languages, context-related concerns can easily become tangled within a piece of layer activation code. In this paper, we address this problem by introducing a new construct called a composite layer, which declares a proposition whose ground terms are the names of other layers (a term is true when the named layer is active). A composite layer is active only when the proposition is true. We introduce this construct into EventCJ, our COP language, and validate the approach through two case studies involving a context-aware Twitter client and a program editor. The results show that the resulting layer activation code is simple and free from tangled context-related concerns. We also discuss an efficient implementation of this mechanism in EventCJ.
    Download PDF (469K)
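The composite-layer idea above can be illustrated with a small sketch. EventCJ itself is a Java-based COP language; the Python classes and layer names below are hypothetical stand-ins that only model the proposition-over-layers semantics.

```python
# Toy model of composite layers: a composite layer is active exactly when
# a proposition over other layers' activation states evaluates to true.
# (Illustrative sketch only; all names here are invented.)

class Layer:
    def __init__(self, name):
        self.name = name
        self.active = False

class CompositeLayer:
    """Active iff the proposition over its ground layers holds."""
    def __init__(self, name, proposition, layers):
        self.name = name
        self.proposition = proposition  # callable: dict[str, bool] -> bool
        self.layers = layers

    @property
    def active(self):
        states = {l.name: l.active for l in self.layers}
        return self.proposition(states)

# Example: a Twitter client shows a compact timeline only when the device
# is both offline and in power-saving mode.
offline = Layer("Offline")
power_save = Layer("PowerSave")
compact = CompositeLayer("CompactTimeline",
                         lambda s: s["Offline"] and s["PowerSave"],
                         [offline, power_save])

offline.active = True
assert not compact.active   # PowerSave still inactive
power_save.active = True
assert compact.active       # proposition now true
```

The point of the construct is visible here: the activation condition lives in one declarative proposition instead of being scattered across imperative layer activation code.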
  • Masahiro Ide, Kimio Kuramitsu
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 9-16
    Published: 2013
    Released: January 29, 2013
    JOURNALS FREE ACCESS
    In recent years, Just-In-Time (JIT) compilation has attracted attention as a technique for improving the performance of scripting languages. The difficulty of JIT compilation for a scripting language lies in its dynamically typed code and in its own language runtime. The purpose of this paper is to evaluate the runtime library overhead in JIT-compiled code by using a statically typed scripting language. In this study, we use the statically typed scripting language KonohaScript to analyze the performance impact of the language runtime on the code generated by the JIT compiler.
    Download PDF (540K)
  • Yusuke Takamatsu, Yuji Kosuga, Kenji Kono
    Type: Regular Paper
    Subject area: Web Application
    2013 Volume 6 Pages 17-27
    Published: 2013
    Released: February 04, 2013
    JOURNALS FREE ACCESS
    Many web applications employ session management to keep track of visitors' activities across pages and over periods of time. A session is a period of time linked to a visitor, initiated when he/she arrives at a web application and ended when the browser is closed or after a certain period of inactivity. Attackers can hijack a user's session by exploiting session management vulnerabilities by means of session fixation and cross-site request forgery attacks. Even though such session management vulnerabilities can be eliminated in the development phase of web applications, the test operator is required to have detailed knowledge of the attacks and to set up a test environment each time he/she attempts to detect vulnerabilities. We propose a technique that automatically detects session management vulnerabilities in web applications by simulating real attacks. Our technique requires the test operator to enter only a few pieces of basic information about the web application, without requiring a test environment to be set up or detailed knowledge of the web application. Our experiments demonstrated that our technique could detect vulnerabilities in a web application we built and in seven web applications deployed in the real world.
    Download PDF (958K)
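The core check behind one of the simulated attacks, session fixation, can be sketched briefly: if a session identifier chosen before login is still valid after login, the application is vulnerable. The `WebApp` class below is a hypothetical stand-in for an application under test, not the paper's tool.

```python
# Sketch of session-fixation detection: an attacker "fixes" a session ID
# before the victim logs in; if the application keeps that ID across
# authentication, the attacker now holds an authenticated session.
import secrets

class WebApp:
    def __init__(self, regenerates_id_on_login):
        self.regenerates = regenerates_id_on_login
        self.sessions = {}

    def visit(self, session_id=None):
        sid = session_id or secrets.token_hex(8)
        self.sessions[sid] = {"user": None}
        return sid

    def login(self, sid, user):
        if self.regenerates:                  # safe: issue a fresh ID
            del self.sessions[sid]
            sid = secrets.token_hex(8)
            self.sessions[sid] = {"user": user}
        else:                                 # vulnerable: keep the old ID
            self.sessions[sid]["user"] = user
        return sid

def is_fixation_vulnerable(app):
    attacker_sid = app.visit()                # attacker fixes a session ID
    victim_sid = app.login(attacker_sid, "victim")
    # vulnerable iff the attacker-known ID became an authenticated session
    return victim_sid == attacker_sid

assert is_fixation_vulnerable(WebApp(regenerates_id_on_login=False))
assert not is_fixation_vulnerable(WebApp(regenerates_id_on_login=True))
```

An automated detector in the spirit of the paper runs this kind of probe against a live application, so the operator needs neither a test environment nor attack expertise.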
  • Umair F. Siddiqi, Yoichi Shiraishi, Sadiq M. Sait
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 28-36
    Published: 2013
    Released: March 15, 2013
    JOURNALS FREE ACCESS
    Multi-objective path optimization is a critical operation in a large number of applications. Many applications execute on embedded systems, which use less powerful processors and a limited amount of memory in order to reduce system costs and power consumption. Therefore, fast and memory-efficient algorithms are needed to solve the multi-objective path optimization problem. This paper proposes a fast and memory-efficient algorithm based on a Genetic Algorithm (GA) that can be used to solve the multi-objective path optimization problem. The proposed algorithm needs memory space approximately equal to its population size and consists of two GA operations (crossover and mutation). During each iteration, one of the GA operations is applied to each chromosome, which can be either dominated or non-dominated. Dominated chromosomes prefer the crossover operation with a non-dominated chromosome in order to produce an offspring that has genes from both parents (dominated and non-dominated chromosomes). The mutation operation is preferred by non-dominated chromosomes. The offspring replaces its parent chromosome. The proposed algorithm is implemented using C++ and executed on an ARM-based embedded system as well as on an Intel-Celeron-M-based PC. In terms of the quality of its Pareto-optimal solutions, the algorithm is compared with Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and Simulated Annealing (SA). The performance of the proposed algorithm is better than that of SA. Moreover, comparison with NSGA-II shows that at approximately equal amounts of execution time and memory usage, the performance of the proposed algorithm is 5% better than that of NSGA-II. Based on the experimental results, the proposed algorithm is suitable for implementation on embedded systems.
    Download PDF (3708K)
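The operator-selection rule described in the abstract (dominated chromosomes prefer crossover with a non-dominated mate; non-dominated chromosomes prefer mutation; offspring replace their parents) can be sketched on a toy bi-objective problem. The encoding, objectives, and replacement test below are illustrative choices, not those of the paper.

```python
# Sketch of the dominated/non-dominated operator-selection rule on a toy
# bi-objective minimization problem (details here are invented).
import random
random.seed(0)

def objectives(x):
    # two conflicting objectives on a real vector
    return (sum(xi ** 2 for xi in x), sum((xi - 2) ** 2 for xi in x))

def dominates(a, b):
    fa, fb = objectives(a), objectives(b)
    return all(x <= y for x, y in zip(fa, fb)) and fa != fb

def non_dominated(pop):
    return [p for p in pop if not any(dominates(q, p) for q in pop)]

def step(pop):
    front = non_dominated(pop)
    new_pop = []
    for parent in pop:
        if parent in front:
            # non-dominated chromosomes prefer mutation
            child = [xi + random.gauss(0, 0.1) for xi in parent]
        else:
            # dominated chromosomes prefer crossover with a non-dominated one
            mate = random.choice(front)
            child = [random.choice(pair) for pair in zip(parent, mate)]
        # the offspring replaces its parent (here: unless clearly worse)
        new_pop.append(child if not dominates(parent, child) else parent)
    return new_pop

pop = [[random.uniform(-4, 4) for _ in range(3)] for _ in range(20)]
for _ in range(50):
    pop = step(pop)
# the population never grows, matching the algorithm's memory bound
assert len(pop) == 20
```

Because every offspring replaces its own parent, the working set stays at exactly one population, which is the memory property the paper exploits on embedded targets.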
  • Akisato Kimura, Masashi Sugiyama, Takuho Nakano, Hirokazu Kameoka, Hit ...
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 37-44
    Published: 2013
    Released: March 15, 2013
    JOURNALS FREE ACCESS
    Canonical correlation analysis (CCA) is a powerful tool for analyzing multi-dimensional paired data. However, CCA tends to perform poorly when the number of paired samples is limited, which is often the case in practice. To cope with this problem, we propose a semi-supervised variant of CCA named SemiCCA that allows us to incorporate additional unpaired samples for mitigating overfitting. Advantages of the proposed method over previously proposed methods are its computational efficiency and intuitive operationality: it smoothly bridges the generalized eigenvalue problems of CCA and principal component analysis (PCA), and thus its solution can be computed efficiently just by solving a single eigenvalue problem as the original CCA.
    Download PDF (905K)
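The bridging claim above can be illustrated numerically: a trade-off parameter interpolates between the generalized eigenvalue problems of CCA and PCA, so one eigensolve covers both ends. The exact weighting SemiCCA uses may differ from this sketch; the blending below is an assumption made for illustration.

```python
# Assumed interpolation between the CCA generalized eigenproblem (beta=1)
# and a PCA eigenproblem (beta=0); either way, one eigensolve suffices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, dx, dy = 100, 3, 2
X = rng.normal(size=(n, dx))
Y = X[:, :dy] + 0.1 * rng.normal(size=(n, dy))   # correlated paired data

Cxx = X.T @ X / n
Cyy = Y.T @ Y / n
Cxy = X.T @ Y / n

def blended_matrices(beta):
    d = dx + dy
    A = np.zeros((d, d)); B = np.eye(d)
    # CCA part: cross-covariances off-diagonal, auto-covariances in metric B
    A[:dx, dx:] = beta * Cxy
    A[dx:, :dx] = beta * Cxy.T
    B[:dx, :dx] = beta * Cxx + (1 - beta) * np.eye(dx)
    B[dx:, dx:] = beta * Cyy + (1 - beta) * np.eye(dy)
    # PCA part: total covariances on the diagonal blocks of A
    A[:dx, :dx] += (1 - beta) * Cxx
    A[dx:, dx:] += (1 - beta) * Cyy
    return A, B

for beta in (0.0, 0.5, 1.0):
    w, V = eigh(*blended_matrices(beta))   # a single eigensolve, as in CCA
    assert V.shape == (dx + dy, dx + dy)
```

At beta = 0 the problem reduces to PCA of each view; at beta = 1 it is the standard CCA generalized eigenproblem; intermediate beta mixes the two, which is the computational convenience the abstract highlights.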
  • Akisato Kimura, Masashi Sugiyama, Hitoshi Sakano, Hirokazu Kameoka
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 45-54
    Published: 2013
    Released: March 15, 2013
    JOURNALS FREE ACCESS
    It is well known that dimensionality reduction based on multivariate analysis methods and their kernelized extensions can be formulated as generalized eigenvalue problems of scatter matrices, Gram matrices or their augmented matrices. This paper provides a generic and theoretical framework of multivariate analysis by introducing a new expression for scatter matrices and Gram matrices, called the Generalized Pairwise Expression (GPE). This expression is quite compact but highly powerful. The framework includes not only (1) the traditional multivariate analysis methods but also (2) several regularization techniques, (3) localization techniques, (4) clustering methods based on generalized eigenvalue problems, and (5) their semi-supervised extensions. This paper also presents a methodology for designing a desired multivariate analysis method from the proposed framework. The methodology is quite simple: adopt the above-mentioned special cases as templates and generate a new method by combining these templates appropriately. Through this methodology, we can freely design various tailor-made methods for specific purposes or domains.
    Download PDF (280K)
  • Takeshi Yoshimura, Hiroshi Yamada, Kenji Kono
    Type: Regular Paper
    Subject area: Operating System
    2013 Volume 6 Pages 55-64
    Published: 2013
    Released: April 24, 2013
    JOURNALS FREE ACCESS
    Operating systems (OSes) are crucial for achieving high availability of computer systems. Even if applications running on an operating system are highly available, a bug inside the kernel may result in a failure of the entire software stack. The objective of this study is to gain some insight into the development of a Linux kernel that is more resilient against software faults. In particular, this paper investigates the scope of error propagation. The propagation scope is process-local if the erroneous value is not propagated outside the process context that activated it, and kernel-global if it is. The investigation of the scope of error propagation gives us some insight into 1) defensive coding style, 2) reboot-less rejuvenation, and 3) general recovery mechanisms of the Linux kernel. For example, if most errors are process-local, we can rejuvenate the kernel without reboots because the kernel can be recovered simply by killing faulty processes. To investigate the scope of error propagation, we conduct an experimental campaign of fault injection on Linux 2.6.18, using a kernel-level fault injector widely used in the OS community. Our findings are: (1) our target kernel (Linux 2.6.18) is coded defensively, and this defensive coding style contributes to lower rates of error manifestation and kernel-global errors; (2) the scope of error propagation is mostly process-local in Linux; and (3) global propagation occurs with low probability, and even when an error corrupts a global data structure, other processes rarely access the corrupted data.
    Download PDF (467K)
  • Guangwen Liu, Masayuki Iwai, Kaoru Sezaki
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 65-74
    Published: 2013
    Released: July 04, 2013
    JOURNALS FREE ACCESS
    A novel simplification method for GPS trajectories is presented in this paper. Trajectory simplification can greatly improve the efficiency of data analysis (e.g., querying, clustering). Based on observations of the information content of sampled data, we assume that (1) sampling points on the boundary of the MBR (Minimum Bounding Rectangle) contain more information, and (2) the larger the area of the MBR, the more points should be stored. We apply these two assumptions in our method to simplify trajectories online. The two main components of this method (the divide/merge principle and the selection strategy) are elaborated in the paper. Moreover, we define a new error metric, the enclosed area metric, to evaluate the accuracy of simplified trajectories, which is shown to be more robust against GPS uncertainty. To implement this measure, we devise a practical algorithm for computing the area of self-intersecting polygons. Through comparison with other methods in a series of experiments over a huge dataset, our method is shown to be effective and efficient.
    Download PDF (1581K)
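The two assumptions can be sketched directly: within a window of GPS points, keep the points lying on the boundary of the window's MBR, and let the retained count grow with the MBR's area. The window size and the budget rule below are illustrative choices, not the divide/merge principle and selection strategy of the paper.

```python
# Sketch of MBR-based online simplification under the two stated
# assumptions (window size and point budget are invented parameters).

def mbr(points):
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def simplify(traj, window=8, density=4.0):
    kept = []
    for i in range(0, len(traj), window):
        seg = traj[i:i + window]
        x0, y0, x1, y1 = mbr(seg)
        area = (x1 - x0) * (y1 - y0)
        # assumption 1: boundary points of the MBR carry more information
        boundary = [p for p in seg
                    if p[0] in (x0, x1) or p[1] in (y0, y1)]
        # assumption 2: a larger MBR deserves a larger point budget
        budget = max(2, int(density * area))
        kept.extend(boundary[:budget])
    if kept[-1] != traj[-1]:
        kept.append(traj[-1])
    return kept

# a straight, slow segment collapses; a wide detour keeps more points
traj = [(i * 0.1, 0.0) for i in range(20)] + \
       [(2.0 + i * 0.3, float(i % 5)) for i in range(20)]
simple = simplify(traj)
assert len(simple) < len(traj)
assert simple[0] == traj[0] and simple[-1] == traj[-1]
```

The degenerate MBR of the straight segment (zero area) gets the minimum budget, while the detour's large MBR keeps most of its boundary points, matching the intuition behind assumption (2).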
  • Jiyi Li, Qiang Ma, Yasuhito Asano, Masatoshi Yoshikawa
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 75-84
    Published: 2013
    Released: July 04, 2013
    JOURNALS FREE ACCESS
    Social image hosting websites such as Flickr provide services to users for sharing their images. Users can upload and tag their images or search for images by using keywords which describe image semantics. However, various low-quality tags among the user-generated folksonomy tags negatively influence image search results and the user experience. To improve tag quality, we propose four approaches within one framework to automatically generate new tags and to rank the new tags as well as the existing raw tags, for both untagged and tagged images. The approaches utilize and integrate both textual and visual information, and analyze intra- and inter-relationships among images and tags probabilistically, based on a graph model. Experiments on a dataset constructed from Flickr illustrate the effectiveness and efficiency of our approaches.
    Download PDF (1801K)
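The paper's four concrete approaches are not spelled out in the abstract; a generic random walk with restart over an image-tag graph illustrates the kind of probabilistic relationship propagation a graph model enables. The tiny graph below is made up for the sketch.

```python
# Generic illustration (not the paper's method): propagate relevance from
# an image over an image-tag graph with a random walk with restart.
import numpy as np

nodes = ["img1", "img2", "sunset", "beach", "cat"]
edges = [("img1", "sunset"), ("img1", "beach"),
         ("img2", "beach"), ("img2", "cat")]

n = len(nodes)
idx = {v: i for i, v in enumerate(nodes)}
A = np.zeros((n, n))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
P = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix

def rank_tags(seed, restart=0.15, iters=100):
    r = np.zeros(n); r[idx[seed]] = 1.0
    p = r.copy()
    for _ in range(iters):
        p = (1 - restart) * P @ p + restart * r
    return p

scores = rank_tags("img1")
# tags directly attached to img1 outrank the unrelated tag "cat"
assert scores[idx["sunset"]] > scores[idx["cat"]]
assert scores[idx["beach"]] > scores[idx["cat"]]
```

In the same spirit, the paper's framework scores candidate tags for an image by how strongly the graph (built from textual and visual evidence) connects them to it.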
  • Hiroyuki Ishigami, Kinji Kimura, Yoshimasa Nakamura
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 85-95
    Published: 2013
    Released: August 28, 2013
    JOURNALS FREE ACCESS
    In this paper, we introduce an inverse iteration algorithm that can be used to compute all the eigenvectors of a real symmetric tri-diagonal matrix on parallel processors. To overcome the sequential bottleneck created by modified Gram-Schmidt orthogonalization in classical inverse iteration, we propose the use of the compact WY representation in the reorthogonalization process, based on the Householder transformation. This change results in drastically reduced synchronization cost during parallel processing.
    Download PDF (783K)
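The structural change the abstract describes, replacing the sequential modified Gram-Schmidt of classical inverse iteration with a blocked Householder-based reorthogonalization, can be sketched as follows. LAPACK's QR accumulates Householder reflectors in exactly the compact WY form the paper exploits; here we simply call `np.linalg.qr`, which wraps that machinery, rather than reimplementing it.

```python
# Inverse iteration for a symmetric tridiagonal matrix, reorthogonalizing
# the whole eigenvector block at once with Householder QR.
import numpy as np

rng = np.random.default_rng(1)
n = 50
d = rng.normal(size=n)              # diagonal entries
e = rng.normal(size=n - 1)          # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

eigvals = np.linalg.eigvalsh(T)     # eigenvalues from a separate solver

V = rng.normal(size=(n, n))
for _ in range(3):                  # a few inverse-iteration sweeps
    for j, lam in enumerate(eigvals):
        # shift slightly off the eigenvalue so T - lam*I stays invertible
        V[:, j] = np.linalg.solve(T - (lam + 1e-10) * np.eye(n), V[:, j])
    # one blocked Householder QR instead of column-by-column
    # modified Gram-Schmidt: far fewer synchronization points in parallel
    V, _ = np.linalg.qr(V)

residual = np.linalg.norm(T @ V - V @ np.diag(eigvals))
assert residual < 1e-6
assert np.allclose(V.T @ V, np.eye(n), atol=1e-8)
```

The parallel benefit comes from the reorthogonalization step being a single dense block operation rather than a chain of dependent vector projections.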
  • Masao Yamanaka, Masakazu Matsugu, Masashi Sugiyama
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 96-103
    Published: 2013
    Released: August 28, 2013
    JOURNALS FREE ACCESS
    Detection of salient objects in images has been an active area of research in the computer vision community. However, existing approaches tend to perform poorly in noisy environments because probability density estimation involved in the evaluation of visual saliency is not reliable. Recently, a novel machine learning approach that directly estimates the ratio of probability densities was demonstrated to be a promising alternative to density estimation. In this paper, we propose a salient object detection method based on direct density-ratio estimation, and demonstrate its usefulness in experiments.
    Download PDF (1843K)
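The building block the method relies on, direct density-ratio estimation, can be shown in miniature with a uLSIF-style least-squares fit using Gaussian kernels; applying the estimated ratio to image patches for saliency is beyond this sketch, and the kernel centers, width, and regularizer below are arbitrary choices.

```python
# Minimal direct density-ratio estimation (uLSIF-style): fit p(x)/q(x)
# without estimating either density separately.
import numpy as np

rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, size=200)   # "numerator" samples from p(x)
x_de = rng.normal(0.5, 1.2, size=200)   # "denominator" samples from q(x)

centers = x_de[:20]                     # kernel centers (arbitrary subset)
sigma, lam = 0.5, 0.1                   # kernel width and regularizer

def phi(x):
    # Gaussian kernel basis evaluated at the centers
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# uLSIF: minimize 0.5 a^T H a - h^T a + 0.5 lam ||a||^2 in closed form
H = phi(x_de).T @ phi(x_de) / len(x_de)
h = phi(x_nu).mean(axis=0)
alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)

def ratio(x):
    return phi(np.atleast_1d(x)) @ alpha

# p/q should be larger near 0 (dense under p) than out in q's tail
assert ratio(0.0)[0] > ratio(3.0)[0]
```

Because the ratio is obtained in closed form from a single linear system, the comparison of distributions stays reliable even where separate density estimates would be noisy, which is the advantage the abstract appeals to.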
  • Masao Yamanaka, Masakazu Matsugu, Masashi Sugiyama
    Type: Regular Papers
    Subject area: Regular Paper
    2013 Volume 6 Pages 104-110
    Published: 2013
    Released: August 28, 2013
    JOURNALS FREE ACCESS
    We propose a method of unsupervised event detection from a video that compares probability distributions of past and current video sequence data in a sequential and hierarchical way. Because estimation of probability distributions is known to be difficult, naively comparing probability distributions via probability distribution estimation tends to be unreliable in practice. To cope with this problem, we use the state-of-the-art machine learning technique called density ratio estimation: The ratio of probability densities is directly estimated without density estimation, and thus probability distributions can be compared in a reliable way. Through experiments on a walking scene and a tennis match, we demonstrate the usefulness of the proposed approach.
    Download PDF (2412K)
  • Tomohisa Egawa, Naoki Nishimura, Kenichi Kourai
    Type: Regular Paper
    Subject area: Cloud
    2013 Volume 6 Pages 111-120
    Published: 2013
    Released: September 30, 2013
    JOURNALS FREE ACCESS
    In Infrastructure-as-a-Service (IaaS) clouds, users manage the systems inside the provided virtual machines (VMs), called user VMs, through remote management software such as Virtual Network Computing (VNC). For dependability, they often perform out-of-band remote management via the management VM, so that even in the case of system failures inside their VMs, users can still directly access their systems. However, the management VM is not always trustworthy in IaaS. Once outside or inside attackers intrude into the management VM, they can easily eavesdrop on all the inputs and outputs of remote management. To solve this security issue, this paper proposes FBCrypt, which prevents information leakage via the management VM in out-of-band remote management. FBCrypt encrypts the inputs and outputs between a VNC client and a user VM using the virtual machine monitor (VMM), so that sensitive information exchanged between them is protected from the management VM. The VMM intercepts the reads of virtual devices by a user VM and decrypts the inputs, whereas it intercepts the updates of a framebuffer by a user VM and encrypts the pixel data. We have implemented FBCrypt for para-virtualized and fully-virtualized guest operating systems in Xen and TightVNC, and confirmed that no keystrokes or pixel data leaked.
    Download PDF (1331K)
  • Satoshi Yoshida, Takuya Kida
    Type: Regular Paper
    Subject area: Cloud
    2013 Volume 6 Pages 121-127
    Published: 2013
    Released: October 03, 2013
    JOURNALS FREE ACCESS
    In this study, we address the problem of improving variable-length-to-fixed-length codes (VF codes). A VF code is an encoding scheme that uses a fixed-length code, which provides easy access to compressed data. However, conventional VF codes generally have an inferior compression ratio compared with variable-length codes. A method proposed by Uemura et al. in 2010 delivered a good compression ratio comparable with that of gzip, but it was very time consuming. In this study, we propose a new VF coding method that applies a fixed-length code to the set of rules extracted by the Re-Pair algorithm, which was proposed by Larsson and Moffat in 1999. The Re-Pair algorithm is a simple offline grammar-based compression method with a good compression ratio and moderate compression speed. We also present experimental results which demonstrate that our proposed coding method is superior to the existing VF coding method.
    Download PDF (151K)
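The pipeline can be sketched end to end: run Re-Pair (repeatedly replace the most frequent adjacent pair with a fresh symbol), then assign every remaining symbol a fixed-length code. The rule bookkeeping and the real bit-packing of the paper's coder are simplified away here.

```python
# Sketch of Re-Pair followed by fixed-length coding of the final symbols.
from collections import Counter
import math

def repair(seq):
    rules, next_sym = {}, 256              # byte alphabet stays below 256
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                       # no pair worth a rule
            break
        rules[next_sym] = pair             # new grammar rule: sym -> pair
        out, i = [], 0
        while i < len(seq):                # greedy left-to-right rewrite
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym); i += 2
            else:
                out.append(seq[i]); i += 1
        seq, next_sym = out, next_sym + 1
    return seq, rules

def expand(sym, rules):
    if sym not in rules:
        return [sym]
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)

text = list(b"abracadabra abracadabra abracadabra")
compressed, rules = repair(text)
# fixed-length code: every symbol gets the same number of bits
bits_per_symbol = math.ceil(math.log2(len(set(compressed)) + len(rules) + 1))
assert bits_per_symbol < 8
assert len(compressed) < len(text)
# decompression restores the original exactly
assert [c for s in compressed for c in expand(s, rules)] == text
```

Because every output symbol occupies the same number of bits, any position in the compressed stream can be located by arithmetic alone, which is the random-access property that motivates VF codes.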
  • Tetsuro Horikawa, Jin Nakazawa, Kazunori Takashio, Hideyuki Tokuda
    Type: Regular Paper
    Subject area: Application Development Support
    2013 Volume 6 Pages 128-140
    Published: 2013
    Released: November 01, 2013
    JOURNALS FREE ACCESS
    The spread of GPU-accelerated applications on PCs can seriously degrade the user experience, for example by dropping frames during video playback, because applications select processors arbitrarily and compete for resources on the same GPU. In this paper, we propose a processor assignment system for real applications that assigns processors according to condition-based rules without modifying the applications. To demonstrate the feasibility of our concept, we implemented a prototype of the centralized processor assignment mechanism called Torta. Our experiment using eight practical applications has shown that Torta achieves binary-compatible processor switching with an average performance penalty of only 0.2%. In a particular case where a video playback application is executed alongside three other GPU-intensive applications, our method enables users to enjoy video playback at 60 frames per second (FPS), whereas the FPS drops to 14 without the mechanism. This paper presents the design and implementation of Torta on Windows 7 and concludes that our mechanism increases the efficiency of computational resource usage on PCs and thus improves the overall user experience.
    Download PDF (1734K)
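The idea of condition-based assignment rules can be shown in a toy form: rules map an application's observed state to a processor without touching the application itself. The rule shapes and application attributes below are invented for illustration; Torta's actual policy mechanism is described in the paper.

```python
# Toy condition-based processor assignment: ordered rules, first match wins.

RULES = [
    # (condition, assigned processor)
    (lambda app: app["interactive"] and app["fps_sensitive"], "GPU"),
    (lambda app: app["gpu_load_estimate"] > 0.5,              "CPU"),
    (lambda app: True,                                        "GPU"),  # default
]

def assign(app):
    for condition, processor in RULES:
        if condition(app):
            return processor

video = {"interactive": True,  "fps_sensitive": True,  "gpu_load_estimate": 0.2}
batch = {"interactive": False, "fps_sensitive": False, "gpu_load_estimate": 0.9}

# the FPS-sensitive player keeps the GPU; the heavy batch job is diverted
# to the CPU so it cannot starve the player
assert assign(video) == "GPU"
assert assign(batch) == "CPU"
```

Centralizing such rules outside the applications is what lets the mechanism stay binary-compatible: no application needs to know it was reassigned.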