Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 3, Issue 2
Computing
  • Yuki Chiba, Takahito Aoto, Yoshihito Toyama
    2008 Volume 3 Issue 2 Pages 211-224
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Program transformation by templates (Huet and Lang, 1978) is a technique to improve the efficiency of programs. In this technique, programs are transformed according to a given program transformation template. To enhance the variety of program transformation, it is important to introduce new transformation templates. To the best of our knowledge, however, few studies discuss the construction of transformation templates. Chiba et al. (2006) proposed a framework for program transformation by templates based on term rewriting, with automated verification of its correctness. Based on this framework, we propose a method that automatically constructs transformation templates from similar program transformations. The key idea of our method is second-order generalization, an extension of Plotkin's first-order generalization (1969). We give a second-order generalization algorithm and prove its soundness. We then report on an implementation of the generalization procedure and an experiment on the construction of transformation templates.
    Download PDF (455K)
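    A reference point for the entry above: the sketch below implements Plotkin's first-order generalization (anti-unification), which the paper's method extends to second order; the second-order part, generalizing function symbols as well, is not shown. The term encoding is an assumption made for illustration.

```python
# Plotkin-style first-order generalization (least general generalization).
# Terms are tuples ('f', arg1, ...) for applications and plain strings for
# constants or variables; this encoding is illustrative, not the paper's.

def lgg(s, t, subst=None, fresh=None):
    """Least general generalization of terms s and t."""
    if subst is None:
        subst, fresh = {}, iter(f"X{i}" for i in range(1_000_000))
    if s == t:
        return s
    # Same function symbol and arity: descend into the arguments.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, subst, fresh)
                               for a, b in zip(s[1:], t[1:]))
    # Disagreement pair: map it to one fresh variable, reused consistently
    # so that repeated disagreements share the same variable.
    if (s, t) not in subst:
        subst[(s, t)] = next(fresh)
    return subst[(s, t)]

print(lgg(('plus', '0', 'y'), ('plus', ('s', '0'), 'y')))
# -> ('plus', 'X0', 'y'), a common template of the two terms
```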
  • Takahito Aoto
    2008 Volume 3 Issue 2 Pages 225-235
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Rewriting induction (Reddy, 1990) is a method for automatically proving inductive theorems of term rewriting systems. Koike and Toyama (2000) extracted an abstract principle of rewriting induction in terms of abstract reduction systems, from which the soundness of the original rewriting induction system can be proved. It was not known, however, whether such an approach could also be adapted to more powerful rewriting induction systems. In this paper, we give a new abstract principle that extends Koike and Toyama's. Using this principle, we show the soundness of a rewriting induction system extended with an inference rule of simplification by conjectures; such rules have been used in many rewriting induction systems. Replacing the underlying rewriting mechanism with ordered rewriting is an important refinement of rewriting induction, as it allows rewriting induction to handle non-orientable equations. Based on the introduced abstract principle, we show that a variant of our rewriting induction system based on ordered rewriting is sound, provided that its base order is ground-total. In this ordered-rewriting variant, the simplification rule extends those of the equational fragments of several major systems from the literature.
    Download PDF (322K)
  • Waihong Ng, Katsuhiko Kakehi
    2008 Volume 3 Issue 2 Pages 236-245
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    We present LCP Merge, a novel algorithm for merging two ordered sequences of strings. LCP Merge substitutes string comparisons with integer comparisons whenever possible, utilizing the longest common prefixes (LCPs) between the strings to reduce the number of character-wise comparisons as well as the number of key accesses. As one application of LCP Merge, we built a string merge sort, LCP Merge sort, by replacing the merging step of recursive merge sort with LCP Merge. When sorting strings, the computational complexity of recursive merge sort tends to exceed O(n lg n) because string comparisons are generally not constant-time and depend on the properties of the strings. LCP Merge sort improves recursive merge sort to the extent that its computational complexity remains O(n lg n) on average. We performed a number of experiments comparing LCP Merge sort with other string sorting algorithms to evaluate its practical performance, and the results showed that LCP Merge sort is efficient even on real-world data.
    Download PDF (556K)
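    The key idea of the entry above in a minimal form: track each head's LCP with the last output string, so that most ordering decisions become single integer comparisons and character scanning starts at the known common prefix. The paper's version carries precomputed LCP arrays through the merge sort; here the values for new heads are recomputed by scanning, which keeps the sketch short at the cost of the bound.

```python
def lcp(s, t, start=0):
    """Length of the longest common prefix of s and t, scanning from `start`."""
    i, n = start, min(len(s), len(t))
    while i < n and s[i] == t[i]:
        i += 1
    return i

def lcp_merge(a, b):
    """Merge sorted string lists a and b; la (lb) is the LCP of the current
    head of a (of b) with the last string written to the output."""
    out, i, j, la, lb = [], 0, 0, 0, 0
    while i < len(a) and j < len(b):
        if la == lb:                      # tie: compare, starting at offset la
            k = lcp(a[i], b[j], start=la)
            take_a = k == len(a[i]) or (k < len(b[j]) and a[i][k] < b[j][k])
        else:                             # longer LCP with last output = smaller
            take_a, k = la > lb, min(la, lb)
        if take_a:
            out.append(a[i]); i += 1
            lb = k                        # b's head vs. the new last output
            la = lcp(a[i], out[-1]) if i < len(a) else 0
        else:
            out.append(b[j]); j += 1
            la = k
            lb = lcp(b[j], out[-1]) if j < len(b) else 0
    return out + a[i:] + b[j:]

print(lcp_merge(["ab", "abcd"], ["abc", "b"]))  # ['ab', 'abc', 'abcd', 'b']
```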
  • Nobutaka Kawaguchi, Hiroshi Shigeno, Ken-ichi Okada
    2008 Volume 3 Issue 2 Pages 246-257
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    In this paper, we propose d-ACTM/VT, a network-based worm detection method that effectively detects hit-list worms using distributed virtual AC tree detection. d-ACTM, proposed earlier, detects a class of hit-list worms called Silent worms in a distributed manner by detecting tree structures whose edges are infection connections. Some undetected infection connections, however, can divide these tree structures into small trees and degrade detection performance. To address this problem, d-ACTM/VT aggregates the divided trees into a single tree, called a virtual AC tree, in a distributed manner, and uses the tree size for detection. Simulation results show that d-ACTM/VT reduces the number of hosts infected before detection by 20% compared to d-ACTM.
    Download PDF (611K)
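    A toy, centralized rendition of the tree-size idea from the entry above: suspicious connections are aggregated with union-find, and an alarm is raised once an aggregated component (standing in for the virtual AC tree) reaches a threshold. The real d-ACTM/VT does this in a distributed manner, and the threshold here is an arbitrary placeholder.

```python
class VirtualTreeDetector:
    """Aggregate suspicious connections and alarm on large components."""

    def __init__(self, alarm_size=10):
        self.parent, self.size, self.alarm_size = {}, {}, alarm_size

    def _find(self, h):
        self.parent.setdefault(h, h)
        self.size.setdefault(h, 1)
        while self.parent[h] != h:
            self.parent[h] = self.parent[self.parent[h]]  # path halving
            h = self.parent[h]
        return h

    def observe(self, src, dst):
        """Record one suspicious connection; True means the merged
        component has reached the alarm threshold."""
        a, b = self._find(src), self._find(dst)
        if a != b:
            if self.size[a] < self.size[b]:
                a, b = b, a
            self.parent[b] = a
            self.size[a] += self.size[b]
        return self.size[a] >= self.alarm_size

det = VirtualTreeDetector(alarm_size=4)
print([det.observe(s, d) for s, d in [(1, 2), (3, 4), (2, 3)]])
# [False, False, True]: two small trees merge into one of alarm size
```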
  • Manabu Hirano, Takeshi Okuda, Suguru Yamaguchi
    2008 Volume 3 Issue 2 Pages 258-271
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Networks everywhere will soon connect innumerable Internet-ready home appliances. A device accepting connections over a network must be able to verify the identity of a connecting device in order to prevent device spoofing and other malicious actions. In this paper, we propose a security mechanism for inter-device communication. We argue that a mechanism distinguishing and binding a device's identity and its ownership information is essential for practical inter-device authentication; in many conventional authentication systems, the relationship between the two is not considered. We therefore propose a novel inter-device authentication framework that guarantees this relationship. Our prototype implementation employs a smart card to securely maintain the device's identity, the ownership information, and the access control rules. The framework efficiently achieves secure inter-device authentication based on the device's identity, and authorization based on the ownership information related to the device. We also show how to apply our smart card system for inter-device authentication to existing standard security protocols.
    Download PDF (1141K)
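    A minimal sketch of the identity/ownership binding from the entry above, with all names and record layouts invented for illustration: authentication verifies the device identity, and authorization then consults access rules keyed by the ownership information bound to that identity (records and rules that the paper keeps on a smart card).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceRecord:
    device_id: str   # verified identity, e.g. backed by a certificate
    owner_id: str    # ownership information bound to that identity

def authorize(record: DeviceRecord, acl: dict, action: str) -> bool:
    """Authorize an action based on the owner bound to the device identity."""
    return action in acl.get(record.owner_id, set())

acl = {"alice": {"read", "control"}, "guest": {"read"}}
print(authorize(DeviceRecord("tv-01", "alice"), acl, "control"))  # True
print(authorize(DeviceRecord("tv-02", "guest"), acl, "control"))  # False
```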
  • Eiji Kawai, Suguru Yamaguchi
    2008 Volume 3 Issue 2 Pages 272-284
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    The performance of a network server is directly influenced by its network I/O management architecture, i.e., its network I/O multiplexing mechanism. Existing benchmark tools focus either on the high-level service performance of network servers that implement specific application-layer protocols or on the low-level communication performance of network paths; neither kind is suitable for evaluating server architectures themselves. In this study, we developed a benchmark tool for network I/O management architectures, implementing five representative mechanisms as interchangeable modules: multi-process, multi-thread, select, poll, and epoll. This modular implementation enables quantitative and fair comparisons among them. Our experimental results on Linux 2.6 revealed that the select-based and poll-based servers had no performance advantage over the others, and that the multi-process and multi-thread servers achieved performance almost equal to that of the epoll-based server.
    Download PDF (620K)
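    To make the event-driven architecture class from the entry above concrete, here is a minimal echo server using Python's selectors module, which picks the best mechanism available at runtime (epoll on modern Linux, otherwise poll or select). This is not the paper's benchmark tool; it merely illustrates one of the five architectures, and it ignores partial-write handling for brevity.

```python
import selectors
import socket

sel = selectors.DefaultSelector()          # epoll on Linux, else poll/select

lsock = socket.socket()
lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
lsock.bind(("127.0.0.1", 9000))
lsock.listen()
lsock.setblocking(False)
sel.register(lsock, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():            # wait for readable sockets
        if key.fileobj is lsock:           # new connection
            conn, _ = lsock.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:                              # readable client socket
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)  # echo back (sketch only)
            else:                          # client closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()
```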
  • Ahmed S. Zekri, Stanislav G. Sedukhin
    2008 Volume 3 Issue 2 Pages 285-300
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    As increasing clock frequencies approach their physical limits, a good approach to enhancing performance is to increase parallelism by integrating more cores as coprocessors to general-purpose processors, handling the different workloads of scientific, engineering, and signal processing applications. In this paper, we propose a many-core matrix processor model consisting of a scalar unit augmented with b×b simple cores tightly connected as a 2D torus matrix unit to accelerate matrix-based kernels. Data load/store is overlapped with computation using a decoupled data access unit that moves b×b blocks of data between memory and the scalar and matrix processing units. The matrix unit mainly processes fine-grained b×b matrix multiply-add (MMA) operations. We formulate the data alignment operations, including matrix transposition and skewing, as MMA operations in order to overlap them with data load/store. Two fundamental linear algebra algorithms are designed and analytically evaluated on the proposed matrix processor: the Level-3 BLAS kernel GEMM, and LU factorization with partial pivoting, the main step in solving linear systems of equations. For the GEMM kernel, the maximum computing speed, measured in FLOPs/cycle, is approached for different matrix sizes n and block sizes b. The speed of the LU factorization for relatively large n ranges from around 50% to 90% of the maximum speed, depending on the model parameters. Overall, the analytical results show the merits of using the matrix unit to accelerate matrix-based applications.
    Download PDF (1955K)
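    The decomposition of GEMM into fine-grained b×b multiply-add (MMA) operations, the workhorse of the matrix unit in the entry above, can be sketched in a few lines; NumPy stands in for the torus of simple cores, and the sizes are illustrative.

```python
import numpy as np

def blocked_gemm(A, B, C, b):
    """C += A @ B expressed as b x b matrix multiply-add (MMA) operations."""
    n = A.shape[0]                    # assume square matrices, b divides n
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # one MMA on b x b blocks: C_ij += A_ik * B_kj
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

n, b = 8, 4
A, B = np.random.rand(n, n), np.random.rand(n, n)
C = blocked_gemm(A, B, np.zeros((n, n)), b)
assert np.allclose(C, A @ B)          # same result, computed in b x b MMA steps
```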
  • Yuki Karasawa, Hideya Iwasaki
    2008 Volume 3 Issue 2 Pages 301-315
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Skeletal parallel programming makes both the development and the parallelization of parallel programs easier. The idea is to abstract generic and recurring patterns in parallel programs as skeletons and provide them as a library whose parallel implementations are transparent to the programmer. SkeTo is a parallel skeleton library that enables programmers to write parallel programs in C++ in a sequential style. However, SkeTo's matrix skeletons assume that a matrix is dense, so they cannot efficiently handle a sparse matrix, which has many zeros, because of duplicated computations and communications of identical values. We solve this problem by re-formalizing the matrix data type to cope with sparse matrices and by implementing a new C++ class for SkeTo with efficient sparse matrix skeletons based on this new formalization. Experimental results show that the new skeletons for sparse matrices perform well compared to the existing skeletons for dense matrices.
    Download PDF (433K)
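    To see why dense skeletons waste work on sparse data (the problem the entry above addresses), compare a compressed sparse row (CSR) matrix-vector product, which touches only the stored nonzeros, so zeros are neither computed on nor communicated. SkeTo's skeletons are C++ templates; this Python is purely illustrative and not the library's interface.

```python
def csr_from_dense(m):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in m:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x, visiting only the nonzeros of A."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for p in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[p] * x[col_idx[p]]
    return y

A = [[0, 2, 0], [0, 0, 0], [1, 0, 3]]
print(csr_matvec(*csr_from_dense(A), [1.0, 1.0, 1.0]))  # [2.0, 0.0, 4.0]
```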
  • Takuo Yonezawa, Yukiyoshi Kameyama
    2008 Volume 3 Issue 2 Pages 316-326
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    We study the control operators “control” and “prompt”, which manipulate parts of continuations, that is, delimited continuations. They are similar to the well-known control operators “shift” and “reset”, but differ in that the former pair is dynamic while the latter is static. In this paper, we introduce a static type system for “control” and “prompt” that does not use recursive types. We design our type system based on the dynamic CPS transformation recently proposed by Biernacki, Danvy, and Millikin. We also introduce let-polymorphism into our type system, and show that it satisfies several important properties, including strong type soundness.
    Download PDF (278K)
  • Takashi Kaburagi, Takashi Matsumoto
    2008 Volume 3 Issue 2 Pages 327-340
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    We present a novel algorithm to predict transmembrane regions from a primary amino acid sequence. Previous studies have shown that the Hidden Markov Model (HMM) is one of the most powerful tools for predicting transmembrane regions; however, a conceptual drawback of the standard HMM is that the state duration, i.e., the time for which the hidden dynamics remains in a particular state, follows a geometric distribution. Real data, however, do not always follow such a distribution. The proposed algorithm copes with this problem using a Generalized Hidden Markov Model (GHMM), an extension of the HMM in which the state duration probability can be any discrete distribution, including the geometric distribution. Our algorithm employs a state duration probability based on a Poisson distribution. Instead of 20-letter symbol sequences, we consider the two-dimensional vector trajectory consisting of the hydropathy index and the charge associated with each amino acid. A Monte Carlo method (forward/backward sampling) is adopted for the transmembrane region prediction step. Prediction accuracies on publicly available data sets show that the proposed algorithm yields reasonably good results compared with existing algorithms.
    Download PDF (773K)
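    The modeling difference exploited in the entry above is easy to state in code: an HMM self-loop yields a geometric duration pmf whose mode is always 1, whereas an explicit Poisson duration peaks near its mean, which better matches the length of transmembrane segments. Shifting the Poisson so that durations satisfy d >= 1 is our own convention here, not necessarily the paper's.

```python
import math

def geometric_duration(d, p):
    """P(stay exactly d steps) for an HMM self-loop with probability p."""
    return (p ** (d - 1)) * (1 - p)

def poisson_duration(d, lam):
    """Explicit GHMM duration pmf: Poisson with mean lam, shifted to d >= 1."""
    return math.exp(-lam) * lam ** (d - 1) / math.factorial(d - 1)

# The geometric pmf decreases monotonically, so its most likely duration is 1;
# the Poisson pmf peaks near lam, e.g. around a typical helix length of ~20.
for d in (1, 5, 10, 20, 30):
    print(d, round(geometric_duration(d, 0.95), 4),
          round(poisson_duration(d, 20.0), 4))
```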
  • Morihiro Hayashida, Tatsuya Akutsu, Hiroshi Nagamochi
    2008 Volume 3 Issue 2 Pages 341-350
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    This paper proposes a novel clustering method based on graph theory for the analysis of biological networks. In this method, each biological network is treated as an undirected graph whose edges are weighted by similarities of nodes. Maximal components, defined in terms of edge connectivity, are then computed, and the nodes are partitioned into clusters by selecting disjoint maximal components. The proposed method was applied to the clustering of protein sequences and compared with conventional clustering methods. The resulting clusters were evaluated using P-values for GO (Gene Ontology) terms; the average P-values for the proposed method were better than those for the other methods.
    Download PDF (437K)
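    The connectivity primitive behind the entry above can be sketched with NetworkX's k-edge-connected components on a toy graph: a k-edge-connected set stays connected after removing any k-1 edges. The paper additionally weights edges by node similarity and selects disjoint maximal components, which is not shown here.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # triangle 1
                  ("c", "d"),                           # bridge
                  ("d", "e"), ("e", "f"), ("d", "f")])  # triangle 2

for k in (1, 2):
    print(k, sorted(map(sorted, nx.k_edge_components(G, k=k))))
# k=1: one component; k=2: the bridge is too weak, so the two triangles
# ['a', 'b', 'c'] and ['d', 'e', 'f'] emerge as 2-edge-connected clusters.
```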
  • Masanori Kakuta, Shugo Nakamura, Kentaro Shimizu
    2008 Volume 3 Issue 2 Pages 351-361
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Protein-protein interactions play an important role in a number of biological activities. We developed two methods of predicting protein-protein interaction site residues: one uses only sequence information, and the other uses both sequence and structural information. We used a support vector machine (SVM) with a position-specific scoring matrix (PSSM) as sequence information and the accessible surface area (ASA) of polar and non-polar atoms as structural information. The SVM is used in two stages. In the first stage, interaction residues are predicted from the PSSMs of sequentially neighboring residues, or from the PSSMs and ASAs of spatially neighboring residues. The second stage acts as a filter to refine the prediction results. The recall and precision of the predictor using both sequence and structural information are 73.6% and 50.5%, respectively. We found that using the PSSM instead of amino acid appearance frequencies was the main factor in the improvement of our methods.
    Download PDF (614K)
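    A sketch of the first-stage, sequence-only predictor from the entry above, under our reading of the abstract: the feature vector of a residue concatenates the PSSM rows in a window around it, and an SVM is trained on these vectors. Window size, padding, and all data below are synthetic stand-ins; the structure-based variant would append ASA values, and the second-stage filter is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(pssm, i, w=4):
    """Concatenate PSSM rows for residues i-w .. i+w, zero-padded at the ends."""
    n, d = pssm.shape
    rows = [pssm[j] if 0 <= j < n else np.zeros(d)
            for j in range(i - w, i + w + 1)]
    return np.concatenate(rows)

rng = np.random.default_rng(0)
pssm = rng.normal(size=(60, 20))        # toy PSSM: 60 residues x 20 scores
labels = rng.integers(0, 2, size=60)    # toy interface / non-interface labels

X = np.array([window_features(pssm, i) for i in range(len(pssm))])
clf = SVC(kernel="rbf").fit(X, labels)  # stage 1; a second SVM would filter
print(clf.predict(X[:5]))
```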
  • Takayuki Higo, Keiki Takadama
    2008 Volume 3 Issue 2 Pages 362-374
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    This paper proposes a novel method, Hierarchical Importance Sampling (HIS), that can be used in place of population convergence in evolutionary optimization based on probability models (EOPM), such as estimation of distribution algorithms and cross-entropy methods. In HIS, multiple populations with different diversities are maintained simultaneously, and the probability model of one population is built through importance sampling, mixing in the other populations. This mechanism allows populations to escape from local optima. Experimental comparisons reveal that HIS outperforms general EOPM.
    Download PDF (444K)
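    The importance-sampling step of the entry above can be sketched in one dimension with Gaussian population models: the parameters of one population's model are refit from the pooled samples of all populations, each sample weighted by the ratio of the target density to the mixture density. This is a deliberate simplification; the full EOPM loop and the maintenance of the diversity hierarchy are omitted.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def refit_by_importance_sampling(samples_per_pop, params, target):
    """Refit (mu, sigma) of population `target` from all populations' samples."""
    xs = np.concatenate(samples_per_pop)
    mixture = np.mean([normal_pdf(xs, m, s) for m, s in params], axis=0)
    w = normal_pdf(xs, *params[target]) / mixture   # importance weights
    w /= w.sum()                                    # self-normalize
    mu = np.sum(w * xs)
    sigma = np.sqrt(np.sum(w * (xs - mu) ** 2))
    return mu, sigma

rng = np.random.default_rng(1)
params = [(0.0, 0.3), (0.0, 3.0)]      # a low- and a high-diversity population
samples = [rng.normal(m, s, 200) for m, s in params]
print(refit_by_importance_sampling(samples, params, target=0))
```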
Media (processing) and Interaction
  • Saori Tanaka, Kaoru Nakazono, Masafumi Nishida, Yasuo Horiuchi, Akira ...
    2008 Volume 3 Issue 2 Pages 375-384
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Sign language is a visual language in which the main articulators are the hands, torso, head, and face. For simultaneous interpreters of Japanese Sign Language (JSL) and spoken Japanese, it is very important to recognize not only hand movements but also prosody, such as head movement, eye gaze, posture, and facial expression, because prosody carries grammatical rules for representing case and modification relations in JSL. The goal of this study is to introduce an examination called MPR (Measurement of Prosody Recognition) and to demonstrate that it can be an indicator of other general skills of interpreters. For this purpose, we conducted two experiments: the first studies the relationship between an interpreter's experience and the performance score on MPR (Experiment 1), and the second investigates the specific skills that can be estimated by MPR (Experiment 2). The data in Experiment 1 came from four interpreters with more than one year of experience and four more with less than one year; the mean MPR accuracy of the more experienced group was higher than that of the less experienced group. The data in Experiment 2 came from three high-MPR and three low-MPR interpreters. Two hearing subjects and three deaf subjects evaluated their skill in terms of speech or sign interpretation, reliability of interpretation, expeditiousness, and the subjective sense of accomplishment in a pizza-ordering task. The two experiments indicate that MPR could be useful for estimating whether an interpreter is sufficiently experienced to interpret from sign language to spoken Japanese, and whether they can interpret expeditiously without making deaf or hearing clients anxious. We end the paper with conclusions and suggestions for future work.
    Download PDF (882K)
  • Ikuo Kobayashi, Koichi Furukawa
    2008 Volume 3 Issue 2 Pages 385-398
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    We investigate an Abductive Logic Programming (ALP) framework for finding appropriate hypotheses that explain both professional and amateur skill performance, and for diagnosing faulty amateur performance. In our approach, we provide two kinds of rules: motion integrity constraints and performance rules. Motion integrity constraints are essential for formulating skillful performance, as they prevent the generation of hypotheses that contradict the constraints. Performance rules formulate the problem of achieving difficult physical tasks in terms of preferred body movements as well as preferred muscle usage and posture. We also formulate the development of skills in terms of default logic, treating basic skills as defaults and advanced skills as exceptions. Here we introduce preferences among integrity constraints: hard integrity constraints must always be satisfied, while soft integrity constraints can be ignored if necessary. Finally, we apply this framework to realize skill diagnosis.
    Download PDF (340K)
  • Masaki Suwa
    2008 Volume 3 Issue 2 Pages 399-408
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Based on the conjecture that the acquisition of embodied expertise occurs through interactions among the learner's verbal thoughts, perception, physical movements, and the surrounding environment, Suwa [2005b] argued for the significance of dealing with subjective data, such as verbalized thoughts, in research on embodied skills, and advocated a theory of meta-cognitive verbalization. The present paper, based on empirical findings from playing darts, provides a cognitive model of embodied meta-cognitive verbalization. The model theorizes which cognitive processes are involved in embodied meta-cognitive verbalization, and how these processes change a learner's thoughts, perception, actions, and self-awareness of them, thereby promoting the acquisition of embodied expertise.
    Download PDF (358K)
  • Kazutoshi Kudo, Tatsuyuki Ohtsuki
    2008 Volume 3 Issue 2 Pages 409-420
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Human movements are produced in variable external and internal environments. Because of this variability, the same motor command can result in quite different movement patterns; therefore, to produce skilled movements, humans must coordinate the variability rather than try to exclude it. In addition, because human movements are produced by redundant and complex systems, combinations of variability are observed at different anatomical and physiological levels. In this paper, we introduce our research on human movement variability, which shows remarkable coordination among components and between organism and environment. We also introduce nonlinear dynamical models that describe a variety of movements as the self-organization of a dynamical system, because the dynamical systems approach is a major candidate for understanding the principles underlying the organization of varying systems with a huge number of degrees of freedom.
    Download PDF (720K)
  • Kevin J. Binkley, Masafumi Hagiwara
    2008 Volume 3 Issue 2 Pages 421-431
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    In this study, we propose the stop-and-go particle swarm optimization (SG-PSO) algorithm, a new method to dynamically adapt the PSO population size. SG-PSO takes advantage of the fact that in practical problems there is a limit to the required accuracy of the optimization result: particles are stopped when they have approximately reached the required accuracy, and stopped particles do not consume valuable function evaluations. The information contained in a stopped particle's state is not lost; as the swarm evolves, the particle may become active again, acting as a memory for the swarm. As an extension of SG-PSO, we also propose the mixed SG-PSO (MSG-PSO) algorithm, in which each particle is given its own required accuracy, so that global and local search can be balanced through the accuracy settings. Both algorithms are straightforward modifications of the standard PSO algorithm. SG-PSO shows strong improvements over standard PSO on multimodal benchmark functions from the PSO literature, with approximately equivalent results on unimodal benchmark functions, while MSG-PSO outperforms standard PSO on both.
    Download PDF (381K)
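    A condensed sketch of the stop-and-go idea from the entry above, on global-best PSO. The abstract does not give the exact stop and restart criteria, so the rules below (stop a particle whose improvement falls under the required accuracy eps; restart it if the global best later moves away) are illustrative guesses; the point is that stopped particles keep their state but cost no function evaluations.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def sg_pso(f, dim=5, n=20, iters=200, eps=1e-4, w=0.72, c1=1.49, c2=1.49):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pval)].copy()
    active = np.ones(n, dtype=bool)
    for _ in range(iters):
        for i in range(n):
            if not active[i]:
                # "go": wake the particle if the swarm's best left it behind
                active[i] = np.linalg.norm(g - pbest[i]) > eps
                continue
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] += v[i]
            fx = f(x[i])                  # only active particles pay this cost
            if fx < pval[i]:
                if pval[i] - fx < eps:    # improvement below required accuracy
                    active[i] = False     # "stop": keep state, save evaluations
                pbest[i], pval[i] = x[i].copy(), fx
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

print(sg_pso(sphere))
```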
  • Hanno Ackermann, Kenichi Kanatani
    2008 Volume 3 Issue 2 Pages 432-442
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    We accelerate the time-consuming iterations for projective reconstruction, a key component of self-calibration for computing 3-D shapes from feature point tracking over a video sequence. We first summarize the algorithms of the primal and dual methods for projective reconstruction. Then, we replace the eigenvalue computation in each step by the power method. We also accelerate the power method itself. Furthermore, we introduce the SOR method for accelerating the subspace fitting involved in the iterations. Using simulated and real video images, we demonstrate that the computation sometimes becomes several thousand times faster.
    Download PDF (1044K)
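    The core replacement described in the entry above, a full eigendecomposition swapped for the power method, is a few lines in NumPy; in the iterative setting it is warm-started from the previous step's eigenvector, since successive estimates change little. The matrix below is generic, not one of the paper's projective-reconstruction matrices.

```python
import numpy as np

def power_method(A, v0=None, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of a symmetric matrix by repeated multiplication."""
    v = np.random.default_rng(0).normal(size=A.shape[0]) if v0 is None else v0
    v = v / np.linalg.norm(v)
    for _ in range(max_iter):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:   # converged (positive dominant eigenvalue)
            break
        v = w
    return v, float(v @ A @ v)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v, lam = power_method(A)
print(lam)   # ~4.618, the larger eigenvalue; pass v0 to warm-start later calls
```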
Computer Networks and Broadcasting
  • Minghua Pei, Kotaro Nakayama, Takahiro Hara, Shojiro Nishio
    2008 Volume 3 Issue 2 Pages 443-453
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Since the Semantic Web is increasing in the size and variety of its resources, it is difficult for users to find the information they really need, and an efficient and precise retrieval method that requires no explicit specification of Web resources is necessary. In this paper, we propose a novel approach that integrates four processes for Web resource categorization. These processes extract both the explicit relations obtained from ontologies in the traditional way and the potential relations inferred from existing ontologies, by addressing new challenges such as extracting important class names, using WordNet relations, and detecting the ways in which Web resources are described. We evaluated the effectiveness of our approach by applying the categorization method to a Semantic Web search system, and confirmed that the proposed method achieves a notable improvement in categorizing Web resources based on incomplete ontologies.
    Download PDF (356K)
  • Mitsuo Hayasaka, Tetsuya Miki
    2008 Volume 3 Issue 2 Pages 454-463
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    Peer-to-peer multimedia streaming is expected to grow rapidly in the near future. Packet losses during transmission are a serious problem for streaming media, as they degrade the quality of service (QoS). Forward Error Correction (FEC) is a promising technique for recovering lost packets and improving the QoS of streaming media. However, as the number of streaming sessions increases, FEC may degrade the QoS of all streams because of the increased congestion caused by the FEC overhead. Although streaming media can be categorized into live and on-demand contents, conventional FEC methods apply the same FEC scheme to both without distinguishing them. In this paper, we first clarify the ranges in which conventional FEC and retransmission schemes work well. We then propose a novel FEC method that distinguishes the two types of streaming media and is applied to on-demand streaming contents. It overcomes the adverse effect of the FEC overhead on on-demand streaming contents and thereby reduces the packet loss caused by that overhead; as a result, the packet loss ratios of both live and on-demand streaming contents improve. Moreover, the method provides QoS according to users' requirements and environments by using layered FEC coding, so packet losses are recovered at each end host and do not affect next-hop streaming. Numerical analyses show that our proposed method greatly improves the packet loss ratio compared to the conventional method.
    Download PDF (558K)
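    Packet-level FEC in its simplest form, for the entry above: one XOR parity packet per block of data packets recovers any single loss in the block without retransmission. Practical schemes, including layered FEC, use stronger codes such as Reed-Solomon; the XOR case just makes the overhead/recovery trade-off concrete.

```python
def xor_parity(packets):
    """Parity packet: bytewise XOR of equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet (the None entry) from the parity."""
    rebuilt = bytearray(parity)
    for p in received:
        if p is not None:
            for i, byte in enumerate(p):
                rebuilt[i] ^= byte
    return bytes(rebuilt)

block = [b"pkt0", b"pkt1", b"pkt2"]
par = xor_parity(block)
print(recover([b"pkt0", None, b"pkt2"], par))   # b'pkt1' recovered, no resend
```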
Information Systems and Applications
  • Wei Liu, Hideyuki Tanaka, Kanta Matsuura
    2008 Volume 3 Issue 2 Pages 464-478
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    This paper presents a series of empirical analyses of information-security investment based on a reliable survey of Japanese enterprises. After describing our methodology for representing the vulnerability level with respect to the threat of computer viruses, we examine the relation between the vulnerability level and the effects of information-security investment. Although this first analysis provides only weak empirical support for the investment model, it shows that the representation methodology is worth pursuing in empirical work in this field. We then examine the relations between the probability of computer virus incidents and the adoption of a set of information-security countermeasures, and find that “Defense Measure”, combined with “Information Security Policy” and “Human Cultivation”, has a remarkable effect on virus incidents. Finally, we analyze the effect of continuous investment in the three countermeasures; the results suggest that virus incidents were significantly reduced in enterprises that adopted all three both in 2002 and in 2003.
    Download PDF (1777K)
  • Kazuya Ueki, Tetsunori Kobayashi
    2008 Volume 3 Issue 2 Pages 479-485
    Published: 2008
    Released on J-STAGE: June 15, 2008
    JOURNAL FREE ACCESS
    To reduce the error rate in gender classification, we propose an integration framework that uses conventional facial images together with neck images. First, images are separated into facial and neck regions, and features are extracted from monochrome, color, and edge images of both regions. Second, we use Support Vector Machines (SVMs) to classify gender from each individual feature. Finally, we reclassify gender by treating the six distances from the optimal separating hyperplanes as a 6-dimensional vector. Experimental results show a 28.4% relative reduction in error over the baseline monochrome facial image approach, which until now had been considered the most accurate.
    Download PDF (4762K)
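    The two-stage wiring of the entry above can be sketched with scikit-learn: six first-stage SVMs (standing in for the monochrome, color, and edge features of the face and neck regions) each emit a signed distance to their separating hyperplane via decision_function, and a second-stage SVM classifies the resulting 6-dimensional vectors. All data below is synthetic; only the architecture follows the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)                 # toy gender labels

# Six toy feature sets standing in for {mono, color, edge} x {face, neck}.
features = [rng.normal(loc=y[:, None] * 0.5, scale=1.0, size=(n, 30))
            for _ in range(6)]

stage1 = [SVC(kernel="rbf").fit(X, y) for X in features]
distances = np.column_stack([clf.decision_function(X)
                             for clf, X in zip(stage1, features)])

stage2 = SVC(kernel="rbf").fit(distances, y)   # fuse the six distances
print(stage2.score(distances, y))              # training accuracy of the fusion
```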