IPSJ Digital Courier
Online ISSN : 1349-7456
ISSN-L : 1349-7456
Volume 2
Displaying 1-50 of 75 articles from this issue
  • Erwan Loisant, José Martinez, Hiroshi Ishikawa, Kaoru Katayama
    Article type: Research Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 1-13
    Published: 2006
    Released on J-STAGE: January 11, 2006
    JOURNAL FREE ACCESS
Going one step beyond feedback querying as a way of integrating users into the search process, navigation is a more recent approach to finding images in a large image collection using content-based information. Rather than issuing queries or entering a feedback-querying loop that would be heavy in terms of both human-computer interaction and processing time, navigating a pre-computed data structure is easier and smoother for the user. In particular, we found Galois lattices to be convenient structures for this purpose. However, while the properties extracted from images are usually real-valued data, a navigation structure most often has to deal with binary links from one image (or group of images) to another. A trivial way to obtain a binary relationship from real-valued data is to apply a threshold, but this solution not only loses information but also tends to create sparse areas in the lattice (see the sketch below). In this paper, we propose a technique for incrementally building a Galois lattice from real-valued properties that takes the existing structure into account, thus limiting the size of the lattice by avoiding the creation of sparse nodes. Experiments showed that this technique produces a navigation structure of better quality, making the search process faster and more efficient and thus improving the user's experience.
    Download PDF (953K)
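The thresholding step that the abstract argues against is easy to make concrete. The sketch below binarizes real-valued image properties against a single cut to obtain the binary object/attribute context from which a Galois (concept) lattice would be built; the property names, values, and threshold are all hypothetical, and the paper's incremental, structure-aware construction is not reproduced.

```python
# Naive binarization of real-valued image properties into a formal context.
# A Galois (concept) lattice is built from such a binary object/attribute
# relation; the paper's point is that a fixed cut like this loses information.

THRESHOLD = 0.5  # hypothetical global cut

# hypothetical real-valued properties extracted from three images
properties = {
    "img1": {"redness": 0.91, "texture": 0.12, "brightness": 0.55},
    "img2": {"redness": 0.49, "texture": 0.77, "brightness": 0.51},
    "img3": {"redness": 0.52, "texture": 0.80, "brightness": 0.08},
}

def binarize(props, threshold=THRESHOLD):
    """Map each image to the set of attributes whose value clears the cut."""
    return {img: {a for a, v in vals.items() if v >= threshold}
            for img, vals in props.items()}

context = binarize(properties)
for img, attrs in context.items():
    print(img, sorted(attrs))
# img2's redness (0.49) falls just below the cut while img3's (0.52) clears
# it, although the two values are nearly equal -- the information loss that
# motivates building the lattice directly from the real-valued properties.
```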
  • Bat-Odon Purevjii, Masayoshi Aritsugi, Yoshinari Kanamori, Cherri M. P ...
    Article type: Research Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 14-24
    Published: 2006
    Released on J-STAGE: January 11, 2006
    JOURNAL FREE ACCESS
    We present a flexible framework for evaluating data-dependent access control systems. Based on logical formalism, the framework is general enough to simulate all existing systems. In this paper, we analyze and compare currently available access control systems and demonstrate how they can be simultaneously extended and simplified using our framework. A series of examples and a cross-comparative analysis clearly demonstrate the advantages of our framework over previous methods.
    Download PDF (267K)
  • Hidehisa Takamizawa, Kazuyuki Nakajima, Masayoshi Aritsugi
    Article type: Data Models and Database Design
    Subject area: Regular Paper
    2006 Volume 2 Pages 25-38
    Published: 2006
    Released on J-STAGE: January 11, 2006
    JOURNAL FREE ACCESS
Effective use of the cache can yield good performance. In this paper, Array-Based Cache-conscious trees (ABC trees for short) are proposed to achieve good performance for update operations as well as searches. The logical structure and manipulation of an ABC tree are similar to those of a B+-tree, but the array holding an ABC tree is allocated up front as if the tree were complete. This gives the core of the tree contiguous memory space and reduces the number of pointers in it; as a result, the key capacity of a node increases and the cache is used effectively (a simplified sketch follows this entry). We also present an enhancement of ABC trees that increases the capacity of an ABC tree with overflow nodes, and we describe how to decide, for performance, whether to create an overflow node when a node overflows. Experimental studies show that ABC trees can perform well under certain conditions.
    Download PDF (504K)
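A minimal sketch of the core layout idea, under simplifying assumptions: the tree is stored in one contiguous array allocated as if the tree were complete, so children are found by index arithmetic instead of pointers. ABC trees apply this to B+-tree-like multiway nodes; the binary layout below only illustrates why dropping pointers leaves more room for keys and improves cache behavior.

```python
# Searching a tree stored as one contiguous array with implicit children at
# slots 2*i+1 and 2*i+2 (no pointers).  ABC trees apply this layout to
# B+-tree-like multiway nodes; this binary simplification only illustrates
# why a pointer-free, contiguous core uses the cache effectively.

def build_complete(sorted_keys):
    """Assign sorted keys to the array in in-order order, yielding a BST."""
    n = len(sorted_keys)
    tree, it = [None] * n, iter(sorted_keys)
    def inorder(slot):
        if slot < n:
            inorder(2 * slot + 1)
            tree[slot] = next(it)
            inorder(2 * slot + 2)
    inorder(0)
    return tree

def search(tree, key):
    slot = 0
    while slot < len(tree):            # descend by index arithmetic only
        if key == tree[slot]:
            return slot
        slot = 2 * slot + 1 if key < tree[slot] else 2 * slot + 2
    return -1

tree = build_complete(list(range(1, 16)))      # 15 keys -> a full tree
print(tree[:7], search(tree, 11) >= 0, search(tree, 99) == -1)
```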
  • Amril Syalim, Toshihiro Tabata, Kouichi Sakurai
    Article type: Network Security
    Subject area: Technical Note
    2006 Volume 2 Pages 39-44
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
A database service provider (DSP) offers an Internet service for maintaining data so that users can access their data anytime and anywhere via the Internet. The DSP model involves several challenges, including data confidentiality. In this paper we propose a Usage Control (UCON) model and architecture that can be enforced to support data confidentiality in the DSP model. UCON is a unified model of access control that was recently introduced as a next-generation access control model. The basic idea of our UCON model for DSPs is the separation of the control domain in a DSP into two parts: a database provider domain and a database user domain. In the database provider domain, the access control system controls users' access to database services; in the database user domain, it controls other users' access to a user's database. Through this separation, we can define an access control policy for each domain independently.
    Download PDF (310K)
  • Takao Yamashita
    Article type: Ubiquitous Computing
    Subject area: Regular Paper
    2006 Volume 2 Pages 45-56
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
A scalable load distribution method for view divergence control of statistically defined data freshness in replicated database systems is proposed. The method enables many mobile and fixed client nodes to continue their processes even when they lose network connectivity or lack sufficient bandwidth to meet application requirements, which is very likely for mobile client nodes. This is achieved by renewing the copies of data in client nodes while they still have connectivity, so that the copies remain sufficiently fresh to meet application requirements during disconnection. Load distribution for view divergence control is achieved by determining multiple sets of replicas from which client nodes retrieve data values through read transactions; a client node then calculates the data value reflecting the updates that have already reached one or more replicas in the determined set. We show that our method reduces the load of processing read transactions to less than about 1/40 of that of the original method while improving data freshness to about 2/5 of the maximum update delay in a large-scale network.
    Download PDF (587K)
  • Toshihiro Shikama, Tatsuji Munaka, Takashi Watanabe, Tadanori Mizuno
    Article type: Protocols
    Subject area: Regular Paper
    2006 Volume 2 Pages 57-69
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
A link layer protocol based on SR (Selective-Repeat) ARQ (Automatic Repeat reQuest) is required to achieve high performance over a lossy, large-delay link. In spite of its effectiveness, SR ARQ suffers from large resequencing delay and bursty packet output due to its resequencing function. To mitigate these problems, this paper proposes a scheme in which resequencing is not performed by the SR ARQ receiver but by TCP, end to end. The proposed scheme has advantages of scalability with regard to link bandwidth, the number of TCP connections on the link, and the size of the TCP window. It also preserves the end-to-end semantics of TCP and requires no modification to existing TCP. To suppress unnecessary retransmissions by TCP, the SR ARQ sender tracks retransmitted TCP data packets and drops the duplicate ACKs caused by the out-of-order packets that result from the lack of resequencing in SR ARQ (see the sketch below). The effectiveness of the proposed scheme is evaluated by simulations, which show that it attains high performance in comparison with other schemes.
    Download PDF (801K)
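A heavily simplified sketch of the ACK-filtering idea, with invented data structures: the link-layer sender remembers which TCP sequence numbers it is currently repairing by ARQ retransmission and suppresses the duplicate ACKs that the resulting out-of-order arrivals would otherwise feed back to the TCP sender. The real scheme's packet inspection and state management are not specified here.

```python
# Hedged sketch of the dup-ACK filter described in the abstract.  The notion
# of a "pending hole" and the field names are illustrative assumptions.

class DupAckFilter:
    def __init__(self):
        self.pending_holes = set()   # TCP seqs being retransmitted by ARQ

    def on_arq_retransmit(self, tcp_seq):
        self.pending_holes.add(tcp_seq)

    def on_arq_delivered(self, tcp_seq):
        self.pending_holes.discard(tcp_seq)

    def forward_ack(self, ack_no, is_duplicate):
        # Drop a duplicate ACK only while the hole that caused it is still
        # being repaired by the link layer; real losses keep signaling TCP.
        if is_duplicate and any(seq >= ack_no for seq in self.pending_holes):
            return False   # suppress
        return True        # pass through to the TCP sender

f = DupAckFilter()
f.on_arq_retransmit(3000)
print(f.forward_ack(3000, is_duplicate=True))   # False: suppressed
f.on_arq_delivered(3000)
print(f.forward_ack(3000, is_duplicate=True))   # True: no pending repair
```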
  • Shintaro Ueda, Shin-ichiro Kaneko, Nobutaka Kawaguchi, Hiroshi Shigeno ...
    Article type: Network Security
    Subject area: Regular Paper
    2006 Volume 2 Pages 70-80
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
Real-time streaming services are attracting attention. However, an adversary can compromise their safety through data tampering, spoofing, and repudiation. In this paper we propose a real-time Stream Authentication scheme for Video streams called SAVe. Each packet in the stream is authenticated individually to cope with the packet loss seen in UDP-based streaming. The amount of redundancy distributed to each frame is adjusted according to the frame's importance, taking into account special characteristics of video such as the differing importance of, and dependencies between, frames. Since temporal and spatial compression techniques are adopted for video stream encoding, SAVe is efficient at making important frames robust to packet loss. The simulation results show that the authentication rate is, on average, approximately equivalent to that of previously proposed schemes, while the playing rate improves by 50% over those schemes when the packet loss rate is 20%.
    Download PDF (325K)
  • Toshihiro Shikama, Shoichiro Seno, Takashi Watanabe, Tadanori Mizuno
    Article type: Network Quality and Control
    Subject area: Regular Paper
    2006 Volume 2 Pages 81-93
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
An SR (Selective-Repeat) ARQ (Automatic Repeat reQuest) protocol is used to recover effectively from packet errors over a low-quality communication channel. SR ARQ, however, suffers from large delay due to the resequencing of received packets. To mitigate this problem, the PFRS (Per-Flow ReSequencing) scheme was proposed, in which resequencing is performed independently for each upper-layer flow, while detection of lost packets and the associated retransmissions are performed over all flows multiplexed on SR ARQ (a sketch follows this entry). This paper models the SR ARQ protocol, in which the maximum number of retransmissions is limited, as a collection of simple stop-and-wait protocols, and shows numerical results for the delay distribution of retransmission and resequencing. The validity of the analysis is confirmed by comparing the numerical calculations with simulation results. The results prove the effectiveness of the PFRS scheme when the number of flows over SR ARQ is large.
    Download PDF (684K)
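A minimal sketch of per-flow resequencing as the abstract describes it: loss recovery happens on the single link-level sequence (not shown), while in-order delivery is restored per upper-layer flow, so one flow's retransmission delay does not block the others. Flow identifiers and packet tuples are illustrative.

```python
# Per-flow resequencing: each flow has its own expected-sequence counter and
# pending buffer, so a hole in flow A does not hold up packets of flow B.

from collections import defaultdict

class PerFlowResequencer:
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected per-flow sequence
        self.pending = defaultdict(dict)   # flow -> {seq: payload}

    def receive(self, flow, seq, payload):
        """Return the packets of `flow` that become deliverable in order."""
        self.pending[flow][seq] = payload
        out = []
        while self.next_seq[flow] in self.pending[flow]:
            out.append(self.pending[flow].pop(self.next_seq[flow]))
            self.next_seq[flow] += 1
        return out

r = PerFlowResequencer()
print(r.receive("A", 1, "a1"))   # [] : A0 still missing, flow A is blocked
print(r.receive("B", 0, "b0"))   # ['b0'] : flow B is not held up by flow A
print(r.receive("A", 0, "a0"))   # ['a0', 'a1'] : A's hole is filled
```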
  • Arjan Durresi, Leonard Barolli, Mukundan Sridharan, Sriram Chellappan, ...
    Article type: Network Quality and Control
    Subject area: Regular Paper
    2006 Volume 2 Pages 94-107
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
Efficient bandwidth allocation and low delays remain important goals, especially in high-speed networks. Existing end-to-end congestion control schemes (such as TCP+AQM/RED) have significant limitations in achieving these goals. In this paper, we present a new and simple congestion control scheme, called Load Early Detection (LED), that achieves high link efficiency and low persistent queue length. The major novelty of LED is that it gauges the congestion level by measuring the traffic load instead of the queue length (see the sketch below). We show analytically that this feature enables LED to detect congestion earlier and react faster than other schemes. To gain insight into the behavior of LED and compare it with RED, we analyze a simple fluid model and study the relation between stability and throughput, especially for low delays. We evaluate the performance of LED using extensive ns2 simulations over a wide range of network scenarios. Our simulation results show that LED achieves significantly higher link efficiency than RED (up to 83%), REM (up to 141%), and the PI controller (up to 88%), especially in the presence of low delays.
    Download PDF (511K)
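A sketch of LED's central idea under stated assumptions: congestion is gauged from the measured input load (arrival rate over capacity) rather than from queue length. The EWMA constant and the linear marking ramp between two load thresholds are illustrative choices, not the paper's exact control law.

```python
# Load-based early detection: estimate link load per interval and derive a
# marking probability from it.  Thresholds and smoothing are assumptions.

CAPACITY = 1e9                   # link capacity in bits/s (assumed)
MIN_LOAD, MAX_LOAD = 0.85, 1.0   # hypothetical ramp thresholds
EWMA = 0.5                       # aggressive smoothing, just for the demo

avg_load = 0.0

def mark_probability(load):
    """RED-style linear ramp, but driven by load instead of queue length."""
    if load <= MIN_LOAD:
        return 0.0
    if load >= MAX_LOAD:
        return 1.0
    return (load - MIN_LOAD) / (MAX_LOAD - MIN_LOAD)

def update(bits_arrived, interval_s):
    """Update the smoothed load estimate from one measurement interval."""
    global avg_load
    inst = bits_arrived / (interval_s * CAPACITY)
    avg_load = (1 - EWMA) * avg_load + EWMA * inst
    return mark_probability(avg_load)

for bits in [8e8, 9e8, 9.5e8, 1.0e9, 1.1e9]:
    print(round(update(bits, 1.0), 3))   # ramps up as measured load rises
```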
  • Teruyuki Hasegawa, Toru Hasegawa
    Article type: Protocols
    Subject area: Regular Paper
    2006 Volume 2 Pages 108-119
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
This paper describes a novel mechanism for achieving sufficient TCP performance for bulk data transfer in asymmetrical environments without introducing additional functions on customer premises. On today's Internet, several types of broadband access media have an asymmetrical bandwidth characteristic, whereby the downstream link bandwidth is larger than the upstream. Such an asymmetrical environment may cause TCP performance degradation due to upstream link congestion. To solve this problem, we propose a PEP-based approach that improves TCP file-downloading throughput by reducing upstream traffic, using our “oversized MSS with compulsory IP fragmentation” technique. The results of our performance evaluation show that our experimental proxy implementation can accelerate TCP throughput to about three times that achieved without the proxy.
    Download PDF (916K)
  • Masayuki Terada, Kensaku Mori, Kazuhiko Ishii, Sadayuki Hongo, Tomonor ...
    Article type: Ubiquitous Computing
    Subject area: Regular Paper
    2006 Volume 2 Pages 120-132
    Published: 2006
    Released on J-STAGE: February 08, 2006
    JOURNAL FREE ACCESS
    This paper proposes a framework based on a new architecture that allows distributed smartcards to interact with one another as well as with application programs on their hosts. Since these interactions are handled distribution-transparently through message dispatching agents deployed on each host, the smartcards can autonomously conduct distributed protocols without turning to off-card application programs. The proposed framework thus reduces the complexity of application programs and makes it easier to develop smartcard-based services that offer a high level of functionality. The feasibility of the framework is evaluated and confirmed by implementing a smartcard-based optimistic fair trading protocol for electronic vouchers on this framework.
    Download PDF (439K)
  • Hirokazu Nishio, Md. Altaf-Ul-Amin, Ken Kurokawa, Shigehiko Kanaya
    Article type: Self-organizing Maps
    Subject area: Regular Paper
    2006 Volume 2 Pages 133-137
    Published: 2006
    Released on J-STAGE: March 01, 2006
    JOURNAL FREE ACCESS
Self-organizing maps (SOMs) are a kind of neural network and are applied in many fields. In many applications, the neuron arrangement has borders, which causes the so-called ‘border effect’: the number of neighborhood neurons of a neuron near a border differs from that of a neuron near the center. This problem can be solved by arranging neurons uniformly on the surface of a sphere, but the conventional method cannot arrange an arbitrary number of neurons, which makes it inconvenient to use. Here we developed a method that arranges an arbitrary number of neurons by dividing a helix covering the surface of a sphere into equal parts (see the sketch below). It also arranges neurons on a sphere more uniformly than the conventional method.
    Download PDF (406K)
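The helix construction can be sketched with the classical generalized-spiral point set (Rakhmanov, Saff, and Zhou), which places any number N of points nearly uniformly on a sphere; the paper's own equal division of a helix may differ in detail.

```python
# Nearly uniform placement of an arbitrary number N of neurons on a sphere
# by sampling a spiral that sweeps the sphere from pole to pole.

import math

def spiral_points(n):
    points = []
    phi = 0.0
    for k in range(n):
        # h runs uniformly from -1 to 1 => equal-area latitude bands
        h = -1.0 + 2.0 * k / (n - 1)
        theta = math.acos(h)
        if k == 0 or k == n - 1:
            phi = 0.0            # poles: azimuth is arbitrary
        else:
            # azimuth step keeps roughly constant arc length along the helix
            phi += 3.6 / (math.sqrt(n) * math.sqrt(1.0 - h * h))
        points.append((math.sin(theta) * math.cos(phi),
                       math.sin(theta) * math.sin(phi),
                       math.cos(theta)))
    return points

pts = spiral_points(137)         # any N works: no border, no border effect
print(len(pts), pts[0], pts[-1])
```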
  • Toshiaki Takayanagi, Hidenori Kawamura, Azuma Ohuchi
    Article type: Cellular Automata
    Subject area: Regular Paper
    2006 Volume 2 Pages 138-144
    Published: 2006
    Released on J-STAGE: March 01, 2006
    JOURNAL FREE ACCESS
We propose a cellular automaton model of tumor tissue consisting of a square lattice, tumor cells, cytotoxic T lymphocytes (CTLs), and cytokine. Considering that CTLs circulate in vivo and migrate into tumor tissue, the lattice is open and CTLs move through it. By simulating the cellular automaton model, we obtained these results: the growth of a mass of tumor cells surrounded by CTLs; the rejection of tumor cells by CTLs; and an approximate equilibrium between tumor cells and CTLs. Analysis of the results indicates that the attachment between tumor cells and CTLs is important for the rejection of tumor cells by CTLs.
    Download PDF (1055K)
  • Hiroshi Uehara, Kenichi Yoshida
    Article type: Contents Processing
    Subject area: Regular Paper
    2006 Volume 2 Pages 145-154
    Published: 2006
    Released on J-STAGE: March 08, 2006
    JOURNAL FREE ACCESS
This paper proposes a method for creating viewers' side annotations that reflect viewers' attention to TV dramas. Internet bulletin boards are filled with a large amount of viewers' dialogue concerning TV programs. Our approach extracts the viewers' attention embedded in these dialogues and expresses it in graphical structures called attention graphs, which act as viewers' side annotations and give viewers hints for locating their favorite scenes in full-length TV programs. In general, postings on Internet bulletin boards appear in no particular order and are poorly written grammatically. Our approach statistically recognizes, in such unstructured documents, the notable word frequencies that represent viewers' attention; attention graphs are the outcome. Attention graphs were evaluated by three types of tests, and the results demonstrate that they serve well as viewers' side annotations, both in pointing out which scenes viewers pay attention to and in clarifying how deeply viewers are impressed by those scenes.
    Download PDF (832K)
  • Yukio Hayashi
    Article type: Survey
    Subject area: Regular Paper
    2006 Volume 2 Pages 155-164
    Published: 2006
    Released on J-STAGE: March 08, 2006
    JOURNAL FREE ACCESS
The scale-free (SF) structures that commonly appear in many complex networks are a hot topic in the social, biological, and information sciences. Their self-organized generation mechanisms are expected to be useful for efficient communication and robust connectivity in socio-technological infrastructures. This paper is the first review of geographical SF network models. We discuss the essential generation mechanisms for inducing structures with power-law behavior and consider the properties of planarity and link length. Distributed design of geographical SF networks without the crossing and long-range links that cause interference and dissipation problems is very important for many applications, such as communications, power grids, and sensor systems.
    Download PDF (289K)
  • Naoya Hatta, Niko Demus Barli, Chitaka Iwama, Luong Dinh Hung, Daisuke ...
    Article type: Processor Architectures
    Subject area: Regular Paper
    2006 Volume 2 Pages 165-173
    Published: 2006
    Released on J-STAGE: March 29, 2006
    JOURNAL FREE ACCESS
On-chip interconnects are becoming a major power consumer in scaled VLSI designs. Bus power reduction has therefore become effective for reducing total power in chip multiprocessors and systems-on-a-chip, which require long interconnects for buses. In this paper, we advocate bus serialization to reduce bus power consumption: it decreases the number of wires and increases the pitch between them, and the wider pitch decreases the coupling capacitance of the wires, which in turn reduces bus power consumption. Evaluation results indicate that our technique can reduce bus power consumption by 30% in a 45nm technology process.
    Download PDF (571K)
  • Tetsuya Sakai
    Article type: Research Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 174-188
    Published: 2006
    Released on J-STAGE: March 29, 2006
    JOURNAL FREE ACCESS
This paper examines the problem of evaluating systems that aim at finding one highly relevant document with high precision. Such a task is important for modern search environments such as the Web, where recall is unimportant and/or unmeasurable. Reciprocal Rank is a metric designed for finding one relevant document with high precision, but it can only handle binary relevance. We therefore introduce a new metric called O-measure for high-precision, high-relevance search (see the sketch below), and show (a) how the task of finding one relevant document differs from that of finding as many relevant documents as possible, in both binary and graded relevance settings; and (b) how the task of finding one highly relevant document differs from that of finding any one relevant document. We use four test collections and the corresponding sets of formal runs from the NTCIR-3 Crosslingual Information Retrieval track to compare the tasks and metrics in terms of resemblance, stability, and discrimination power in system ranking.
    Download PDF (300K)
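For reference, Reciprocal Rank, the binary-relevance baseline the paper starts from, is only a few lines. O-measure's exact graded-relevance formula is not given in the abstract, so only the baseline is shown; the doc ids and relevance sets below are made up.

```python
# Reciprocal Rank scores a run by 1/r, where r is the rank of the first
# relevant document.  It only understands binary relevance; O-measure (per
# the abstract) generalizes the idea so that finding a *highly* relevant
# document first is worth more than finding a marginally relevant one.

def reciprocal_rank(ranked_ids, relevant):
    for r, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            return 1.0 / r
    return 0.0

run = ["d7", "d2", "d9", "d4"]
print(reciprocal_rank(run, relevant={"d9", "d4"}))   # 1/3
```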
  • Shinichi Motomura, Takao Kawamura, Kazunori Sugahara
    Article type: Mobile Computing
    Subject area: Regular Paper
    2006 Volume 2 Pages 189-197
    Published: 2006
    Released on J-STAGE: April 05, 2006
    JOURNAL FREE ACCESS
A new logic-based mobile agent framework named Maglog is proposed in this paper. Maglog introduces a concept called a “field”, by means of which the following functions are realized: (1) agent migration, which enables agents to migrate between computers; (2) inter-agent communication, i.e., indirect communication with other agents through the field; and (3) adaptation, which enables agents to execute programs stored in the field. We have implemented Maglog in a Java environment: an agent's program, a set of Prolog clauses, is translated into Java source code by our Maglog translator and then compiled into Java classes by a Java compiler. The effectiveness of Maglog is confirmed through descriptions of two applications: a distributed e-learning system and a scheduling arrangement system.
    Download PDF (348K)
  • Shu Zhang, Katsushi Kobayashi
    Article type: Network System Management Technologies
    Subject area: Regular Paper
    2006 Volume 2 Pages 198-206
    Published: 2006
    Released on J-STAGE: April 05, 2006
    JOURNAL FREE ACCESS
Routing changes of the interior gateway protocol (IGP), especially unexpected ones, can significantly affect the connectivity of a network. Although such changes can occur quite frequently, most operators hardly notice them for lack of effective tools. In this paper, we introduce Rtanaly, a system to (i) detect IGP routing changes in real time and instantly alert operators to the detected changes, (ii) quantify routing changes over the long term to give operators a general view of the routing stability of a network, (iii) estimate the impact of routing changes, and (iv) help operators troubleshoot in response to unexpected changes. Rtanaly has the following features: (i) it supports the three widely deployed IGPs, OSPFv2, OSPFv3, and IS-IS; (ii) it uses a completely passive approach; (iii) it visually displays the measurement results; and (iv) it is accessible through the web. We present the results of measurements performed with Rtanaly, as well as some observed pathological behavior, to show its effectiveness. We have released the first version of Rtanaly as free software under a BSD-style license.
    Download PDF (324K)
  • Hiroaki Kamoda, Masaki Yamaoka, Shigeyuki Matsuda, Krysia Broda, Morri ...
    Article type: Formal Methods
    Subject area: Regular Paper
    2006 Volume 2 Pages 207-221
    Published: 2006
    Released on J-STAGE: May 10, 2006
    JOURNAL FREE ACCESS
The specification of access control policies for large, multi-organization applications is difficult and error-prone. Sophisticated policies are needed for fine-grained control of access to large numbers of entities, resulting in many policies specified by different security administrators. Techniques such as role-based access control (RBAC) have been proposed to group policies and provide a framework for inheriting policies based on role hierarchies. However, RBAC does not prevent inconsistencies and conflicts from arising in the policy specifications, which can lead to information leaks or prevent required access. This paper proposes an approach using free-variable tableaux to detect conflicts and redundant policies resulting from the combination of various types of authorization and constraint policies. The approach uses static analysis to enable complete detection of modality conflicts and static constraint-policy conflicts.
    Download PDF (323K)
  • Ichiro Mitsuhashi, Michio Oyamaguchi, Yoshikatsu Ohta, Toshiyuki Yamad ...
    Article type: Computational Theory
    Subject area: Regular Paper
    2006 Volume 2 Pages 222-234
    Published: 2006
    Released on J-STAGE: May 10, 2006
    JOURNAL FREE ACCESS
The word and unification problems for term rewriting systems (TRSs) are among the most important such problems, and their decision algorithms have various useful applications in computer science. Algorithms for deciding joinability in TRSs are often used to obtain algorithms that decide these problems. In this paper, we first show that the joinability problem is undecidable for linear semi-constructor TRSs. Here, a semi-constructor TRS is a TRS in which all defined symbols appearing in the right-hand side of each rewrite rule occur only in its ground subterms. Next, we show that the problem is decidable both for confluent semi-constructor TRSs and for confluent semi-monadic TRSs. This result implies that the word problem is decidable for these classes, and it will be used to show that unification is decidable for confluent semi-constructor TRSs in our forthcoming paper.
    Download PDF (280K)
  • Jing Kong, Xiangshi Ren
    Article type: User Interfaces and Interactive Systems
    Subject area: Technical Note
    2006 Volume 2 Pages 235-237
    Published: 2006
    Released on J-STAGE: May 10, 2006
    JOURNAL FREE ACCESS
Effective target width (We) is widely used in Fitts' law for evaluating one-directional pointing tasks. However, concrete methods of calculating We have never been officially unified. This paper concentrates on resolving this problem. A specially designed and controlled experiment is described. The results reveal that mapping all the abscissa data into one unified relative coordinate system for the calculation models human-computer interfaces better than dividing the data into two groups according to the corresponding target sides and mapping them into two separate coordinate systems (see the sketch below).
    Download PDF (120K)
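A minimal sketch of the calculation the paper favors, with made-up endpoint data: all selection endpoints are mapped into one relative coordinate system (signed offsets from the target centre) and We is taken as 4.133 times the standard deviation of the pooled offsets, as in the usual effective-width convention.

```python
# Effective target width from pooled endpoint offsets, plus the Shannon
# formulation of the effective index of difficulty.  Data are illustrative.

import math
import statistics

def effective_width(endpoint_offsets):
    """endpoint_offsets: signed distances of selection points from the
    target centre, pooled over both movement directions."""
    return 4.133 * statistics.pstdev(endpoint_offsets)

def effective_id(amplitude, we):
    """Fitts' index of difficulty computed with the effective width."""
    return math.log2(amplitude / we + 1.0)

offsets = [-3.1, 0.4, 2.2, -1.0, 0.9, 1.7, -2.5, 0.2]  # pixels, made up
we = effective_width(offsets)
print(round(we, 2), round(effective_id(256, we), 2))
```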
  • Seiji Umatani, Hirokazu Shobayashi, Masahiro Yasugi, Taiichi Yuasa
    Article type: Regular Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 238-247
    Published: 2006
    Released on J-STAGE: May 24, 2006
    JOURNAL FREE ACCESS
An important issue in implementing high-level programming languages by translation into C is how to support high-level language features not available in C. Java's exception handling is one such feature, and translating it into portable C, which only has C-style control structures, involves some challenges. Previous studies have proposed ways of translating the Java-style try-catch construct into C. In this paper, we propose a new scheme for implementing it in an efficient and portable manner, using our parallel language OPA, an extended Java language, and its translator. In our scheme, we do not use troublesome setjmp/longjmp routines for non-local jumps; instead, we check for occurrences of exceptions using functions' return values. Java also has the try-finally construct, mainly used for cleanup, which cannot be translated directly into C-style control structures. To implement it, we developed a new scheme using integer values that correspond to continuation targets. Compared with other techniques, ours has advantages in both runtime overhead and generated code size. For these two features, we measured the performance of our scheme on some benchmark programs and compared it with those of other schemes.
    Download PDF (212K)
  • Satoshi Okui, Taro Suzuki
    Article type: Regular Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 248-261
    Published: 2006
    Released on J-STAGE: May 24, 2006
    JOURNAL FREE ACCESS
We offer a pattern-matching algorithm based on incomplete regular expression (IRE, for short) types. IRE types extend the regular expression types introduced by Hosoya and Pierce in the programming language XDuce. Pattern-matching for IRE-typed expressions provides a capability to uniformly access “context” parts of XML document trees within the framework of pattern-matching theory; we do not rely on external facilities (or notations) such as XPath. In order to describe our pattern-matching algorithm, we adopt a rule-based approach; that is, we present our algorithm as a set of a few simple transformation rules. These rules simulate a transition in non-deterministic top-down tree automata (though we do not deal with tree automata explicitly), while also accumulating bindings for pattern variables. Our pattern-matching algorithm is sound and complete: it enumerates all correct but no incorrect solutions. We give rigorous proofs of these properties. A small but non-trivial example illustrates the expressiveness of our framework.
    Download PDF (231K)
  • Tasuku Hiraishi, Masahiro Yasugi, Taiichi Yuasa
    Article type: Regular Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 262-279
    Published: 2006
    Released on J-STAGE: May 24, 2006
    JOURNAL FREE ACCESS
    The SC language system was developed to provide a transformation-based language extension scheme for SC languages (extended/plain C languages with an S-expression-based syntax). Using this system, many flexible extensions to the C language can be implemented by means of transformation rules over S-expressions at low cost, mainly because of the preexisting Common Lisp capabilities for manipulating S-expressions. This paper presents the LW-SC (Lightweight-SC) language as an important application of this system, featuring nested functions (i.e., functions defined inside other functions). A function can manipulate its caller's local variables (or local variables of its indirect callers) by indirectly calling a nested function of its callers. Thus, many high-level services with “stack walk” can be easily and elegantly implemented by using LW-SC as an intermediate language. Moreover, such services can be implemented efficiently because we designed and implemented LW-SC to provide “lightweight” nested functions by aggressively reducing the costs of creating and maintaining nested functions. The GNU C compiler also provides nested functions as an extension to C, but our sophisticated translator to standard C is more portable and efficient for occasional “stack walk.”
    Download PDF (457K)
  • Akio Nakajima, Ryotaro Kobayashi, Hideki Ando, Toshio Shimada
    Article type: System Evaluation
    Subject area: Regular Paper
    2006 Volume 2 Pages 280-288
    Published: 2006
    Released on J-STAGE: May 31, 2006
    JOURNAL FREE ACCESS
    Chip multiprocessors (CMPs), which recently became available with the advance of LSI technology, can outperform current superscalar processors by exploiting thread-level parallelism (TLP). However, the effectiveness of CMPs unfortunately depends greatly on their applications. In particular, they have so far not brought any significant benefit to non-numerical programs. This study explores what techniques are required to extract large amounts of TLP in non-numerical programs. We focus particularly on three techniques: thread partitioning with various control structure levels, speculative thread execution, and speculative register communication. We evaluate these techniques by examining the upper bound of the TLP, using trace-driven simulations. Our results are as follows. First, little TLP can be extracted without both of the speculations in any of the partitioning levels. Second, with the speculations, available TLP is still limited in conventional function-level and loop-level partitioning. However, it increases considerably with basic block-level partitioning. Finally, in basic block-level partitioning, focusing on control-equivalence instead of post-domination can significantly reduce the compile time, with a modest degradation of TLP.
    Download PDF (243K)
  • Luong Dinh Hung, Shuichi Sakai
    Article type: System Evaluation
    Subject area: Regular Paper
    2006 Volume 2 Pages 289-297
    Published: 2006
    Released on J-STAGE: May 31, 2006
    JOURNAL FREE ACCESS
The amount of task-level parallelism (TLP) in a runtime workload is useful information for determining the efficient usage of multiprocessors. This paper presents mechanisms for dynamically estimating the amount of TLP in runtime workloads (see the sketch below). Modifications are made to the operating system (OS) to collect information about processor utilization and task activities, from which the TLP can be calculated. By effectively utilizing the time stamp counter (TSC) hardware, task activities can be monitored with fine time resolution, which enables the TLP to be estimated at fine granularity. We implemented the mechanisms on a recent version of Linux. Evaluation results indicate that the mechanisms estimate the TLP accurately for various workloads, and the overheads they impose are small.
    Download PDF (430K)
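One common definition of TLP (machine utilization while not idle) can be computed from a concurrency trace in a few lines. The trace below is made up, whereas the paper's mechanisms collect it inside the OS with TSC-based timestamps.

```python
# TLP = sum_i(i * c_i) / (1 - c_0), where c_i is the fraction of time
# exactly i tasks run concurrently and c_0 is the idle fraction.  Here the
# trace is a made-up list of (duration_seconds, running_tasks) intervals.

def estimate_tlp(intervals):
    busy_weighted = 0.0   # integral of (running tasks) over non-idle time
    busy_time = 0.0
    for duration, running in intervals:
        if running > 0:
            busy_weighted += duration * running
            busy_time += duration
    return busy_weighted / busy_time if busy_time else 0.0

trace = [(0.2, 0), (0.3, 1), (0.1, 2), (0.25, 1), (0.15, 4)]
print(round(estimate_tlp(trace), 3))   # ~1.69 on this toy trace
```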
  • Hiroshi Murakami
    Article type: Numerical Algorithm
    Subject area: Regular Paper
    2006 Volume 2 Pages 298-317
    Published: 2006
    Released on J-STAGE: May 31, 2006
    JOURNAL FREE ACCESS
When large matrix problems are treated, the locality of storage reference is very important, and higher locality is usually attained by means of block algorithms. This paper introduces an implementation of the block Householder transformation based on the block reflector (Schreiber, 1988), i.e., the “GG^T” representation, rather than on methods using “WY^T” representations or the compact “WY^T” or “YTY^T” representations (Bischof, 1993, etc.). This version of the block Householder transformation can be regarded as the most natural extension of the original non-blocked Householder transformation, with the matrix elements of the algorithm changed from numbers to small matrices. Thus, an algorithm that uses the non-blocked version of the Householder transformation can be converted into the corresponding block algorithm in the most natural manner. To demonstrate the implementation of the Householder method based on the block reflector described in this paper, block tridiagonalization of a dense real symmetric matrix is carried out to calculate the required number of eigenpairs, following the idea of the two-step reduction method (Bischof, 1996, etc.).
    Download PDF (930K)
  • Masato Ogata, Kagenori Kajihara, Takaaki Kikukawa, Takafumi Terada
    Article type: Special Purpose System
    Subject area: Regular Paper
    2006 Volume 2 Pages 318-327
    Published: 2006
    Released on J-STAGE: May 31, 2006
    JOURNAL FREE ACCESS
We have been developing a volumetric computing graphics cluster, VGCluster, a PC cluster whose rendering and calculation performance is enhanced by graphics boards and dedicated devices. The goal of the system is to perform real-time simulation for a practical surgical simulator. A space-partitioning scheme for parallel processing inevitably requires data communication for both simulation and visualization, and we studied the effects of both types of communication through experiments. On the basis of the results, we discuss a performance model and propose a performance metric for time-restricted processing. As an example, we evaluated our VGCluster system using the proposed metric. The metric shows the effect of sustaining scalability by using a dedicated image-composition device.
    Download PDF (611K)
  • Shuangching Chen, Shugang Wei
    Article type: Computer Architecture
    Subject area: Regular Paper
    2006 Volume 2 Pages 328-337
    Published: 2006
    Released on J-STAGE: June 07, 2006
    JOURNAL FREE ACCESS
High-speed signed-digit (SD) architectures for weighted-to-residue (WTOR) and residue-to-weighted (RTOW) number conversions with the moduli set (2^n, 2^n-1, 2^n+1) are proposed (see the sketch below). The complexity of the conversion is greatly reduced by using compact forms for the multiplicative inverses and the properties of modular arithmetic. The simple relationships between WTOR and RTOW number conversions result in simpler hardware requirements for the converters. The primary advantages of our method are that the conversions use only the modulo-m signed-digit adder (MSDA) and that the constructions are simple. We also investigate modular arithmetic between binary and SD number representations using circuit designs and a simulation, and the results show the importance of SD architectures for WTOR and RTOW number conversions. Compared with other converters, our methods are fast, and the execution times are independent of the word length. We also propose a high-speed method for converting an SD number to a binary number representation.
    Download PDF (287K)
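The number theory underlying the conversions can be checked in plain binary arithmetic: the sketch below does the forward conversion by modular reduction and the reverse conversion by the Chinese remainder theorem. The paper's contribution, carrying this out with signed-digit adders in time independent of word length, is not modeled.

```python
# Weighted-to-residue and residue-to-weighted conversion for the moduli set
# (2^n, 2^n - 1, 2^n + 1), which is pairwise coprime.

def to_residues(x, n):
    m = [2**n, 2**n - 1, 2**n + 1]
    return [x % mi for mi in m]

def from_residues(res, n):
    m = [2**n, 2**n - 1, 2**n + 1]
    M = m[0] * m[1] * m[2]              # dynamic range = 2^n * (2^2n - 1)
    x = 0
    for ri, mi in zip(res, m):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # CRT with multiplicative inverses
    return x % M

n = 8
x = 123456
r = to_residues(x, n)
print(r, from_residues(r, n) == x)      # round trip succeeds within range
```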
  • Yuki Yoshikawa, Satoshi Ohtake, Michiko Inoue, Hideo Fujiwara
    Article type: Design for Testability
    Subject area: Regular Paper
    2006 Volume 2 Pages 338-347
    Published: 2006
    Released on J-STAGE: June 07, 2006
    JOURNAL FREE ACCESS
We propose a non-scan design-for-testability (DFT) method at register-transfer level (RTL) based on hierarchical test generation: the DFT method makes paths in a data path single-port-change (SPC) two-pattern testable. For combinational logic in an RTL circuit, an SPC two-pattern test launches transitions at the starting points of paths corresponding to only one input port (an input, of some bit width, of an RTL module) and holds the other ports stable. Hence, during test application, a register's original hold function, if present, can be used for the stable inputs. Our DFT method reduces area overhead compared with methods that support arbitrary two-pattern tests, without losing the quality of robust and non-robust tests, and experimental results confirm this. Furthermore, we propose a method of reducing over-testing by removing a subset of sequentially untestable paths from the test targets.
    Download PDF (399K)
  • Hideyuki Ichihara, Masakuni Ochi, Michihiro Shintani, Tomoo Inoue
    Article type: Design for Testability
    Subject area: Regular Paper
    2006 Volume 2 Pages 348-356
    Published: 2006
    Released on J-STAGE: June 07, 2006
    JOURNAL FREE ACCESS
    Test compression/decompression schemes using variable-length coding, e.g., Huffman coding, efficiently reduce the test application time and the size of the storage on an LSI tester. In this paper, we propose a model of an adaptive decompressor for variable-length coding and discuss its property. By using a buffer, the decompressor can operate at any input and output speed without a synchronizing feedback mechanism between an ATE and the decompressor, i.e., the proposed decompressor model can adapt to any test environment. Moreover, we propose a method for reducing the size of the buffer embedded in the decompressor. Since the buffer size depends on the order in which test vectors are input, reordering test vectors can reduce the buffer size. The proposed algorithm is based on fluctuations in buffered data for each test vector. Experimental results show a case in which the ordering algorithm reduced the size of the buffer by 97%.
    Download PDF (365K)
  • Hiroshi Mineno, Yuki Kawashima, Susumu Ishihara, Tadanori Mizuno
    Article type: Network Service Basics
    Subject area: Regular Paper
    2006 Volume 2 Pages 357-368
    Published: 2006
    Released on J-STAGE: July 05, 2006
    JOURNAL FREE ACCESS
A variety of access technologies have been deployed over the last few years, and mobile nodes now have multiple network interfaces; mobile communications have been an active research area for several years. We have developed a link aggregation system with Mobile IP, called SHAKE, which is intended to improve the throughput and reliability of wireless communication by using multiple Internet-linked mobile nodes simultaneously and dispersing the traffic between the mobile nodes and a correspondent node. This paper describes a collaboration mechanism that provides packet-based and flow-based distributive traffic control based on user preferences and the cooperating nodes' states. Evaluations of a prototype implementation show that the mechanism is advantageous for a cooperative communication system like SHAKE, and experiments reveal that flow-based traffic distribution is effective when the delay jitters of the nodes differ greatly.
    Download PDF (562K)
  • Yuxin Wang, Keizo Oyama
    Article type: Research Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 369-381
    Published: 2006
    Released on J-STAGE: July 12, 2006
    JOURNAL FREE ACCESS
This paper proposes a method for gathering researchers' homepages (or entry pages) by applying new, simple, and effective page-group models that exploit the mutual relations between the structure and content of a page group, aiming to narrow down the candidates with very high recall. First, 12 property-based keyword lists corresponding to researchers' common properties are created, each assigned as either organization-related or other. Next, several page group models (PGMs) are introduced, taking into consideration the link structure and URL hierarchy. Although applying PGMs generally introduces a lot of noise, modified PGMs with two original techniques are introduced to reduce it. Then, based on the PGMs, the keywords are propagated to a potential entry page from its surrounding pages, composing a virtual entry page. Finally, the virtual entry pages whose scores reach a threshold are selected. The effectiveness of the method is shown by comparing it with a single-page-based method in experiments using a 100GB web data set and a manually created sample data set.
    Download PDF (501K)
  • Wenxin Liang, Haruo Yokota
    Article type: Research Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 382-392
    Published: 2006
    Released on J-STAGE: July 12, 2006
    JOURNAL FREE ACCESS
XML is widely used to represent and exchange data on the Internet. However, XML documents from different sources may convey nearly or exactly the same information yet differ in structure. In previous work, we proposed LAX (Leaf-clustering based Approximate XML join algorithm), in which two XML document trees are divided into independent subtrees and their approximate similarity is determined by a tree similarity degree based on the mean of the similarity degrees of matched subtrees. Our previous experiments showed that LAX is more efficient than the tree edit distance and more effective for measuring the approximate similarity between XML documents. However, because the tree edit distance is extremely time-consuming, we could only use very small bibliography data to compare the performance of LAX with that of the tree edit distance. Moreover, in LAX the output is oriented to pairs of documents whose tree similarity degree exceeds the threshold, so when LAX is applied to fragments divided from large XML documents, the hit subtree selected from the output pair of fragment documents with a large tree similarity degree might not be the proper one to integrate. In this paper, we propose SLAX (Subtree-class Leaf-clustering based Approximate XML join algorithm) for integrating the fragments divided from large XML documents by using the maximum match value over subtree classes. We conduct further experiments to evaluate SLAX against LAX using both real large bibliography data and bioinformatics data. The experimental results show that SLAX is more effective than LAX for integrating both kinds of data at subtree classes.
    Download PDF (923K)
  • Imari Sato, Katsushi Ikeuchi
    Article type: Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 393-406
    Published: 2006
    Released on J-STAGE: July 12, 2006
    JOURNAL FREE ACCESS
    This paper addresses two issues of image-based modeling for synthesizing photorealistic appearances of real objects under natural illumination conditions: capturing real-world illumination and modeling complex appearances of real objects for variable illumination. The appearance of an object changes significantly depending on its surface reflectance properties and the lighting condition of the scene in which it is placed. It is thus important to provide not only appropriate surface reflectance properties of an object in a scene but also natural illumination conditions of that scene so that a realistic appearance of the object can be synthesized under the provided natural illumination conditions. We first discuss an image-based approach and an inverse lighting approach for capturing and modeling real-world illumination. Then we shift our attention to how to model and synthesize complex appearances of real objects seen under natural illumination conditions.
    Download PDF (3056K)
  • Daisuke Miyazaki, Katsushi Ikeuchi
    Article type: Papers
    Subject area: Regular Paper
    2006 Volume 2 Pages 407-427
    Published: 2006
    Released on J-STAGE: July 12, 2006
    JOURNAL FREE ACCESS
Recently, techniques developed in the field of computer graphics and virtual reality have been applied to many environments, with the result that measuring the 3D shapes of real objects has become increasingly important. However, few methods have been proposed to measure the 3D shape of transparent objects such as glass and acrylics. In this paper, we introduce three methods that estimate the surface shape of transparent objects by using polarization analysis. The first method determines the surface shape of a transparent object by using knowledge established in the research field of thermodynamics. The second method determines the surface shape of a transparent object by using knowledge established in the research field of differential geometry. The third method gives an initial value of the surface shape and then determines the true surface shape of a transparent object by iterative computation. At the end of the paper, we discuss the advantages and the disadvantages of these three methods.
    Download PDF (2087K)
  • Thanda Oo, Hiroshi Kawasaki, Yutaka Ohsawa, Katsushi Ikeuchi
    Article type: Paper
    Subject area: Regular Paper
    2006 Volume 2 Pages 428-440
    Published: 2006
    Released on J-STAGE: July 12, 2006
    JOURNAL FREE ACCESS
The effects of reflection and transparency, caused by shiny or glass-like transparent materials, are superimposed on captured images of many actual outdoor scenes. The presence of such incidental effects in a captured image has made it difficult to apply computer vision algorithms and has led to erroneous results. Moreover, it disturbs texture acquisition of outdoor scenes, an important topic for the Computer Graphics (CG) and Intelligent Transportation Systems (ITS) communities. This paper presents an optimal method for the automatic separation of reflected and transparent layers even if the scene is complicated by view-dependent effects and depth disparity in a 3D environment. The method is based on epipolar plane image (EPI) analysis; unlike conventional edge-based EPI analysis, it is color-based. First, we separate the EPI into two layers by our original color-based EPI analysis. Then, the original image sequence is separated into reflected and transparent layers by using the separated EPI. To demonstrate the effectiveness of our method, we implemented the algorithm and present the results of experiments using synthesized and real scene images, including indoor and outdoor scenes.
    Download PDF (1352K)
  • Tsuyoshi Kato, Wataru Fujibuchi, Kiyoshi Asai
    Article type: Pattern Recognition
    Subject area: Regular Paper
    2006 Volume 2 Pages 441-451
    Published: 2006
    Released on J-STAGE: July 26, 2006
    JOURNAL FREE ACCESS
Recently there has been a surge of interest in kernel methods, such as the support vector machine, due to their flexibility and high performance. How to define a kernel is an important question for kernel methods. Most kernels are defined through the inner product of feature vectors in some vector space. In this paper we discuss an approach that constructs a kernel matrix from distances between examples instead of feature vectors; that is, the input to our algorithm is the set of distances among examples, not feature vectors. Unlike most conventional kernels, where the kernel function is given explicitly and the kernel matrix follows by simple calculation, our algorithm builds a kernel matrix by maximizing its entropy subject to distance constraints (see the sketch below). The maximization problem is convex, so we can always attain the optimal solution. Experiments using artificial data show the benefits of our algorithm. In addition, we apply the method to the analysis of heterogeneous microarray gene-expression data and report the experimental results.
    Download PDF (529K)
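A sketch of the construction under a Gaussian model, where maximizing entropy reduces to maximizing log det of the kernel matrix subject to the distance constraints K_ii + K_jj - 2K_ij = d_ij^2. The trace constraint that bounds the problem and the toy distances are assumptions; the paper's exact entropy formulation may differ.

```python
# Building a kernel matrix from pairwise distances alone via a convex
# entropy-maximization program (log det as a Gaussian entropy surrogate).

import numpy as np
import cvxpy as cp

D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])          # toy distance matrix

n = D.shape[0]
K = cp.Variable((n, n), PSD=True)
constraints = [K[i, i] + K[j, j] - 2 * K[i, j] == D[i, j] ** 2
               for i in range(n) for j in range(i + 1, n)]
constraints.append(cp.trace(K) == 2.0 * n)   # hypothetical scale constraint
prob = cp.Problem(cp.Maximize(cp.log_det(K)), constraints)
prob.solve()
print(np.round(K.value, 3))                  # a PSD kernel consistent with D
```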
  • Manodha Gamage, Mitsuo Hayasaka, Tetsuya Miki
    Article type: Network Quality and Control
    Subject area: Regular Paper
    2006 Volume 2 Pages 452-464
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
The QoS provided by today's best-effort Internet is not good enough for real-time premium traffic, and it is believed that QoS guarantees could be better provided by connection-oriented networks. IP/MPLS is one such technology, but connection-oriented networks are inherently more vulnerable to network failures. Network failures can broadly be classified into two types, i.e., link/path failures and degraded failures. Degraded failures, which account for about 50% of total failures, are detected by control-plane timers. The control plane and the data plane of IP/MPLS networks are logically separated, so a failure in the control plane should not immediately terminate communications on the data plane. The Virtual Path Hopping (VPH) concept proposed in this study distinguishes degraded failures from link/path failures and avoids terminating data communications on the data plane because of degraded failures: it moves the traffic carried by a degraded VP to a new VP in a predetermined VP pool before the control-plane timer expires. This paper discusses the concept of VPH and its numerous advantages, such as load balancing, traffic distribution, and the ability to support dual-failure situations. Computer simulations were conducted, and the results support the VPH concept.
    Download PDF (3120K)
  • Peng Yang, Goichiro Hanaoka, Yang Cui, Rui Zhang, Nuttapong Attrapadun ...
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2006 Volume 2 Pages 465-477
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Identity-based encryption (IBE) schemes have been flourishing since the very beginning of this century. In IBE, proving the security of a scheme in the sense of IND-ID-CCA2 is widely believed to be sufficient to claim that the scheme is also secure in the senses of both SS-ID-CCA2 and NM-ID-CCA2. The justification for this belief is the relations among indistinguishability (IND), semantic security (SS), and non-malleability (NM). However, these relations have previously been proved only for conventional public key encryption (PKE) schemes. The fact is that IBE and PKE differ in one especially important respect: only in IBE can adversaries perform a particular attack, namely the chosen-identity attack. In this paper we show that security proved in the sense of IND-ID-CCA2 is validly sufficient to imply security in any other sense in IBE; that is, the security notion IND-ID-CCA2 captures the essence of security for all IBE schemes. To show this, we first formally define the notions of security for IBE and then determine the relations among IND, SS, and NM in IBE, along with rigorous proofs. All of these results take the chosen-identity attack into consideration.
    Download PDF (265K)
  • Katsuyuki Okeya, Tetsu Iwata
    Article type: Dependability
    Subject area: Regular Paper
    2006 Volume 2 Pages 478-488
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
    Side channel attacks are a serious menace to embedded devices with cryptographic applications, which are utilized in sensor and ad hoc networks. In this paper, we discuss how side channel attacks can be applied against message authentication codes, even if the countermeasures are taken to protect the underlying block cipher. In particular, we show that EMAC, OMAC, and PMAC are vulnerable to our attacks. We also point out that our attacks can be applied against RMAC, TMAC, and XCBC. Based on simple power analysis, we show that several key bits can be extracted, and based on differential power analysis, we present a selective forgery against these MACs. Our results suggest that protecting block ciphers against side channel attacks is insufficient, and countermeasures are needed for MACs as well.
    Download PDF (315K)
  • Yasunobu Nohara, Toru Nakamura, Kensuke Baba, Sozo Inoue, Hiroto Yasuu ...
    Article type: Mobile Computing
    Subject area: Regular Paper
    2006 Volume 2 Pages 489-497
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Unlinkability, the property that prevents an adversary from recognizing whether outputs come from the same user, is an important concept in RFID. Although hash-based schemes can provide unlinkability using a low-cost hash function, existing schemes are not scalable, since the server needs O(N) hash calculations for every ID matching, where N is the number of RFID devices. Our solution is the K-steps ID matching scheme, which reduces the number of hash calculations on the server to O(log N) (see the sketch below). In this paper, we explain the protocol, describe a test implementation, and discuss the application of the scheme to practical RFID systems. We also compare the scheme with other hash-based schemes from various viewpoints.
    Download PDF (283K)
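A hedged sketch of the general tree-walking idea that brings the matching cost down to O(log N): the server keeps a binary tree of per-node secrets, each tag holds the secrets on its root-to-leaf path, and a query is answered with one hash per level, so the server needs at most two hash evaluations per level to descend. The real K-steps protocol's messages and key layout may differ.

```python
# Tree-based ID matching: DEPTH levels support N = 2**DEPTH tags, and the
# server identifies a tag with at most 2*DEPTH hash evaluations.

import hashlib, os

DEPTH = 10

def H(secret, nonce):
    return hashlib.sha256(secret + nonce).digest()

node_secret = {}                        # server-side secret per tree node
def secret_for(prefix):
    return node_secret.setdefault(prefix, os.urandom(16))

def tag_secrets(tag_id):
    bits = tuple((tag_id >> (DEPTH - 1 - i)) & 1 for i in range(DEPTH))
    return [secret_for(bits[:i + 1]) for i in range(DEPTH)]

def tag_respond(path_secrets, nonce):
    return [H(s, nonce) for s in path_secrets]   # fresh per nonce: unlinkable

def server_match(response, nonce):
    prefix = ()
    for digest in response:             # descend one level per answer
        for bit in (0, 1):
            if H(secret_for(prefix + (bit,)), nonce) == digest:
                prefix += (bit,)
                break
        else:
            return None                 # unknown tag
    return int("".join(map(str, prefix)), 2)

alice = tag_secrets(0b1011001110)
nonce = os.urandom(16)
print(bin(server_match(tag_respond(alice, nonce), nonce)))  # 0b1011001110
```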
  • Kazuhide Fukushima, Shinsaku Kiyomoto, Toshiaki Tanaka
    Article type: Security and Society
    Subject area: Regular Paper
    2006 Volume 2 Pages 498-512
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Program analysis techniques have improved steadily over the past several decades, making the algorithms and secret data contained in programs susceptible to discovery. Obfuscation is a technique to protect against this threat, and obfuscation schemes that encode variables have the potential to hide both algorithms and secret data. We define five types of attack that can be launched against existing obfuscation schemes: data-dependency attacks, dynamic attacks, instruction-guessing attacks, numerical attacks, and brute-force attacks. We then propose an obfuscation scheme that encodes the variables in a code using an affine transformation (see the sketch below). Our scheme is more secure than that of Sato, et al. because it protects against data-dependency attacks, instruction-guessing attacks, and numerical attacks. We describe the implementation of our scheme as an obfuscation tool for C/C++ code.
    Download PDF (326K)
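The variable-encoding idea is easy to sketch: a variable x is stored as y = a·x + b, and every operation is rewritten to act on encoded values directly, so x never appears in the transformed code. The coefficients and rewritten operations below are illustrative, not the tool's actual transformation.

```python
# Affine variable encoding: computations stay in the encoded domain.

A, B = 17, 42                      # secret affine coefficients (a != 0)

def encode(x):            return A * x + B
def decode(y):            return (y - B) // A

# x1 + x2  ->  on encoded values: y1 + y2 - B  encodes x1 + x2
def enc_add(y1, y2):      return y1 + y2 - B

# c * x    ->  on encoded values: c*y - (c-1)*B  encodes c*x
def enc_mul_const(y, c):  return c * y - (c - 1) * B

y = enc_add(encode(5), encode(7))      # stays encoded throughout
y = enc_mul_const(y, 3)
print(decode(y))                       # 36 == (5 + 7) * 3
```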
  • Hiroaki Kikuchi, Kensuke Abe, Shohachiro Nakanishi
    Article type: Network Security
    Subject area: Regular Paper
    2006 Volume 2 Pages 513-523
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Certificate revocation is a critical issue for a practical public-key infrastructure. A new efficient revocation protocol using a one-way hash tree structure (instead of the classical list structure that is the standard for revocation) was proposed and examined to reduce communication and computation costs. A tree approach, however, may run in O(n) time in the worst case, i.e., when all the entries are inserted in descending order. A red-black tree is a binary sorted tree with one extra bit per node, used for balancing, which guarantees that search and insertion take O(log n) even in the worst case. In this paper, we propose a binary red-black hash tree for certificate revocation and prove that it reduces costs more than other tree-balancing approaches. We also compare the estimated reductions in communication and computation costs of the red-black hash tree with those of the conventional binary search tree.
    Download PDF (358K)
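The following is a static illustration of a one-way hash tree over a sorted set of revoked serial numbers. It is a sketch of the general construction, not the paper's red-black variant: a real red-black tree would additionally rebalance on insertion to preserve the O(log n) bound, which this midpoint-split construction does not handle.

```python
# Hash tree over a balanced, sorted tree of revoked serials. Each node's hash
# commits to its serial and both subtrees, so the root hash authenticates the
# whole revocation set; lookups cost O(log n) in a balanced tree.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

class Node:
    def __init__(self, serial, left=None, right=None):
        self.serial, self.left, self.right = serial, left, right
        lh = left.hash if left else b""
        rh = right.hash if right else b""
        self.hash = H(str(serial).encode(), lh, rh)

def build(sorted_serials):
    # Midpoint split yields a balanced tree for a static, sorted input.
    if not sorted_serials:
        return None
    mid = len(sorted_serials) // 2
    return Node(sorted_serials[mid],
                build(sorted_serials[:mid]),
                build(sorted_serials[mid + 1:]))

def is_revoked(node, serial) -> bool:
    while node:
        if serial == node.serial:
            return True
        node = node.left if serial < node.serial else node.right
    return False

root = build(sorted([101, 205, 333, 404, 512, 888]))
print(root.hash.hex()[:16], is_revoked(root, 404), is_revoked(root, 405))
```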
  • Sk. Md. Mizanur Rahman, Atsuo Inomata, Masahiro Mambo, Eiji Okamoto
    Article type: Mobile Computing
    Subject area: Regular Paper
    2006 Volume 2 Pages 524-536
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Because of the infrastructure-less, dynamic, and broadcast nature of radio transmissions, communications in mobile ad-hoc networks (MANETs) are susceptible to malicious traffic analysis. After performing traffic analysis, an attacker can conduct an intensive, target-oriented attack against a node identified by that analysis. Traffic analysis and the target-oriented attacks that follow it are serious problems for MANETs because they degrade both the throughput and the security of routing. Moreover, the position information of routing nodes is highly sensitive data in MANETs, where even mutually unknown nodes temporarily establish a network, so it is desirable to keep position information secret. These problems are especially prominent in position-based routing protocols. We therefore propose a new position-based routing protocol that keeps routing nodes anonymous, thereby preventing traffic analysis. The proposed scheme uses a time-variant temporary identifier, Temp ID, which is computed from the time and position of a node and is used to keep the node anonymous. Only the position of the destination node is required for route discovery, and the Temp ID is used to establish a route for sending data. A receiver dynamic-handshake scheme determines the next hop on demand by using the Temp ID. We evaluated the level of anonymity and the performance of the scheme; the results show that it ensures the anonymity of both routes and nodes and is robust against target-oriented and other attacks. Moreover, the scheme does not depend on node density as long as the nodes are connected in the network. (An illustrative Temp ID derivation follows this entry.)
    Download PDF (507K)
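The Temp ID idea might be sketched as follows. The slot length, the grid coarsening, and the absence of secret inputs are assumptions made for illustration only; the paper's actual derivation may differ, and a deployed scheme would likely mix in secret or per-session material.

```python
# Sketch of a time-variant temporary identifier derived from a node's current
# time slot and coarsened position. Hashing makes successive identifiers hard
# to link for an outside observer, while parties knowing the same inputs can
# recompute the value. All parameters here are illustrative assumptions.
import hashlib, time
from typing import Optional

SLOT_SECONDS = 30          # identifier changes every 30 s (assumed)
GRID = 100.0               # position coarsened to a 100 m grid (assumed)

def temp_id(x: float, y: float, now: Optional[float] = None) -> bytes:
    now = time.time() if now is None else now
    slot = int(now // SLOT_SECONDS)
    cell = (int(x // GRID), int(y // GRID))
    data = f"{slot}:{cell[0]}:{cell[1]}".encode()
    return hashlib.sha256(data).digest()[:8]   # short ID for packet headers

print(temp_id(1234.5, 678.9).hex())
```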
  • Isao Echizen, Yasuhiro Fujii, Takaaki Yamada, Satoru Tezuka, Hiroshi Y ...
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2006 Volume 2 Pages 537-550
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
Redundant coding is a basic method of improving detection reliability and watermark survivability after image processing. It embeds watermarks repeatedly in every frame or region and can thus suppress detection errors by accumulating frames or regions during watermark detection. Redundant coding, however, is not always effective after image processing, because the watermark signal may be attenuated by the accumulation procedure when image processing has removed the watermark from specific frames or regions. We therefore propose a detection method that prevents attenuation of the watermark signal by accumulating only a subset of the regions, chosen so that the accumulated region has a minimal bit-error rate as estimated from each region. Experimental evaluations using actual motion pictures showed that the new method improves watermark survivability after MPEG encoding by an average of 15.7% and can be widely applied to correlation-based watermarking. (A toy sketch of this subset accumulation follows this entry.)
    Download PDF (1074K)
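A toy sketch of the accumulate-a-reliable-subset idea: regions are ranked by an estimated reliability and accumulated greedily, keeping the prefix with the best detection statistic. The reliability estimate and the correlation score below are stand-ins for the paper's bit-error-rate estimator, not its actual procedure.

```python
# Instead of averaging the watermark signal over all regions (letting heavily
# damaged regions dilute it), accumulate only a prefix of regions sorted by
# estimated reliability and keep the prefix that detects best.
import numpy as np

def detect_with_subset(regions: np.ndarray, est_err: np.ndarray,
                       wm_pattern: np.ndarray):
    """regions: (R, n) extracted signals; est_err: (R,) estimated error level
    per region (monotone in BER); wm_pattern: (n,) reference watermark."""
    order = np.argsort(est_err)               # most reliable regions first
    acc = np.zeros_like(wm_pattern, dtype=float)
    best_bits, best_corr = None, -np.inf
    for k, idx in enumerate(order, start=1):
        acc += regions[idx]
        corr = float(np.dot(acc / k, wm_pattern)) / wm_pattern.size
        if corr > best_corr:                   # keep the best-detecting prefix
            best_bits, best_corr = (acc > 0).astype(int), corr
    return best_bits, best_corr

rng = np.random.default_rng(0)
wm = rng.choice([-1.0, 1.0], size=64)
noise = np.array([0.3, 0.4, 2.5, 3.0])         # two clean, two damaged regions
regs = np.stack([wm + rng.normal(0, s, 64) for s in noise])
bits, corr = detect_with_subset(regs, noise, wm)
print(corr, (bits == (wm > 0)).mean())
```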
  • Hideyo Mamiya, Atsuko Miyaji
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2006 Volume 2 Pages 551-560
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
When a signature is computed on a smart card, side-channel attacks such as simple power analysis (SPA) are a serious threat 12). SPA-resistant methods fall into two classes: fixed-procedure methods and indistinguishable methods. An indistinguishable method conceals all branch instructions by using indistinguishable addition formulae, but it may reveal the Hamming weight of the scalar when an addition chain without a fixed Hamming weight is used. For hyperelliptic curves, no indistinguishable method had previously been proposed. In this paper, we give indistinguishable addition formulae for hyperelliptic curves. We also give algorithms that output a fixed-Hamming-weight representation for the indistinguishable addition formulae and that work with or without a precomputation table; these algorithms resolve the above-mentioned problem with indistinguishable methods and also apply to elliptic-curve scalar multiplication. (A sketch of the complementary fixed-procedure idea follows this entry.)
    Download PDF (214K)
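The fixed-procedure side of SPA resistance, mentioned above as the alternative to indistinguishable formulae, can be sketched with a Montgomery ladder, which performs the same operation pattern for every scalar bit. Integer addition stands in for the curve group operation here; this is a generic illustration, not the paper's algorithm.

```python
# Montgomery ladder: exactly one "add" and one "double" per scalar bit, so
# the operation sequence observed in a power trace is independent of the
# secret bits. In practice the additions below would be (hyper)elliptic
# curve point operations; real constant-time code would also replace the
# visible branch with a conditional swap of R0 and R1.

def ladder_scalar_mult(k: int, P: int, bits: int) -> int:
    R0, R1 = 0, P                       # identity element and base point
    for i in reversed(range(bits)):     # fixed iteration count: no early exit
        if (k >> i) & 1:
            R0, R1 = R0 + R1, R1 + R1   # add then double
        else:
            R0, R1 = R0 + R0, R0 + R1   # double then add (same shape)
    return R0

assert ladder_scalar_mult(13, 7, 8) == 13 * 7
```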
  • Michal Szymaniak, Guillaume Pierre, Maarten van Steen
    Article type: Invited Paper
    Subject area: Invited Paper
    2006 Volume 2 Pages 561-572
    Published: 2006
    Released on J-STAGE: August 09, 2006
    JOURNAL FREE ACCESS
This paper presents HotZone, an algorithm that places replicas in a wide-area network so that the client-to-replica latency is minimized. Like the previously proposed HotSpot algorithm, HotZone places replicas on the nodes that, together with their neighboring nodes, generate the highest load. In contrast to HotSpot, however, HotZone provides nearly optimal results by considering overlapping neighborhoods. HotZone relies on a geometric model of Internet latencies, which effectively reduces the cost of placing K replicas among N potential replica locations from O(N^2) to O(N·max(logN, K)). (A toy sketch of neighborhood-based placement follows this entry.)
    Download PDF (468K)
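A toy sketch of neighborhood-load placement under stated assumptions (symmetric latencies, a fixed neighborhood radius, greedy selection). The actual HotZone algorithm uses a geometric latency model and a finer treatment of overlapping neighborhoods; here, discounting already-covered load is what distinguishes it from plain per-node HotSpot-style ranking.

```python
# Greedy replica placement: repeatedly pick the node whose latency
# neighborhood carries the most *uncovered* client load. Latencies, the
# radius, and the loads below are illustrative inputs.

def place_replicas(load, latency, radius, k):
    """load: {node: requests}; latency: {(a, b): ms}; returns k chosen nodes."""
    nodes = list(load)

    def neighborhood(n):
        return {m for m in nodes if m == n or
                latency.get((n, m), latency.get((m, n), float("inf"))) <= radius}

    covered, chosen = set(), []
    for _ in range(k):
        best = max((n for n in nodes if n not in chosen),
                   key=lambda n: sum(load[m] for m in neighborhood(n) - covered))
        chosen.append(best)
        covered |= neighborhood(best)
    return chosen

lat = {("a", "b"): 10, ("b", "c"): 12, ("c", "d"): 80, ("d", "e"): 9}
print(place_replicas({"a": 5, "b": 9, "c": 4, "d": 7, "e": 6}, lat, 20, 2))
```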
  • Takashi Amisaki, Shin-ichi Fujiwara
    Article type: Grid
    Subject area: Regular Paper
    2006 Volume 2 Pages 573-583
    Published: September 15, 2006
    Released on J-STAGE: September 27, 2006
    JOURNAL FREE ACCESS
This paper reports on a protein-simulation grid that issues grid remote procedure calls (GridRPCs) to a special-purpose cluster machine for molecular dynamics simulations. The grid was implemented using Ninf-G, Torque, LAM, and the Globus Toolkit. To avoid the inefficiency of a single GridRPC session occupying all the nodes of the cluster, we designed the grid to work efficiently when multiple GridRPC sessions share the cluster. This was done by placing the dedicated nodes (PCs with special computation boards) under the management of the Torque system, enabling the manager to dynamically configure a cluster with the requested number of dedicated nodes. In addition, a new job type was added to the Globus Toolkit, and a new backend procedure was added to Ninf-G. The Ninf-G stub was separated from the processes that actually perform the force evaluation on the dedicated nodes. Simulations of two proteins gave promising results. Simulations performed using a four-node cluster and a 100-Mbps LAN for GridRPC sessions were 4.6 to 17.0 times faster than the same simulations performed on the local client PC, while their communication overhead was less than 20% of the total execution time. Even when the four-node cluster machine was shared between two distinct protein simulations, the two GridRPC communications did not interfere with each other, demonstrating the efficacy of multiple GridRPC sessions.
    Download PDF (264K)