-
Yoshiko Yasuda, Shinichi Kawamoto, Atsushi Ebata, Jun Okitsu, Tatsuo H ...
Article type: Parallel and Distributed Processing Technology
Subject area: Regular Paper
2005 Volume 1 Pages
1-14
Published: 2005
Released on J-STAGE: January 12, 2005
JOURNAL
FREE ACCESS
RX-NAS (Replicated eXpandable Network-Attached Storage), a scalable, reliable clustered NAS system designed for entry-level NAS, has been developed. RX-NAS is based on X-NAS, a simple clustered NAS architecture for entry-level NAS, and improves its reliability by adding new sets of X-NASs to the original one. The core feature of RX-NAS, on-line replication, replicates original file objects block by block to the new sets of X-NASs in real time without changing clients' environments. RX-NAS has other key features for maintaining the manageability of entry-level NAS: new synchronization and resynchronization functions can easily replicate original files and directories to other X-NAS systems, completely or partially, without changing clients' environments, and a health-check function eliminates a limitation on the configuration of RX-NAS and detects and reports errors that occur in the system. To validate the RX-NAS concept, an RX-NAS prototype based on NFSv3 was designed and tested. The tests show that RX-NAS improves system reliability while maintaining 80% of the throughput of X-NAS.
-
Motoharu Miyake, Hiroshi Inamura, Osamu Takahashi
Article type: Wireless/Mobile Networks
Subject area: Recommended Paper
2005 Volume 1 Pages
15-25
Published: 2005
Released on J-STAGE: January 12, 2005
JOURNAL
FREE ACCESS
A spurious timeout (STO) leads to an unnecessary go-back-N retransmission and throughput degradation, both of which hurt the user's TCP performance. In this paper, we propose an STO detection and congestion-window control algorithm, based on monitoring the first acknowledgment that follows an RTO, for suppressing both the unnecessary retransmission and the throughput degradation. This method is well suited to enhancing existing mobile communications systems because it requires neither additional information nor receiver modification.
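The detection idea above, classifying a timeout from the first acknowledgment that follows the RTO, can be illustrated with a generic timing heuristic (a sketch in the spirit of Eifel-style spurious-timeout detection; the function name and timing values are our own, not the paper's algorithm):

```python
def spurious_timeout(retransmit_time, first_ack_time, min_rtt):
    """Heuristic STO check: an ACK arriving sooner after the RTO
    retransmission than any reply to it possibly could must acknowledge
    the ORIGINAL segment, so the timeout was spurious (nothing was lost)
    and a go-back-N retransmission would be unnecessary."""
    return first_ack_time - retransmit_time < min_rtt

# Timeout fired at t=3.0 s and the segment was retransmitted; the path's
# minimum RTT is 0.5 s.  An ACK at t=3.1 s can only be for the original
# transmission, so the timeout is classified as spurious.
assert spurious_timeout(3.0, 3.1, 0.5) is True
assert spurious_timeout(3.0, 3.8, 0.5) is False
```

On a spurious classification, a sender would suppress the go-back-N retransmission and restore its congestion window instead of collapsing it.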
-
Daisuke Matsubara, Kazuho Miki, Hidenori Inouchi, Tohru Hoshi
Article type: Architecture for Advanced Collaboration Environment
Subject area: Regular Paper
2005 Volume 1 Pages
26-35
Published: 2005
Released on J-STAGE: January 12, 2005
JOURNAL
FREE ACCESS
We propose a P2P file sharing system that enables flexible and intuitive information sharing and management among a large group of users. The proposed system (NRBS: Network Resource Browsing System) supports a virtual directory that allows users to organize and manage distributed files through a simple and intuitive user interface. The system has a central management server that controls each user client, which allows centralized security management and strict content control. In this paper, we review conventional approaches for managing P2P file sharing and then propose a system based on the virtual directory. We explain the mechanism for associating links in the virtual directory with the actual file data stored on user clients' terminals. We evaluate the system in terms of usability, manageability, and security. The results show that the virtual directory improves usability and manageability while providing strict file security.
-
Hidenori Sato, Hiroto Matsuoka, Hitoshi Kitazawa, Akira Onozawa
Article type: Computer Graphics
Subject area: Regular Paper
2005 Volume 1 Pages
36-45
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
An algorithm for reconstructing a photorealistic 3D model from multiple-view images is proposed. The idea is based on the surface light field approach. In the algorithm, a geometric model is reconstructed as a visual hull using an image-based multi-pass algorithm we have developed, and the hull is then represented as a quadrilateral-meshed surface. Next, colors from the input images are assigned to each vertex according to viewing direction, using a new data structure we have developed: a hexagonal tessellation based on expansion and replacement of a buckyball. Finally, the hexagonal tessellation is represented as a hexagonal image whose pixels represent the colors of the corresponding input images. Experimental results for real objects show that 3D data can be generated automatically in a short time and that photorealistic data can be viewed from arbitrary viewpoints, even for objects with reflective or translucent surfaces.
-
Xiaodong Lu, Ivan Luque, Kinji Mori
Article type: P2P Data Allocation and Search Methods
Subject area: Regular Paper
2005 Volume 1 Pages
46-53
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
An adaptable and efficient infrastructure for the provision and utilization of popular information is very important in distributed information service systems. In this paper, we introduce a rating-oriented distributed information system sustained by push/pull mobile agents to cope with the rapidly changing requirements of providers and users. Based on this environment, an autonomous information allocation technology is proposed to place the most popular information close to users. Moreover, for when users' demands change, an autonomous information reallocation technology is proposed to achieve load balancing and to guarantee that users can always obtain the information through some pathway. We demonstrate the effectiveness of the proposed system through simulation and comparison with a conventional system, and the advantages of applying autonomous information reallocation within the system are shown quantitatively.
-
Daisuke Takemoto, Shigeaki Tagashira, Satoshi Fujita
Article type: P2P Data Allocation and Search Methods
Subject area: Regular Paper
2005 Volume 1 Pages
54-62
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
In this paper, we propose several distributed zone partitioning schemes for the Content-Addressable Network (CAN), which is known as a pure peer-to-peer system based on the Distributed Hash Table (DHT). The main objective of the proposed schemes is to balance the load of nodes in the CAN system in such a way that every node receives almost the same number of inquiries from the other nodes. Simulation results imply that, by using the proposed schemes instead of the randomized zone partitioning scheme originally implemented in the CAN system, the response time for each inquiry can be reduced to less than 75% of its original value.
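The load-balancing objective can be made concrete with a toy one-dimensional analogue (the paper's actual schemes are not specified in the abstract; the size-aware `big_pick` heuristic below is purely illustrative): under uniform hashing a zone's load is proportional to its size, so CAN's randomized splitting leaves some nodes with much larger zones than splitting the largest zone would.

```python
import random

def split(zones, pick):
    """A joining node splits the zone chosen by `pick` in half."""
    i = pick(zones)
    lo, hi = zones.pop(i)
    mid = (lo + hi) / 2
    zones += [(lo, mid), (mid, hi)]

def max_load(n_nodes, pick, seed=0):
    """Largest zone size after n_nodes have joined a [0, 1) key space.
    Under uniform hashing, load is proportional to zone size."""
    random.seed(seed)
    zones = [(0.0, 1.0)]
    for _ in range(n_nodes - 1):
        split(zones, pick)
    return max(hi - lo for lo, hi in zones)

rand_pick = lambda zs: random.randrange(len(zs))  # CAN's randomized choice
big_pick = lambda zs: max(range(len(zs)), key=lambda i: zs[i][1] - zs[i][0])

# Splitting the largest zone never does worse than a random split.
assert max_load(64, big_pick) <= max_load(64, rand_pick)
```

The real CAN operates on a d-dimensional torus and the paper's schemes are distributed, but the imbalance being corrected is the same.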
-
Hiroyuki Yamada, Hiroyuki Morikawa, Tomonori Aoyama
Article type: Protocol and Tools
Subject area: Regular Paper
2005 Volume 1 Pages
63-74
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
Our goal is to support both small delay and small delay fluctuation for real-time traffic in IEEE 802.11-based wireless LANs in a decentralized manner. Several previous studies aimed at supporting real-time traffic in IEEE 802.11 wireless LANs developed decentralized control mechanisms that achieve small delay by differentiating real-time from non-real-time traffic, but they cannot achieve small delay fluctuation because of the bursty nature of the IEEE 802.11 backoff mechanism. We propose a decentralized control mechanism for suppressing delay fluctuation in IEEE 802.11-based wireless LANs. Our main proposal is a new backoff algorithm, called decentralized delay fluctuation control (DDFC), which can suppress delay fluctuation in a fully decentralized manner. DDFC can easily be adopted in IEEE 802.11-based wireless LANs by replacing the current IEEE 802.11 backoff algorithm. We examine the performance of DDFC, applied to real-time traffic in an IEEE 802.11-based wireless LAN, by computer simulation. The results confirm that controlling real-time traffic with DDFC achieves not only small delay but also small delay fluctuation.
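The burstiness the authors point to comes from IEEE 802.11's binary exponential backoff: the contention window doubles after each failed attempt, so the random backoff delay spreads out rapidly. A minimal sketch (window bounds follow common 802.11 parameters; DDFC itself is not specified in the abstract):

```python
import random
import statistics

CW_MIN, CW_MAX = 15, 1023  # common IEEE 802.11 contention-window bounds

def backoff_slots(retry):
    """Binary exponential backoff: uniform draw from a window that doubles
    with every retry, which is what makes the delay fluctuate."""
    cw = min(CW_MAX, (CW_MIN + 1) * 2 ** retry - 1)
    return random.randint(0, cw)

random.seed(1)
# The spread (std. dev.) of the backoff delay grows quickly with retries.
spread = [statistics.pstdev([backoff_slots(r) for _ in range(20000)])
          for r in (0, 2, 4)]
assert spread[0] < spread[1] < spread[2]
```

A fluctuation-suppressing algorithm such as DDFC replaces this doubling, randomized draw with a schedule under which consecutive packets see similar delays.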
-
Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi
Article type: P2P Applications
Subject area: Regular Paper
2005 Volume 1 Pages
75-90
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
Today's Multi-player Online Games (MOGs) are challenged by infrastructure requirements because of their server-centric nature. Peer-to-peer overlay networks are an interesting alternative if they can implement the set of functions traditionally performed by central game servers. In this paper, we propose a Zoned Federation Model (ZFM) to adapt MOGs to peer-to-peer overlay networks. We also introduce the concepts of zone and zone owner to MOGs. A zone is a part of the whole game world, and a zone owner is the game server of a specific zone. According to the demands of the game program, each node actively changes its role to a zone owner. By dividing the whole game world into several zones, the workload of the central game server can be distributed to a federation of zones. To reduce the response latency overhead on data exchanges between a zone owner and its clients, we limit the use of a Distributed Hash Table (DHT) to the rendezvous point of each zone; actual data exchanges are carried out over direct TCP connections between a zone owner and its members. We also use the DHT as backup storage to cope with the resignation of a zone owner. We have implemented this zoned federation model as a middle layer between the game program and the DHT, and we evaluate our implementation with a prototypical multi-player game. Evaluation results indicate that our approach enables game creators to design scalable MOGs in the peer-to-peer environment with a response latency short enough to be acceptable for MOGs.
-
Katsuhiko Sato, Michiaki Katsumoto, Tetsuya Miki
Article type: P2P and Network Architectures
Subject area: Regular Paper
2005 Volume 1 Pages
91-103
Published: 2005
Released on J-STAGE: January 19, 2005
JOURNAL
FREE ACCESS
The Future Vision Distribution Service (FVDS) (Sato, et al., 2004) is an innovative video delivery service concept for the Internet ITS (Intelligent Transport System) domain, and Source Mobility Support Multicasting (SMM) (Sato, et al., 2004) comprises the new multicasting techniques needed to realize FVDS. However, FVDS and SMM lack application-level protocols for managing preceding and following vehicles on the same route, and they impose topological constraints on the access network when accommodating micro-mobility. To solve these problems, this paper presents two new protocols. The Mobile Vehicle Management (MVM) protocol is an application-level protocol that discovers a preceding vehicle on the same route and provides its multicast address to following vehicles, using information from the global positioning system (GPS) and a route-guiding function. The Self-Organizing Tree (SOT) protocol is a network-level protocol for the access network that organizes radio base stations into a logical tree topology in a self-forming and self-healing manner. This eliminates the topological constraints and gives the network robustness against failures and flexibility in design. To show how these protocols further the implementation of FVDS and SMM, this paper also details a software implementation model for each network node system; specifically, we have designed a software architecture, the composition elements of each protocol and their relations, and internal state machines for each element.
-
Susumu Yamazaki, Michihiro Matsumoto, Tsuneo Nakanishi, Teruaki Kitasu ...
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
104-116
Published: 2005
Released on J-STAGE: January 26, 2005
JOURNAL
FREE ACCESS
Aspect orientation is a new programming paradigm that can localize a cross-cutting concern in a single module. This paper proposes a new type of Java bytecode analyzer framework based on aspect orientation. It includes several new design and implementation techniques, some general and some specific to the domain of language systems. We also observe that aspect orientation improves extensibility, type safety, execution efficiency, and the simplicity of the API when compared with existing analyzer frameworks based on object orientation, such as Soot. This paper reports the following: structural extension of elementary objects that maintains type safety and execution efficiency; separation of the bytecode parser from concrete instruction sets; a visitor based on the stack-machine model; binary operations that are simple, extensible, and easy to maintain; and separation of non-functional concerns such as verification. We also observe that AspectJ currently has two limitations: it is not sufficiently expressive to structure aspects that strongly depend on inner structure, and it does not provide a general way to write advice that cannot be described with the information in its pointcut alone.
-
Reynald Affeldt, Naoki Kobayashi, Akinori Yonezawa
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
117-127
Published: 2005
Released on J-STAGE: January 26, 2005
JOURNAL
FREE ACCESS
We show how to model and verify a concurrent program using the Coq proof assistant. The program in question is an existing mail server written in Java. The approach we take is to use an original library that provides a language for modeling, a logic, and lemmas for verification of concurrent programs. First, we report on the modeling of the mail server. Using the language provided by the library, we build a model by (1) translating the original program and (2) building appropriate abstractions to model its environment. Second, we report on the verification of a property of the mail server. We compare this library-based approach with an alternative approach that directly appeals to the Coq language and logic for modeling and specification. We show that the library-based approach has many advantages. In particular, non-functional aspects (communications, non-determinism, multi-threading) are handled directly by the library and therefore do not require complicated modeling. Also, the model can be directly run using existing compilers or virtual machines, thus providing us with a certified implementation of the mail server.
-
Susumu Yamazaki, Takayuki Kando, Michihiro Matsumoto, Tsuneo Nakanishi ...
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
128-140
Published: 2005
Released on J-STAGE: January 26, 2005
JOURNAL
FREE ACCESS
A recent trend in program development is the use of generic software components such as libraries and frameworks. Typically, however, it is difficult to achieve both genericity and runtime execution efficiency simultaneously. Many researchers have therefore studied program specialization, a technique that automatically translates a generic program into an efficient program specialized for a specific runtime environment. However, it is very difficult to implement a system that can specialize practical applications. Although there are several possible reasons for this, this paper focuses on instruction-dependent processes: every analysis in existing program specializers must include instruction-dependent processing, so not only does the code size of the specializer grow, but maintainability and reliability also degrade. We therefore propose a new algorithm for logic-based binding-time analysis that uses reaching-definitions analysis, which is widely shared among many other analyses. We also evaluate how this technique simplifies the implementation of the binding-time analyzer: the code size is reduced by about 10%, and instruction-dependent processes are almost entirely eliminated.
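Reaching-definitions analysis, on which the proposed binding-time analysis builds, is the standard iterative dataflow computation; a minimal sketch (the three-block CFG and definition names below are our own example, not from the paper):

```python
def reaching_definitions(blocks, succ):
    """Classic iterative dataflow.  `blocks` maps block name -> (gen, kill)
    sets of definition ids; `succ` maps block name -> successor names.
    Iterates to a fixed point and returns the IN and OUT sets per block."""
    in_, out = {b: set() for b in blocks}, {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (gen, kill) in blocks.items():
            # IN[b] = union of OUT over all predecessors of b
            preds_out = set().union(*(out[p] for p in blocks if b in succ[p]))
            new_out = gen | (preds_out - kill)
            if new_out != out[b] or preds_out != in_[b]:
                in_[b], out[b] = preds_out, new_out
                changed = True
    return in_, out

# d1: x=1 in B1; d2: x=2 in B2; B1 branches to B2 and B3, B2 falls to B3.
blocks = {"B1": ({"d1"}, {"d2"}), "B2": ({"d2"}, {"d1"}), "B3": (set(), set())}
succ = {"B1": {"B2", "B3"}, "B2": {"B3"}, "B3": set()}
in_, out = reaching_definitions(blocks, succ)
assert in_["B3"] == {"d1", "d2"}  # both definitions of x reach B3
```

A binding-time analyzer can then consult these sets instead of re-inspecting individual instructions, which is the source of the simplification the abstract reports.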
-
Eiju Hirowatari, Kouichi Hirata, Tetsuhiro Miyahara, Setsuo Arikawa
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
141-152
Published: 2005
Released on J-STAGE: February 09, 2005
JOURNAL
FREE ACCESS
Inductive inference gives us a theoretical model of concept learning from examples. In this paper, we study refutable and reliable inductive inference of recursive real-valued functions. First we introduce the new criteria RealRefEx for refutable inference and RealRelEx for reliable inference. Then we compare these two criteria with RealEx for identification in the limit, RealFin for finite learning, and RealNum! for learning by enumeration, which were introduced in previous works, and investigate their interaction. In particular, we show that RealRefEx and RealRelEx are closed under union, similarly to the criteria RefEx and RelEx for inductive inference of recursive functions.
-
Ta Quoc Viet, Tsutomu Yoshinaga, Ben A. Abderazek, Masahiro Sowa
Article type: System Software
Subject area: Regular Paper
2005 Volume 1 Pages
153-165
Published: 2005
Released on J-STAGE: February 09, 2005
JOURNAL
FREE ACCESS
This paper proposes a middle-grain approach to constructing hybrid MPI-OpenMP solutions for SMP clusters from an existing MPI algorithm. Experiments on different cluster platforms show that our solutions outperform those based on the de facto MPI model in most cases, occasionally by as much as 40%. We also show that a thread-to-thread communication model consistently outperforms the traditional process-to-process communication model in hybrid solutions. In addition, the paper gives a detailed analysis of the hardware and software factors that affect the performance of MPI in comparison to hybrid models.
-
Sasiporn Usanavasin, Takahiro Nakamori, Shingo Takada, Norihisa Doi
Article type: Web Databases
Subject area: Regular Paper
2005 Volume 1 Pages
166-180
Published: 2005
Released on J-STAGE: March 09, 2005
JOURNAL
FREE ACCESS
Today the WWW contains not only a tremendous amount of information but also a variety of Web applications, such as online shopping. Searching for such applications can be difficult and time-consuming because current keyword-based approaches do not always deliver results that match the user's intention. The key underlying problem is that keywords capture neither the semantics of the user's query nor the functional capabilities of Web applications. This paper presents a multi-faceted approach to searching for Web applications. In our approach, the search process is based on various facets, such as service functionality, item (product), result type, inputs/outputs, and detailed item information. It matches queries against Web application profiles described in DAML-S, and the matching process is augmented with ontologies. Applying our approach to a set of Web applications yielded a precision of 93% and a recall of 99%.
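The reported figures are the standard retrieval metrics; as a reminder of how such numbers are computed (the toy sets below are our own, not the paper's data):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# 3 of the 4 retrieved applications are relevant; one relevant app is missed.
p, r = precision_recall({"a", "b", "c", "d"}, {"a", "b", "c", "e"})
assert (p, r) == (0.75, 0.75)
```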
-
Saranya Maneeroj, Yuka Kato, Katsuya Hakozaki
Article type: Research Papers
Subject area: Regular Paper
2005 Volume 1 Pages
181-192
Published: 2005
Released on J-STAGE: April 13, 2005
JOURNAL
FREE ACCESS
This paper proposes an advanced movie recommender system. The system is primarily based on content-based collaborative filtering, i.e., a hybrid filtering technique. Its distinctive point lies in an improved neighborhood formation method for obtaining high-quality neighbors. Current hybrid systems use users' opinions only in the form of rating values when selecting neighbors, which loses user preference features and tends to yield poor-quality neighbors. The proposed system instead uses users' opinions on various features of their preferences when selecting neighbors, which results in high-quality neighbors and, accordingly, high-quality recommendations. An experimental movie recommender system, called the Advanced Yawara system, has been developed to prove the effectiveness of the method. The evaluation results show that the Advanced Yawara system provides higher-quality recommendations than current hybrid systems.
-
Xiangshi Ren, Jing Kong, Xing-Qi Jiang
Article type: User Interfaces
Subject area: Regular Paper
2005 Volume 1 Pages
193-203
Published: 2005
Released on J-STAGE: April 20, 2005
JOURNAL
FREE ACCESS
As a well-known human performance model, Fitts' law has long been accepted and applied as a tool for evaluating pointing devices in the human-computer interaction field. However, doubts now exist regarding its validity. One challenge for researchers studying human performance models is the problem of the spatial distribution of input hits (i.e., the spatial constraint). We developed a new model based on temporal distribution as an alternative to the traditional models. The new model and the traditional models are compared in two experiments using AIC (Akaike's Information Criterion), a criterion for statistical model selection. The results show that the new model outperforms the traditional ones in performance evaluation.
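For context, the traditional model being challenged is Fitts' law, in the Shannon formulation MT = a + b * log2(D/W + 1). A small sketch of fitting its coefficients by least squares (the synthetic data and function names are our own):

```python
import math

def index_of_difficulty(d, w):
    """Shannon formulation of Fitts' index of difficulty, in bits,
    for target distance d and target width w."""
    return math.log2(d / w + 1)

def fit_fitts(observations):
    """Least-squares fit of MT = a + b * ID from (distance, width, time)
    triples; returns the intercept a and slope b."""
    ids = [index_of_difficulty(d, w) for d, w, _ in observations]
    ts = [t for _, _, t in observations]
    n = len(ids)
    mx, my = sum(ids) / n, sum(ts) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ids, ts))
         / sum((x - mx) ** 2 for x in ids))
    return my - b * mx, b

# Synthetic data generated exactly by MT = 0.1 + 0.2 * ID is recovered.
data = [(d, 10, 0.1 + 0.2 * index_of_difficulty(d, 10))
        for d in (20, 40, 80, 160)]
a, b = fit_fitts(data)
assert abs(a - 0.1) < 1e-9 and abs(b - 0.2) < 1e-9
```

AIC can then compare this spatial model against a temporal-distribution model fitted to the same observations, which is the comparison the paper's experiments perform.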
-
Hisashige Ando, Akira Asato, Motoyuki Kawaba, Hideki Okawara, William ...
Article type: Processor Architecture
Subject area: Regular Paper
2005 Volume 1 Pages
204-215
Published: 2005
Released on J-STAGE: June 01, 2005
JOURNAL
FREE ACCESS
The pursuit of instruction-level parallelism using more transistors produces diminishing returns and increases the power dissipation of general-purpose processors. This paper studies a chip multiprocessor (CMP) with smaller processor cores as a means to achieve high aggregate throughput and improved energy efficiency. The benefit of this design approach grows as semiconductor process scaling allows more cores per chip. We show the feasibility of a processor core 40% of the size of a baseline high-performance processor that delivers about 70% of its performance. A CMP populated with enough of these smaller cores to fill the same silicon area delivers 2.3 times the performance of the baseline processor scaled to the same technology on transaction processing represented by the TPC-C benchmark, and also achieves 38% higher energy efficiency.
-
Masasi Eto, Youki Kadobayashi, Suguru Yamaguchi
Article type: Distributed System Operation and Management
Subject area: Regular Paper
2005 Volume 1 Pages
216-225
Published: 2005
Released on J-STAGE: June 08, 2005
JOURNAL
FREE ACCESS
This paper presents an architecture that investigates the consistency of AS policies across all publicly accessible Internet Routing Registry (IRR) databases in the world, together with an architecture to prevent the growth of inconsistency. Since inconsistency hampers connectivity between ASs, the consistency of IRR databases is crucial for the stable operation of the Internet. Through our investigations, we found that a significant proportion of AS policies are either specified incorrectly or outdated. Based on this observation, we propose and implement a system that detects these inconsistencies and notifies operators so that they can be corrected. Finally, we evaluate these systems.
-
Yiwen Wang, Dongju Li, Tsuyoshi Isshiki, Hiroaki Kunieda
Article type: Hardware/Software Co-design
Subject area: Regular Paper
2005 Volume 1 Pages
226-233
Published: 2005
Released on J-STAGE: June 08, 2005
JOURNAL
FREE ACCESS
This paper first presents a novel system-on-chip (SoC) architecture consisting of a 32-bit RISC processor, on-chip memory, state-of-the-art IPs, and an embedded full-custom bit-serial FPGA (BSFPGA) I/O interface. The system inherits the compatibility of the AMBA architecture and the flexibility of the BSFPGA, so it can be used for various types of applications without any additional I/O pins. An example application realizes a fingerprint authentication system on a single chip. The paper then presents a design flow for an SoC that includes a full-custom block: with an RTL model of the BSFPGA, it enables timing simulation of the total system at the RTL level and timing verification at the transistor level.
-
Cui Zhu, Hiroyuki Kitagawa, Christos Faloutsos
Article type: Research Papers
Subject area: Regular Paper
2005 Volume 1 Pages
234-243
Published: 2005
Released on J-STAGE: July 13, 2005
JOURNAL
FREE ACCESS
Detecting outliers is an important problem in applications such as fraud detection, financial analysis, and health monitoring. Most such applications involve high-dimensional datasets. Many recent approaches detect outliers according to some reasonable, pre-defined concept of an outlier (e.g., distance-based or density-based). Most of these concepts are proximity-based, defining an outlier by its relationship to the rest of the data. However, in high-dimensional space the data become sparse, which implies that every object can be regarded as an outlier from the point of view of similarity. Furthermore, a fundamental issue is that the notion of which objects are outliers typically varies between users, problem domains, or even datasets. In this paper, we present a novel solution to this problem: detecting outliers based on user examples in high-dimensional datasets. By studying the behavior of projections of a few such outlier examples, the proposed method discovers the hidden view of outliers and picks out further objects that are outstanding in the projection where the examples stand out most. Our experiments on both real and synthetic datasets demonstrate the ability of the proposed method to detect outliers that match users' intentions.
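A drastically simplified, single-axis caricature of the idea (the paper searches richer multi-dimensional projections; the threshold, data, and function name below are our own): score each axis by how extreme the user's example outliers look there, then flag whatever else is extreme on the winning axis.

```python
import statistics

def outliers_by_example(data, examples, z_cut):
    """Pick the 1-D projection (feature) where the user's example outliers
    are most extreme, then flag every object whose absolute z-score on
    that projection reaches z_cut."""
    dims = len(data[0])

    def zscores(d):
        col = [row[d] for row in data]
        mu, sd = statistics.mean(col), statistics.pstdev(col)
        return {i: abs((row[d] - mu) / sd) for i, row in enumerate(data)}

    # The winning projection makes even the least extreme example stand out.
    best = max(range(dims), key=lambda d: min(zscores(d)[i] for i in examples))
    z = zscores(best)
    return best, [i for i, s in z.items() if s >= z_cut]

# Nine inlier points plus two objects extreme only on axis 1; marking one
# of them as an example outlier also uncovers the other.
data = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)] + [(0, 30), (0, 33)]
best, flagged = outliers_by_example(data, examples=[9], z_cut=1.5)
assert best == 1 and flagged == [9, 10]
```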
-
Robby T. Tan, Katsushi Ikeuchi
Article type: Papers
Subject area: Regular Paper
2005 Volume 1 Pages
244-267
Published: 2005
Released on J-STAGE: July 13, 2005
JOURNAL
FREE ACCESS
In the real world, the color appearance of objects is generally not consistent. It depends principally on two factors: the illumination spectral power distribution (illumination color) and intrinsic surface properties. Consequently, to obtain consistent color descriptors of objects, we have to deal with both factors. The former is commonly referred to as color constancy, the capability to estimate and discount the illumination color, while the latter is identical to the problem of recovering body color from highlights. This recovery is crucial because highlights emitted from opaque inhomogeneous objects can cause surface colors to be inconsistent with respect to changes of viewing and illuminant directions. We base our color constancy methods on analyzing highlights, or specularities, emitted from opaque inhomogeneous objects. We have derived a linear correlation between image chromaticity and illumination chromaticity, which is clearly described in inverse-intensity chromaticity space, a novel two-dimensional space we introduce. Through this space, we can effectively estimate illumination chromaticity (illumination color) from both uniformly colored surfaces and highly textured surfaces in a single integrated framework, a significant advance over existing methods. For separating reflection components, we propose an approach based on an iterative framework and a specular-free image: an image that is free from specularities yet has a body color different from the input image. In general, the approach relies principally on image intensity and color. All the methods of color constancy and reflection-component separation proposed in this paper are analyzed in terms of physical phenomena of the real world, making the estimation more accurate and giving the analysis a strong basis. In addition, all methods require only a single input image, which is not only practical but also challenging in terms of complexity.
-
Masato Aoba, Yoshiyasu Takefuji
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
268-281
Published: 2005
Released on J-STAGE: July 13, 2005
JOURNAL
FREE ACCESS
We propose a neural preprocessing approach for a video-based gesture recognition system. A second-order neural network (SONN) and a self-organizing map (SOM) are employed to extract moving hand regions and to normalize motion features, respectively. The SONN is more robust to noise than the frame-difference technique. The obtained velocity feature vectors are translated into a normalized feature space by the SOM while preserving their topology, and the transition of the activated node in the topological map is classified by DP matching. The topological nature of the SOM is well suited to data normalization for the DP matching technique. Experimental results show that these neural networks work effectively for gesture pattern recognition: the SONN reduces noise from noisy backgrounds, and the SOM provides robustness to spatial scaling of the input images, which derives from its robustness to velocity scaling.
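DP matching, used here to classify sequences of activated SOM nodes, is a dynamic-programming alignment; a minimal DTW-style sketch over 1-D sequences (real inputs would be sequences of SOM node coordinates, and the function name is our own):

```python
def dp_match(seq_a, seq_b):
    """Dynamic-programming (DTW-style) distance between two sequences:
    the minimum summed per-step distance over all monotone alignments,
    so patterns that differ only in speed still match closely."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip in seq_a
                                 cost[i][j - 1],      # skip in seq_b
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

# A time-stretched copy of a pattern matches it better than a different one.
ref = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other = [3, 2, 1, 0, 1, 2, 3]
assert dp_match(ref, stretched) < dp_match(ref, other)
```

This tolerance to temporal stretching is why DP matching pairs well with gesture trajectories, whose speed varies from performance to performance.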
-
Shinsaku Kiyomoto, Toshiaki Tanaka, Mariko Yoshida, Masahiro Kuroda
Article type: Network Security
Subject area: Regular Paper
2005 Volume 1 Pages
282-293
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
In this paper, we propose a new authentication mechanism for mobile environments, called Self-Delegation. In this mechanism, a user stores information for strict authentication in a tamper-resistant module that can be kept securely at home. Time-limited authority is delegated to the mobile terminal by communicating locally with the tamper-resistant module. After the delegation, a remote service can authenticate the user for a limited time. We propose two self-delegation schemes and analyze their security based on a security model that we define. Furthermore, we have implemented the self-delegation and authentication protocols on a PDA and a Java card, both of which have an ISO 14443 interface, and show the feasibility of the implemented protocols.
-
Raylin Tso, Ying Miao, Takeshi Okamoto, Eiji Okamoto
Article type: Security Infrastructure
Subject area: Regular Paper
2005 Volume 1 Pages
294-303
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
In the Shamir (t, n)-threshold scheme, the dealer constructs a random polynomial f(x) ∈ GF(p)[x] of degree at most t-1 in which the constant term is the secret K ∈ GF(p). However, if the chosen polynomial f(x) is of degree less than t-1, then a conspiracy of any t-1 participants can reconstruct the secret K; on the other hand, if the degree of f(x) is greater than t-1, then even t participants cannot reconstruct the secret K properly. To prevent either case, the degree of the polynomial f(x) should be exactly t-1 if the dealer claims that the threshold of the scheme is t, and there should be some way for participants to verify that the threshold is indeed exactly t. A few known verifiable threshold schemes provide such an ability, but their security rests on cryptographic assumptions. The purpose of this paper is to propose threshold-verification protocols for the Shamir (t, n)-threshold scheme from the viewpoint of unconditional security.
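The degree condition can be checked concretely with a toy Shamir implementation (a sketch over a small prime, not the paper's verification protocol; real deployments use large primes and the helper names are ours):

```python
import random

P = 2087  # small prime for illustration only

def make_shares(secret, t, n, degree=None):
    """Shamir (t, n) sharing: evaluate a random polynomial with constant
    term `secret` at x = 1..n.  Degree defaults to the honest t-1."""
    deg = t - 1 if degree is None else degree
    coeffs = [secret] + [random.randrange(1, P) for _ in range(deg)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Honest dealer: degree exactly t-1, so t shares recover K.
shares = make_shares(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234

# Cheating dealer: degree t-2 silently lowers the threshold to t-1.
low = make_shares(1234, t=3, n=5, degree=1)
assert reconstruct(low[:2]) == 1234
```

The second assertion is exactly the dealer misbehavior the paper's protocols are meant to expose: with degree t-2, any t-1 participants can already conspire to recover K.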
-
Hidenori Kuwakado, Hatsukazu Tanaka
Article type: Security Infrastructure
Subject area: Recommended Paper
2005 Volume 1 Pages
304-312
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
When a hard disk drive (HDD) is recycled, it is recommended that all files on the HDD be repeatedly overwritten with random strings to protect their confidentiality. However, overwriting them takes a long time. This problem can be solved by applying the all-or-nothing transform (AONT) to the filesystem of the HDD. To use the HDD economically, it is desirable to use a length-preserving AONT (LP-AONT), whereas previous AONTs increase the size of a file, and no LP-AONT is secure under previous security definitions. This does not mean the LP-AONT is useless; previous security definitions are simply too strict for practical applications. By introducing the ambiguity of a message, we propose more practical security definitions of the AONT, and we show a secure implementation of the LP-AONT under the proposed definitions. Our analysis shows that our implementation is nearly optimal in terms of the success probability of an adversary; that is, the ambiguity of one message block allows us to construct an LP-AONT as secure as previous AONTs.
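For context, the classic non-length-preserving construction the paper improves on is Rivest's package transform, which appends one extra block; a sketch (function names ours, SHA-256 standing in for the pseudorandom function):

```python
import hashlib
import secrets

def _mask(key, i):
    # Pseudorandom 32-byte mask derived from a key and a block index.
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def package(blocks):
    """Rivest-style all-or-nothing transform over 32-byte blocks.  The
    inner key is hidden in an EXTRA tail block, so every output block is
    needed to recover any input block (hence not length-preserving)."""
    k = secrets.token_bytes(32)
    pseudo = [_xor(b, _mask(k, i)) for i, b in enumerate(blocks)]
    tail = k
    for i, p in enumerate(pseudo):
        tail = _xor(tail, _mask(p, i))
    return pseudo + [tail]

def unpackage(package_blocks):
    """Recover the key from the tail and all pseudo-blocks, then unmask."""
    *pseudo, tail = package_blocks
    k = tail
    for i, p in enumerate(pseudo):
        k = _xor(k, _mask(p, i))
    return [_xor(p, _mask(k, i)) for i, p in enumerate(pseudo)]

msg = [b"A" * 32, b"B" * 32, b"C" * 32]
assert unpackage(package(msg)) == msg  # 4 blocks out for 3 blocks in
```

The LP-AONT studied in the paper removes this extra tail block, which is precisely why it needs the relaxed, ambiguity-based security definitions.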
-
Raylin Tso, Ying Miao, Takeshi Okamoto, Eiji Okamoto
Article type: Security Infrastructure
Subject area: Regular Paper
2005 Volume 1 Pages
313-321
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
Verifiable secret sharing schemes proposed so far only allow participants to verify whether their shares are correct. In this paper, we propose a new protocol that allows participants not only to verify the correctness of their shares but also to revise faulty shares. This is achieved cooperatively by the participants, without any assistance from the dealer. To the best of our knowledge, this protocol is the first to provide such an ability. Having shares corrected by the participants instead of the dealer is important in many situations. In addition, the protocol is also useful for adding new participants without the dealer's assistance.
-
Yuichiro Kanzaki, Hiroshi Igaki, Masahide Nakamura, Akito Monden, Ken- ...
Article type: Security and Society
Subject area: Regular Paper
2005 Volume 1 Pages
322-334
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
This paper presents a method to evaluate the risk of information leakage in software processes for security-sensitive applications. A software process is modeled as a series of sub-processes, each of which produces new work products from input products. Since a process is usually conducted by multiple developers, knowledge of work products is shared among the developers. Through this collaboration, a developer may share with others the knowledge of products that are not related to the process. We capture the transfer of such irrelevant product knowledge as information leakage in a software process. In this paper, we first formulate the problem of information leakage by introducing a formal software process model. Then, we propose a method to derive the probability that each developer d knows each work product p at a given step of software development. The probability reflects the possibility that someone leaked the knowledge of p to d. We also conduct three case studies to show the applicability of the method to practical settings. In the case studies, we evaluate how the risk of information leakage is influenced by the collaboration among developers, the optimal developer assignment, and the structure of the software process. As a result, we show that the proposed method provides a simple yet powerful means of performing quantitative analysis of information leakage in a security-sensitive software process.
-
Atsuko Miyaji, Kozue Umeda
Article type: Security Infrastructure
Subject area: Regular Paper
2005 Volume 1 Pages
335-348
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
The concept of a group signature allows a group member to sign messages anonymously on behalf of the group. In the event of a dispute, a designated entity can reveal the identity of the signer. Previous group signature schemes use an RSA-signature-based membership certificate and a signature based on a proof of knowledge (SPK) to prove possession of a valid membership certificate. In these schemes, all SPKs are generated over a group of unknown order, which requires more work and memory than a group of publicly known order.
In this paper, we present an efficient group signature scheme with a membership revocation function. Our membership certificate is based on a Nyberg-Rueppel signature (NR-signature) over a group of known order. We also reconstruct all SPKs that prove possession of a “valid” (non-revoked) membership certificate. As a result, our scheme is more efficient than other group signature schemes based on the NR-signature.
-
Yusuke Sakabe, Masakazu Soshi, Atsuko Miyaji
Article type: Security and Society
Subject area: Regular Paper
2005 Volume 1 Pages
349-361
Published: 2005
Released on J-STAGE: August 10, 2005
JOURNAL
FREE ACCESS
In Java programs, it is difficult to protect intellectual property rights and secret information in untrusted environments, since the programs are easy to decompile and reverse engineer. Consequently, the realization of software obfuscation becomes increasingly important. Unfortunately, previous software obfuscation techniques share a major drawback: they lack a theoretical basis, and thus it is unclear how effective they are. We therefore propose new software obfuscation techniques for Java in this paper. Our obfuscation techniques take advantage of features of object-oriented languages and drastically reduce the precision of points-to analysis of the programs. We show that precise points-to analysis of obfuscated programs is NP-hard, and this fact provides a theoretical basis for our obfuscation techniques. Furthermore, we present some empirical experiments that demonstrate the effectiveness of our approaches.
-
Carlos Perez Leguizamo, Minoru Takaishi, Sotaro Kimura, Kinji Mori
Article type: Distributed Systems Operation and Management
Subject area: Regular Paper
2005 Volume 1 Pages
362-369
Published: 2005
Released on J-STAGE: September 07, 2005
JOURNAL
FREE ACCESS
With the exponential growth of the World Wide Web in recent years and the progress of information technology, the business market is changing its strategy for a modern online business environment. The Autonomous Decentralized Database System (ADDS), based on autonomously coordinating subsystems, has been proposed as a system architecture to meet innovative e-business requirements. The autonomy and decentralization of subsystems help achieve short response times in highly competitive situations, and an autonomous mobile-agent-based coordination has been proposed to achieve flexibility in a highly dynamic environment. Furthermore, this paper proposes an autonomous reconfiguration technology for ADDS in order to adapt the system to changing heterogeneous requirements. Based on the complementary requirements of the subsystems, a heterogeneity distribution is defined to drive the reconfiguration process. This autonomous reconfiguration technology consists of autonomous division and integration of the system, based on the ever-changing situation, to assure short response times. Simulation of the proposed technology shows its effectiveness for large-scale systems and rapidly evolving situations.
-
Motoharu Miyake, Hiroshi Inamura
Article type: Network Protocols
Subject area: Regular Paper
2005 Volume 1 Pages
370-380
Published: 2005
Released on J-STAGE: September 07, 2005
JOURNAL
FREE ACCESS
Conventional TCP fails to achieve optimal performance because it does not handle well the loss of retransmitted segments generated by the fast retransmit algorithm and by the response to partial ACKs under fast recovery. This paper introduces an algorithm that recovers these lost retransmissions for NewReno TCP and details the steps needed to implement it. The algorithm retransmits carefully by considering the loss of unacknowledged segments, and it is complemented by two options for restoring the congestion control state; both moderately reduce the transmission rate, which mitigates network congestion.
ns-2 simulations show that the algorithm can overcome the loss of retransmitted segments. Moreover, it suppresses unnecessary throughput degradation more effectively than the recovery of lost retransmissions in Reno TCP and equals the performance offered by the SACK-based algorithm. The two options for restoring the congestion control state are also shown to offer adequate performance under retransmitted-segment loss.
-
Tomonori Fujita, Masanori Ogawara
Article type: High Reliability
Subject area: Regular Paper
2005 Volume 1 Pages
381-393
Published: 2005
Released on J-STAGE: September 14, 2005
JOURNAL
FREE ACCESS
Arbre is a file system that guarantees the integrity of the entire file system on an untrusted remote block-level storage system with a small amount of trusted storage. Arbre does not require changes to the remote block-level storage systems commercially available today. Even if an unauthorized person manages to get complete access to the storage system, they cannot modify or replay any data without being detected. This is achieved by organizing all file system blocks as a single tree and storing their hashes as a part of metadata to later verify the correctness of their contents.
-
Takahiro Hirofuchi, Eiji Kawai, Kazutoshi Fujikawa, Hideki Sunahara
Article type: Network
Subject area: Regular Paper
2005 Volume 1 Pages
394-406
Published: 2005
Released on J-STAGE: September 14, 2005
JOURNAL
FREE ACCESS
Personal computing with affordable computers and peripheral devices is becoming more popular. To use such devices more efficiently and improve their usability, people want to share surrounding peripheral devices among computers without any modification of their own computing environments. Recent device sharing technologies in the pervasive computing area are not sufficient for the peripheral devices of personal computers because they do not provide network transparency for applications and device drivers. In this paper, we propose USB/IP as a transparent device sharing mechanism over IP networks. This advanced device sharing mechanism builds on modern, sophisticated peripheral interfaces and their support in operating systems. With the Virtual Host Controller Interface Driver, which we have implemented as a peripheral bus driver, users can share diverse devices over networks without any modification of existing operating systems and applications. Experiments show that USB/IP has practical I/O performance for various USB devices, including isochronous ones. We also describe performance optimization criteria for further improvements.
-
Harumichi Nishimura
Article type: Invited Papers
Subject area: Invited Paper
2005 Volume 1 Pages
407-414
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
The notion of advised computation was introduced by Karp and Lipton to represent non-uniform complexity in terms of Turing machines. Since then, advised computation has been one of the basic concepts of computational complexity. Recently, research on advised computation has also begun in the field of quantum computing. This paper reviews the study of advised quantum computation.
-
Kazuo Iwama, Rudy Raymond, Shigeru Yamashita
Article type: Computational Theory
Subject area: Regular Paper
2005 Volume 1 Pages
415-425
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
An oracle with bias ε is an oracle that answers queries correctly with probability at least 1/2 + ε. In this paper, we study upper and lower bounds on the quantum query complexity of oracles with bias ε. For general upper bounds, we show that for any quantum algorithm solving some problem with high probability using T queries of perfect quantum oracles, i.e., oracles with ε = 1/2, there exists a quantum algorithm solving the same problem, also with high probability, using O(T/ε) queries of the corresponding biased quantum oracles. As corollaries, we can show robust quantum algorithms and gaps between biased quantum and classical oracles, e.g., by exhibiting a problem whose quantum query complexity is O(N/ε) while its classical query complexity is lower bounded by Ω(N log N/ε²). For general lower bounds, we generalize Ambainis's quantum adversary argument to biased quantum oracles and obtain the first lower bounds with an explicit bias factor. As one application, we provide another proof of the optimality of quantum algorithms for the so-called quantum Goldreich-Levin problem, which was proved before by Adcock et al. using different and more complicated methods.
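The ε² factor in the classical bound reflects the standard classical cost of amplifying a biased oracle by majority voting, which can be sketched as follows (illustrative Python; the Chernoff-style repetition count is a textbook bound, and all names are assumptions rather than the paper's constructions):

```python
# Classical amplification of a biased oracle by majority vote: with
# bias eps, on the order of 1/eps**2 repeated queries are needed to
# make the majority answer correct with high probability -- the
# 1/eps**2 factor appearing in the classical lower bound above.
import math, random

def biased_oracle(truth, eps):
    """Answers correctly with probability 1/2 + eps."""
    return truth if random.random() < 0.5 + eps else (not truth)

def amplified_query(truth, eps, delta=1e-3):
    # Chernoff bound: k >= ln(1/delta) / (2*eps**2) repetitions give
    # a majority answer that errs with probability at most delta.
    k = math.ceil(math.log(1 / delta) / (2 * eps ** 2))
    votes = sum(biased_oracle(truth, eps) for _ in range(k))
    return votes > k / 2          # the majority answer

random.seed(0)
results = [amplified_query(True, eps=0.1) for _ in range(100)]
assert sum(results) >= 95         # almost always correct
```

The quantum upper bound quoted above needs only O(T/ε) queries, a quadratic saving in the bias over this classical 1/ε² repetition.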
-
Yumiko Murakami, Masaki Nakanishi, Shigeru Yamashita, Katsumasa Watana ...
Article type: Computational Theory
Subject area: Regular Paper
2005 Volume 1 Pages
426-435
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
Even though quantum computation is useful for solving certain problems, classical computation is more powerful in some cases. It is thus significant to compare the abilities of quantum computation and its classical counterpart on simple computation models such as automata. In this paper we focus on the quantum pushdown automata defined by Golovkins in 2000, who showed that the class of languages recognized by quantum pushdown automata properly contains the class of languages recognized by finite automata. However, the entire relationship between the recognition abilities of quantum and classical pushdown automata is still unknown. As a partial answer, we show that quantum pushdown automata can solve with certainty a certain problem that cannot be solved by any deterministic pushdown automaton.
-
Kentaro Imafuku, Hideki Imai
Article type: Computational Theory
Subject area: Regular Paper
2005 Volume 1 Pages
436-441
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
We study the problem of distinguishing two heat baths, given information from an ancilla (a quantum or classical two-level system) interacting with the baths. Analysing the dynamics of the ancilla system in the Markov approximation regime, we show that the process by which a “quantum ancilla” acquires quantum distinguishability is described as a bifurcation from the time evolution of classical distinguishability.
-
Masaki Nakanishi
Article type: Invited Papers
Subject area: Invited Paper
2005 Volume 1 Pages
442-449
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
Quantum automata have been studied as simple quantum computation models. They can be considered models of small (or restricted) quantum computers. In this paper, we describe several kinds of quantum automata and show their power in comparison with their classical counterparts. We also describe quantum automata that have additional classical computational resources. Introducing classical computational resources can enhance the power of quantum automata, since this approach relaxes restrictions such as the reversibility of state transitions.
-
Mitsuru Hamada
Article type: Invited Papers
Subject area: Invited Paper
2005 Volume 1 Pages
450-460
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
This article gives a formula to evaluate the performance of a class of algebraic codes. The class includes quantum codes as well as classical ones. The formula relates a bound on the weight spectrum (distribution), or its generalization, of a code with an upper bound on its decoding error probability.
-
Kazuo Iwama, Akinori Kawachi, Shigeru Yamashita
Article type: Invited Papers
Subject area: Invited Paper
2005 Volume 1 Pages
461-469
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
This paper reviews research on quantum oracle computation when the oracles are not perfect, i.e., when they may return wrong answers. We call such oracles biased oracles and discuss a formal model of them. We then provide an intuitive explanation of how quantum search with biased oracles by Høyer et al. (2003) works. We also review the method of Buhrman et al. (2005) for obtaining all the answers of a quantum biased oracle without any overhead compared to the perfect-oracle case. Moreover, we discuss two special cases of quantum biased oracles and their interesting properties, which are not found in the corresponding classical cases. Our discussion implies that the model of quantum biased oracles adopted by existing research is natural.
-
Hirotada Kobayashi, François Le Gall
Article type: Invited Papers
Subject area: Invited Paper
2005 Volume 1 Pages
470-477
Published: 2005
Released on J-STAGE: October 05, 2005
JOURNAL
FREE ACCESS
Since Shor's discovery of an efficient quantum algorithm for integer factoring, hidden subgroup problems have played a central role in the development of efficient quantum algorithms. Despite many intensive studies, no efficient quantum algorithms are known for the hidden subgroup problem over many non-Abelian groups. Of particular interest are the hidden subgroup problems for the symmetric group and for the dihedral group, because an efficient algorithm for the former implies an efficient solution to the graph isomorphism problem, and one for the latter essentially solves a certain lattice-related problem whose hardness is assumed in cryptography. This paper focuses on the latter case and gives a comprehensive survey of known facts related to the dihedral hidden subgroup problem.
-
Seika Abe, Masami Hagiya, Ikuo Nakata
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
478-495
Published: 2005
Released on J-STAGE: October 26, 2005
JOURNAL
FREE ACCESS
This paper describes a generic intermediate language, called LIR, and a retargetable code generator for LIR, both of which were developed as part of the compiler for the COINS (COmpiler INfraStructure) project. Although the purpose and basic concepts of LIR are similar to those of RTL, the intermediate language of GCC, LIR is defined as an independent and self-contained programming language that has the constructs of a high-level language, such as variable declarations, and a formal semantics. As a result, LIR has several advantages over other representations currently in use. We can describe all compiler back-end passes as program transformations. Consequently, LIR provides a concise interface for interaction with the compiler for users who want to replace part of the compiler with their own code. Most recently developed retargetable code generators, such as Burg and IBurg, are based on the dynamic programming (DP) matching method. Their machine description languages consist of rewriting rules with code generation actions. However, the rules used to describe a machine do not correspond directly to the existing instructions of the target machine. On the other hand, the description language of GCC consists of descriptive statements that correspond to the target's existing machine instructions. However, the GCC code generator does not produce optimal code because it is not based on the DP method. Our code generator combines the DP and GCC approaches by translating the GCC descriptive statements into rewriting rules suitable for the DP matching method. Furthermore, DP matching is itself implemented as a transformation of an LIR program, and later transformations such as register allocation are applied to the resulting LIR program.
-
Taro Suzuki, Satoshi Okui
Article type: Regular Papers
Subject area: Regular Paper
2005 Volume 1 Pages
496-507
Published: 2005
Released on J-STAGE: October 26, 2005
JOURNAL
FREE ACCESS
In this paper we propose a new framework for transformations of XML documents based on an extension of regular expression types, called incomplete regular expression types. An incomplete regular expression type is a restricted second-order extension of regular expression (RE) types: an incomplete RE type can be applied to arbitrary RE types as its arguments. This is accomplished by introducing holes into types. We give a version of second-order rewrite systems designed around our type system. Matching a subterm against the left-hand side of a rewrite rule yields a substitution that binds second-order terms to the variables of incomplete type. This feature is practically useful when we want to reuse “the upper parts” of XML documents in a transformation. We demonstrate that the introduction of incomplete regular expression types gives programmers much flexibility. In this paper, we mainly concentrate on type-theoretic issues and give some properties of incomplete types. We also show that types are preserved under rewriting.
-
Chunsheng Hua, Haiyuan Wu, Toshikazu Wada, Qian Chen
Article type: Regular Section Papers
Subject area: Regular Paper
2005 Volume 1 Pages
508-517
Published: 2005
Released on J-STAGE: November 02, 2005
JOURNAL
FREE ACCESS
We previously proposed a K-means clustering based target tracking method that, unlike template matching, works robustly when tracking an object with a hole through which the background can be seen (e.g., a mosquito coil); hereafter we call this problem background interfusion (or the interfused background). This paper presents a new method that overcomes the drawbacks of the previous method, namely its low speed and its instability under changes of shape and size. Our new tracking model consists of a single target center and a variable ellipse model for representing non-target pixels. The contributions of our new method are: 1) the original K-means clustering is replaced by a 2∞-means clustering, and the non-target cluster center is adaptively picked from the pixels on the ellipse; this modification reduces the number of distance computations and also improves the stability of target detection. 2) The ellipse parameters are adaptively adjusted according to the target detection result; this adaptation improves robustness against scale and shape changes of the target. Through various experiments, we confirmed that the new method improves the speed and robustness of our original method.
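The two-cluster labeling idea — one target center competing against non-target centers sampled on the surrounding ellipse — can be caricatured on scalar "colors" (illustrative Python; a deliberately minimal sketch, not the authors' algorithm, and all names are hypothetical):

```python
# Caricature of the two-cluster labeling: each pixel inside the search
# region is labeled target or background by comparing its color to the
# single target-center color and to the nearest of the background
# colors sampled on the ellipse boundary.  Pixels of the interfused
# background (seen through a hole) match an ellipse sample and are
# correctly rejected.  Illustration only; colors are scalars.

def classify(pixel_color, target_color, ellipse_colors):
    d_target = abs(pixel_color - target_color)
    d_bg = min(abs(pixel_color - c) for c in ellipse_colors)
    return "target" if d_target < d_bg else "background"

target_color = 200                 # bright object
ellipse_colors = [10, 20, 30]      # dark background sampled on the ellipse

assert classify(190, target_color, ellipse_colors) == "target"
assert classify(25, target_color, ellipse_colors) == "background"  # hole pixel
```

Because only the target center and the ellipse samples act as cluster centers, each pixel needs just a handful of distance computations, which is the speed gain the abstract describes.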
-
Masanori Takata, Masaki Bandai, Takashi Watanabe
Article type: Special Issue on Selected Papers from ICMU 2005
Subject area: Regular Paper
2005 Volume 1 Pages
518-527
Published: 2005
Released on J-STAGE: November 09, 2005
JOURNAL
FREE ACCESS
In recent years, several MAC (Medium Access Control) protocols using directional antennas have been proposed for mobile ad hoc networks (MANETs), including our MAC protocol called SWAMP (Smart antennas based Wider-range Access MAC Protocol). These are typically referred to as directional MAC protocols. This paper first summarizes the proposed directional MAC protocols and outlines their common issues, which reduce the probability of successful transmission, such as location information staleness, deafness, and the directional hidden- and exposed-terminal problems that arise from directional transmissions. The paper then formulates and analyzes these issues and proposes solutions to them, focusing on location information staleness. The experimental results show that mobility prediction and the optimization of parameters associated with location information staleness, such as the beamwidth and the lifetime of table entries, can mitigate location information staleness and improve overall network performance.
-
Takayuki Shiraishi, Munenobu Nagata, Naoki Shibata, Yoshihiro Murata, ...
Article type: Special Issue on Selected Papers from ICMU 2005
Subject area: Regular Paper
2005 Volume 1 Pages
528-536
Published: 2005
Released on J-STAGE: November 09, 2005
JOURNAL
FREE ACCESS
In order to allow tourists to travel to multiple destinations efficiently, we need a personal navigation system that computes and shows the best route to the next destination and facilitates the composition of a schedule for visiting those destinations, taking account of various requirements such as the relative importance of destinations, time restrictions, and travel expenses. There are sometimes tradeoffs between these requirements. In this paper, we extend our existing personal navigation system, P-Tour, in two ways: (1) allowing users to optimize their tour schedules under multiple conflicting criteria such as total expenses and satisfaction degrees; and (2) navigating users to the next destination in a more efficient way. We have implemented the above extensions and integrated them into P-Tour. Through experiments, we show the effectiveness of the proposed extensions.
-
Misato Sasaki, Christian Noack, Hidetoshi Yokota, Akira Idoue
Article type: Special Issue on Selected Papers from ICMU 2005
Subject area: Regular Paper
2005 Volume 1 Pages
537-544
Published: 2005
Released on J-STAGE: November 09, 2005
JOURNAL
FREE ACCESS
As location positioning systems such as GPS become popular, there is a growing demand for location-based applications. It is getting easier to utilize map information by connecting a GPS receiver to a PC or PDA. In line with this momentum, GPS receivers embedded in mobile phones and applications that use the user's location in real time are widely available. Location information is also gaining attention as a form of meta-data for web content and is applied to the semantic web, which provides advanced search services using meta-data. Taking advantage of this situation, the creation of web content “on the spot”, in a flexible form on a mobile phone, is becoming important. In this paper, we propose a system called “LocationWeb”, which provides location-based web content search and creation on the mobile phone. It uses location information as meta-data of web content and enables handy search services based on real-time location information.
-
Xiaolong Li, Qing-An Zeng
Article type: Special Issue on Selected Papers from ICMU 2005
Subject area: Regular Paper
2005 Volume 1 Pages
545-551
Published: 2005
Released on J-STAGE: November 09, 2005
JOURNAL
FREE ACCESS
In this paper, we use a discrete Markov chain model to analyze the throughput of the CSMA/CA protocol in a wireless local area network, considering the capture phenomenon, in which the packet with the strongest power may capture the receiver even in the presence of other overlapping packets. A model of the capture effect on the theoretical throughput of the CSMA/CA protocol in the presence of path loss, shadowing, and Rayleigh fading is obtained and compared with the model without the capture effect.
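Under Rayleigh fading, received powers are exponentially distributed, so the capture condition can be explored with a small Monte Carlo sketch (illustrative Python; the capture ratio z and the simplified power model are assumptions for illustration, not the paper's analytical model):

```python
# Monte Carlo sketch of the capture effect under Rayleigh fading:
# each colliding packet's received power is exponentially distributed,
# and the strongest packet captures the receiver when its power exceeds
# the summed interference by the capture ratio z.  Illustration only.
import random

def capture_probability(n_packets, z=4.0, trials=20000):
    """Estimate P(some packet is captured) when n packets collide."""
    captured = 0
    for _ in range(trials):
        powers = [random.expovariate(1.0) for _ in range(n_packets)]
        strongest = max(powers)
        interference = sum(powers) - strongest
        if strongest > z * interference:
            captured += 1
    return captured / trials

random.seed(1)
p2 = capture_probability(2)   # two colliding packets
p5 = capture_probability(5)   # denser collision: capture is rarer
assert p5 < p2 < 1.0
```

Capture turns some collisions into successful receptions, which is why the throughput model with capture exceeds the one without it.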
-
Thilmee M. Baduge, Hirozumi Yamaguchi, Teruo Higashino
Article type: Special Issue on Selected Papers from ICMU 2005
Subject area: Regular Paper
2005 Volume 1 Pages
552-560
Published: 2005
Released on J-STAGE: November 09, 2005
JOURNAL
FREE ACCESS
In this paper, we present a decentralized protocol to construct a Degree-Bounded Minimum Diameter Tree (DBMDT) as an overlay network. The main objective of this work is to incorporate unstable and less powerful hosts, such as mobile nodes, into overlay multicast trees. The proposed scheme organizes those nodes in such a way that their low performance (low bandwidth, for instance) does not significantly affect the entire tree. We derived some basic ideas from our previously proposed protocol MODE and enhanced them to support the new environment well. The experimental results show that, compared with MODE, this feature is very effective in keeping the diameter small in the presence of mobile nodes.