Journal of Information Processing
Online ISSN : 1882-6652
ISSN-L : 1882-6652
Volume 16
Displaying 1-19 of 19 articles from this issue
  • Koji Ara, Naoto Kanehira, Daniel Olguín Olguín, Benjamin ...
    Article type: Regular Paper
    Subject area: Cognition/ Communication/ Environment/ Tool
    2008 Volume 16 Pages 1-12
    Published: 2008
    Released on J-STAGE: April 09, 2008
    JOURNAL FREE ACCESS
We introduce the concept of sensor-based applications for the daily business settings of organizations and their individual workers. Wearable sensor devices were developed and deployed in a real organization, a bank, for a month in order to study the effectiveness and potential of using sensors at the organizational level. We found that patterns of physical interaction changed dynamically from day to day, while e-mail patterns were more stable. Different patterns of behavior between people in different rooms and teams (p < 0.01), as well as correlations between communication and a worker's subjective productivity, were also identified. By analyzing the fluctuation of a network parameter, betweenness centrality, we also found that people's communication patterns differ: some tend to communicate with the same people at a regular frequency (hypothesized to be typical of throughput-oriented jobs), while others change their communication partners drastically from day to day (hypothesized to be a pattern of creative jobs). Based on these hypotheses, a reorganization in which people with similar characteristics work together was proposed and implemented.
    Download PDF (2255K)
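To make the betweenness-centrality analysis above concrete, here is a minimal sketch, assuming invented interaction logs and the standard networkx implementation; it is not the authors' actual pipeline, only an illustration of measuring day-to-day fluctuation.

```python
# Minimal sketch (not the paper's actual pipeline): compute daily
# betweenness centrality from face-to-face interaction logs and use its
# day-to-day standard deviation as a rough "communication variability" score.
import statistics
import networkx as nx

# Hypothetical interaction logs: one list of (person, person) contacts per day.
daily_contacts = [
    [("alice", "bob"), ("bob", "carol"), ("carol", "dave")],
    [("alice", "carol"), ("alice", "bob"), ("bob", "dave")],
    [("alice", "bob"), ("bob", "carol"), ("bob", "dave")],
]

people = {p for day in daily_contacts for pair in day for p in pair}
history = {p: [] for p in people}

for contacts in daily_contacts:
    g = nx.Graph()
    g.add_nodes_from(people)
    g.add_edges_from(contacts)
    centrality = nx.betweenness_centrality(g)
    for person, value in centrality.items():
        history[person].append(value)

# High variance suggests partners change day by day ("creative" pattern);
# low variance suggests regular partners ("throughput-oriented" pattern).
for person, values in sorted(history.items()):
    print(person, round(statistics.pstdev(values), 3))
```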
  • Naoshi Uchihira, Yuji Kyoya, Sun K. Kim, Katsuhiro Maeda, Masanori Oza ...
    Article type: Regular Paper
    Subject area: Social Network/Interaction
    2008 Volume 16 Pages 13-26
    Published: 2008
    Released on J-STAGE: April 09, 2008
    JOURNAL FREE ACCESS
Recently, manufacturing companies have been moving into product-based service businesses in addition to providing the products themselves. However, it is not easy for engineers in manufacturing companies to create new service businesses, because their skills, mental models, design processes, and organizations are optimized for product design rather than for service design. To design product-based services more effectively and efficiently, systematic design methodologies suited to service businesses are necessary. Based on a case analysis of more than 40 Japanese and US product-based services, this paper introduces a product-based service design methodology called DFACE-SI, which consists of five steps, from service concept generation to service business plan description. Its characteristic features include visualization tools that help stakeholders recognize the new opportunities and difficulties of the target product-based service: opportunities are identified using the customer contact expansion model and difficulties using the failure mode checklist, both derived from the service case analysis. We apply DFACE-SI to a pilot project and illustrate its effectiveness.
    Download PDF (1382K)
  • Weihua Sun, Junya Fukumoto, Hirozumi Yamaguchi, Shinji Kusumoto, Teruo ...
    Article type: Regular Paper
    Subject area: Network Protocols
    2008 Volume 16 Pages 27-37
    Published: 2008
    Released on J-STAGE: June 11, 2008
    JOURNAL FREE ACCESS
In this paper, we propose a routing protocol for mobile ad hoc networks called the Contact-based Hybrid Routing (CHR) protocol, in which each node maintains potential routes to the nodes it has encountered. Only one route request message is forwarded, along the potential route maintained by the source toward the destination. If an intermediate node forwarding the route request finds that the potential route is broken, it switches to the potential route it maintains itself to reach the next node. The goal of this design is to reduce the number of route request messages while keeping only a small amount of information at each node. Experimental results under random waypoint mobility and disaster evacuation mobility show that CHR reduces the number of messages while keeping reasonable accessibility to destinations.
    Download PDF (826K)
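As an illustration of the contact-table repair idea described above, the following toy sketch (invented topology and tables, not the CHR message format) forwards a single route request and lets an intermediate node substitute its own potential route when the next hop is unreachable.

```python
# Toy sketch of the contact-based repair idea (illustrative only):
# each node keeps a "potential route" per destination from past encounters,
# and an intermediate node substitutes its own route when the next hop is gone.

# Hypothetical contact tables: node -> {destination: route as node list}
tables = {
    "S": {"D": ["S", "A", "B", "D"]},
    "A": {"D": ["A", "C", "D"]},
    "C": {"D": ["C", "D"]},
}
alive = {"S", "A", "C", "D"}  # "B" has moved away; its link is broken

def forward_request(src, dst):
    """Forward a single route request along stored potential routes,
    repairing at intermediate nodes when the next hop is unreachable."""
    route = tables[src][dst]
    path, node = [src], src
    while node != dst:
        nxt = route[route.index(node) + 1]
        if nxt not in alive:                  # potential route is broken:
            route = tables[node][dst]         # use this node's own route
            nxt = route[route.index(node) + 1]
        path.append(nxt)
        node = nxt
    return path

print(forward_request("S", "D"))  # ['S', 'A', 'C', 'D']
```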
  • Ben Yan, Masahide Nakamura, Lydie du Bousquet, Ken-ichi Matsumoto
    Article type: Regular Paper
    Subject area: Software frameworks
    2008 Volume 16 Pages 38-49
    Published: 2008
    Released on J-STAGE: June 11, 2008
    JOURNAL FREE ACCESS
The home network system (HNS, for short) enables the flexible integration of networked home appliances, which achieves value-added integrated services. Assuring safety within such integrated services is crucial to guaranteeing a high quality of life in a smart home. In this paper, we present a novel framework for the safety of HNS integrated services. We first propose a way to define safety in the context of the integrated services, characterized as local safety, global safety, and environment safety. We then propose a method that can validate these three kinds of safety for given HNS implementations. Exploiting the concept of Design by Contract (DbC, for short), the proposed method represents every safety property as a contract between a provider and a consumer of an HNS object. The contracts are embedded within the implementations and then validated through elaborate testing. We implement the method using the Java Modeling Language (JML, for short) and JUnit with the test-case generation tool TOBIAS. Using the proposed framework, one can define and validate the safety of HNS integrated services systematically and efficiently.
    Download PDF (919K)
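The paper expresses contracts in JML and validates them with JUnit and TOBIAS; purely as a language-neutral illustration of the Design-by-Contract idea behind it, here is a small Python sketch in which a hypothetical appliance's safety limit is written as pre- and postconditions.

```python
# Illustrative Design-by-Contract sketch (the paper itself uses JML/JUnit):
# a safety property is written as pre/postconditions on an appliance method.
import functools

def contract(pre, post):
    """Wrap a method with a precondition and a postcondition check."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args):
            assert pre(self, *args), "precondition (local safety) violated"
            result = fn(self, *args)
            assert post(self), "postcondition (environment safety) violated"
            return result
        return wrapper
    return decorator

class Heater:
    MAX_TEMP = 60  # hypothetical environment-safety limit in Celsius

    def __init__(self):
        self.temp = 20

    @contract(pre=lambda self, delta: delta > 0,
              post=lambda self: self.temp <= Heater.MAX_TEMP)
    def raise_temp(self, delta):
        self.temp = min(self.temp + delta, Heater.MAX_TEMP)

h = Heater()
h.raise_temp(30)
print(h.temp)  # 50; raising past 60 is clamped, keeping the contract
```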
  • Tabito Suzuki, Mamoru Ohara, Masayuki Arai, Satoshi Fukumoto, Kazuhiko ...
    Article type: Regular Paper
    Subject area: Fault Tolerance
    2008 Volume 16 Pages 50-63
    Published: 2008
    Released on J-STAGE: June 11, 2008
    JOURNAL FREE ACCESS
Maintaining replicated data among nodes can improve data dependability. We propose a probabilistic trapezoid protocol for replicated data that combines the trapezoid protocol with the concept of a probabilistic quorum system. We analyzed the read availability, the latest version's read availability, and the average number of nodes accessed under the protocol. Our numerical evaluations demonstrate that it improves not only the read availability but also the latest version's read availability, and that when the number of nodes exceeds 100, it effectively reduces the system load. We also designed and implemented a file transfer protocol for data replication. Experimental results show that the trapezoid protocol achieves better throughput than voting or the grid protocol, and that the probabilistic trapezoid protocol maintains relatively high throughput even under node failures.
    Download PDF (621K)
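The probabilistic-quorum ingredient of the protocol can be sketched as follows; the trapezoid structure itself is not reproduced, and the quorum size k·sqrt(n) is the generic choice from probabilistic quorum systems, not necessarily the paper's parameters.

```python
# Sketch of the probabilistic-quorum ingredient (not the trapezoid layout):
# quorums of size k*sqrt(n) drawn at random intersect with high probability,
# so a read quorum very likely sees the latest write.
import math
import random

def random_quorum(n, k=2):
    size = min(n, math.ceil(k * math.sqrt(n)))
    return set(random.sample(range(n), size))

def estimate_intersection(n, trials=10000):
    hits = sum(bool(random_quorum(n) & random_quorum(n)) for _ in range(trials))
    return hits / trials

for n in (25, 100, 400):
    print(n, estimate_intersection(n))  # stays high, approaching 1 as k grows
```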
  • Toshihiko Sasama, Hiroshi Masuyama, Kazuya Murakami
    Article type: Technical Note
    Subject area: Networks
    2008 Volume 16 Pages 64-67
    Published: 2008
    Released on J-STAGE: July 09, 2008
    JOURNAL FREE ACCESS
This paper addresses the problem of broadcasting (and multicasting) with a focus on two aspects, energy-efficient networking and time-efficient computing, in a setting where all base stations are fixed and each operates as an omni-directional antenna. We developed a broadcasting algorithm based on the Stingy method and evaluated it, with respect to these two criteria, against two other algorithms based on the Greedy and Dijkstra methods.
    Download PDF (325K)
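As background for the energy comparison above, the sketch below computes the total transmission energy of a given broadcast tree under the common model in which an omni-directional station's power must reach its farthest child; the Stingy, Greedy, and Dijkstra constructions themselves are not reproduced, and the topology is invented.

```python
# Sketch of the usual energy model for omni-directional broadcast trees:
# a node's transmit power must reach its farthest child, so the tree's cost
# is the sum over parents of (max child distance) ** alpha.
import math

stations = {"a": (0, 0), "b": (1, 0), "c": (0, 2), "d": (2, 2)}
tree = {"a": ["b", "c"], "c": ["d"]}  # hypothetical broadcast tree from "a"

def dist(u, v):
    (x1, y1), (x2, y2) = stations[u], stations[v]
    return math.hypot(x1 - x2, y1 - y2)

def broadcast_energy(tree, alpha=2.0):
    return sum(max(dist(p, c) for c in children) ** alpha
               for p, children in tree.items())

print(broadcast_energy(tree))  # 4.0 + 4.0 = 8.0
```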
  • Maike Erdmann, Kotaro Nakayama, Takahiro Hara, Shojiro Nishio
    Article type: Regular Paper
    Subject area: Data Mining
    2008 Volume 16 Pages 68-79
    Published: 2008
    Released on J-STAGE: July 09, 2008
    JOURNAL FREE ACCESS
With the demand for bilingual dictionaries covering domain-specific terminology, research on automatic dictionary extraction has become popular. However, the accuracy and coverage of dictionaries created from bilingual text corpora are often insufficient for domain-specific terms. Therefore, we present an approach for extracting bilingual dictionaries from the link structure of Wikipedia, a huge-scale encyclopedia containing a vast number of links between articles in different languages. Our methods analyze not only these interlanguage links but also extract additional translations from redirect pages and link texts. In an experiment that we analyze in detail, we show that combining redirect page and link text information achieves much better results than the traditional approach of extracting bilingual terminology from parallel corpora.
    Download PDF (528K)
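A minimal sketch of the extraction idea, with invented stand-in titles: interlanguage links give one translation per article, and redirect pages contribute extra source-language synonyms that inherit the target article's translation. Link-text mining is omitted here.

```python
# Sketch of the extraction idea: interlanguage links give one translation per
# article, and redirect pages contribute extra source-language synonyms.
# All titles below are invented stand-ins for parsed Wikipedia dump data.

interlanguage = {"Keyboard (computing)": "キーボード (コンピュータ)"}
redirects = {"Computer keyboard": "Keyboard (computing)"}  # alias -> article

def build_dictionary(interlanguage, redirects):
    dictionary = {en: {ja} for en, ja in interlanguage.items()}
    for alias, target in redirects.items():
        if target in interlanguage:          # alias inherits the translation
            dictionary.setdefault(alias, set()).add(interlanguage[target])
    return dictionary

for term, translations in build_dictionary(interlanguage, redirects).items():
    print(term, "->", sorted(translations))
```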
• Satoshi Matsuura, Kazutoshi Fujikawa, Hideki Sunahara
    Article type: Regular Paper
    Subject area: Networks
    2008 Volume 16 Pages 80-92
    Published: 2008
    Released on J-STAGE: July 09, 2008
    JOURNAL FREE ACCESS
To handle ubiquitous sensor data, we have been developing a geographical-location-based P2P network called “Mill”. In a ubiquitous sensing environment, data manipulation by users and sensing devices has distinctive characteristics: for example, sensor data are generated constantly, and users keep retrieving these data for a particular region. When such operations run on existing overlay networks, relaying nodes have to process many queries issued by users. In this paper, we discuss an implementation methodology for Mill that takes these characteristics of data manipulation into account. The implementation, which is based on HTTP, also supports event-driven behavior, and we consider the feasibility of its deployment. Performance measurements show that a single Mill node can handle ten thousand users and sensing devices.
    Download PDF (978K)
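The abstract does not specify Mill's key scheme, so the following is only a generic sketch of a geographical-location-based key: quantized latitude/longitude interleaved into a Z-order code, one common way to make region queries map to key ranges on an overlay.

```python
# Generic sketch of a geographic key for a location-based overlay (Mill's
# actual scheme may differ): quantize lat/lon and interleave the bits so
# that nearby locations tend to share key prefixes (a Z-order curve).

BITS = 16

def quantize(value, lo, hi):
    return int((value - lo) / (hi - lo) * ((1 << BITS) - 1))

def zorder_key(lat, lon):
    y = quantize(lat, -90.0, 90.0)
    x = quantize(lon, -180.0, 180.0)
    key = 0
    for i in range(BITS):                 # interleave x and y bit by bit
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Nearby sensors map to numerically close keys, so a region query becomes
# a small number of key-range scans on the overlay.
print(hex(zorder_key(34.69, 135.19)))  # one point
print(hex(zorder_key(34.70, 135.20)))  # a neighboring point
```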
  • Toshihiko Yamakami
    Article type: Regular Paper
    Subject area: Data Mining
    2008 Volume 16 Pages 93-99
    Published: 2008
    Released on J-STAGE: July 09, 2008
    JOURNAL FREE ACCESS
The mobile Internet is characterized by “easy-come, easy-go” user behavior, which poses challenges for many content providers. The 24-hour clickstream provides a rich opportunity to understand user behavior, but it also raises the challenge of coping with a large amount of log data. The author proposes a stream-mining-oriented algorithm for classifying user regularity. Case studies on commercial mobile web sites show that the recall rate for predicting revisits in the following month reaches 80-90%. The restrictions of stream mining leave a small gap to the recall rates reported in the literature, but the proposed method has the advantage of needing only a small working memory to identify users with high revisit ratios.
    Download PDF (142K)
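A sketch of the stream-mining style of such a classifier, with an invented threshold: constant per-user state is kept while the clickstream is scanned once, which is the small-working-memory property the abstract emphasizes. It is not the author's exact algorithm.

```python
# Sketch of a stream-mining regularity classifier (illustrative only):
# keep O(1) state per user, count how many distinct days each user appears
# in, and predict next-month revisits for users whose regularity passes a
# threshold.

REGULAR_DAYS = 3  # hypothetical threshold on distinct active days

def classify(clickstream):
    """clickstream: iterable of (day, user_id) events in arrival order."""
    state = {}                      # user -> (last_day_seen, active_days)
    for day, user in clickstream:
        last, count = state.get(user, (None, 0))
        if day != last:             # count each day at most once
            state[user] = (day, count + 1)
    return {u for u, (_, c) in state.items() if c >= REGULAR_DAYS}

events = [(1, "u1"), (1, "u2"), (2, "u1"), (3, "u1"), (3, "u2"), (9, "u1")]
print(classify(events))  # {'u1'}: predicted to revisit next month
```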
  • Hiroaki Kikuchi, Masato Terada, Naoya Fukuno, Norihisa Doi
    Article type: Regular Paper
    Subject area: Intrusion detection
    2008 Volume 16 Pages 100-109
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Given independent multiple access logs, we develop a mathematical model to estimate the number of malicious hosts in the current Internet. In our model, the number of malicious hosts is formalized as a function of two inputs, the duration of observation and the number of sensors. Assuming that malicious hosts with statically assigned global addresses perform random port scans against independent sensors distributed uniformly over the address space, our model gives the asymptotic number of malicious source addresses in two ways: first, the cumulative number of unique source addresses as a function of the duration of observation, and second, the cumulative number of unique source addresses as a function of the number of sensors. To evaluate the proposed method, we apply the model to actual packets observed by the distributed ISDAS sensors over one year starting in September 2004, and we check the accuracy of the resulting estimate of the number of malicious hosts.
    Download PDF (590K)
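The abstract does not state the model's formula; as a sketch of the general idea, suppose each of N malicious hosts hits any given sensor with probability p per day. The expected number of unique sources after t days on s sensors is then N(1 - (1 - p)^(st)), and N can be fitted to the observed growth curve, as below (all numbers invented).

```python
# Sketch of the estimation idea (the paper's exact model is not reproduced
# here): with N malicious hosts each hitting any given sensor with daily
# probability p, the expected count of unique sources after t days on s
# sensors is N * (1 - (1 - p)**(s * t)); N is fitted to the observed curve.

def expected_unique(N, p, sensors, days):
    return N * (1 - (1 - p) ** (sensors * days))

def fit_N(observed, p, sensors):
    """Least-squares fit of N over a grid (observed[t] = count after t+1 days)."""
    return min(
        range(1, 200001, 100),
        key=lambda N: sum(
            (obs - expected_unique(N, p, sensors, t + 1)) ** 2
            for t, obs in enumerate(observed)),
    )

# Synthetic observation generated with N = 50,000 hosts, then re-estimated.
truth = [expected_unique(50000, 1e-4, 8, t + 1) for t in range(30)]
print(fit_N(truth, 1e-4, 8))  # close to 50000
```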
  • Toru Nakanishi, Nobuo Funabiki
    Article type: Regular Paper
    Subject area: Security Infrastructure
    2008 Volume 16 Pages 110-121
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Group signature schemes with membership revocation have been intensively researched. In this paper, we propose revocable group signature schemes with lower computational costs for signing and verification than an existing scheme, and without requiring updates of the signer's secret keys. The key idea is a small prime embedded in the signer's secret key: revocation is ensured efficiently through simple divisibility relations between this secret prime and a public product of the primes of valid or invalid members. To show the practicality, we implemented the schemes and measured the signing/verification times on a common PC (Core2 Duo, 2.13 GHz). The times are all less than 0.5 seconds even for relatively large groups (10,000 members), and thus our schemes are sufficiently practical.
    Download PDF (303K)
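The divisibility idea behind the revocation can be sketched as follows; in the real scheme the relation is proven in zero knowledge rather than revealed, so this is only an illustration of the arithmetic.

```python
# Sketch of the prime-divisibility idea behind the revocation (in the real
# scheme the divisibility is proven in zero knowledge, not revealed):
# each member holds a secret prime, and the manager publishes the product
# of the primes of currently valid members.
from math import prod

member_primes = {"alice": 101, "bob": 103, "carol": 107}
valid = ["alice", "carol"]                    # bob has been revoked
public_product = prod(member_primes[m] for m in valid)

def is_valid(secret_prime):
    # A member's secret prime divides the public product iff not revoked.
    return public_product % secret_prime == 0

print(is_valid(member_primes["alice"]))  # True
print(is_valid(member_primes["bob"]))    # False (revoked)
```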
  • Tetsuya Izu, Takeshi Shimoyama, Masahiko Takenaka
    Article type: Regular Paper
    Subject area: Security Infrastructure
    2008 Volume 16 Pages 122-129
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
In 2006, Bleichenbacher presented a new forgery attack against the signature scheme RSASSA-PKCS1-v1_5. The attack allows an adversary to forge a signature on almost arbitrary messages if the verifier's implementation is not proper. Since his example was limited to a public exponent of 3 and a 3,072-bit public composite, the extent of the potential threat was unknown. This paper analyzes Bleichenbacher's forgery attack and shows the applicable composite sizes for given exponents. Moreover, we extend Bleichenbacher's attack and show that when a 1,024-bit composite and the public exponent 3 are used, the extended attack succeeds in forgery with probability 2^-16.6.
    Download PDF (251K)
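The core of the e = 3 forgery can be demonstrated with plain integer arithmetic: if a lenient verifier checks only the leading padding-plus-hash bytes and ignores what follows, an attacker needs only an integer cube root, not the private key. The padding below is a shortened stand-in, not a full PKCS#1 encoding.

```python
# Sketch of the core of the e=3 forgery: if a lenient verifier checks only a
# prefix (padding + hash) and ignores trailing garbage, an attacker can take
# an integer cube root instead of knowing the private key. Illustrative only.

def icbrt(n):
    """Integer cube root: largest r with r**3 <= n (binary search)."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Hypothetical 3072-bit setting; the "hash" bytes are a stand-in.
prefix = int.from_bytes(b"\x00\x01" + b"\xff" * 8 + b"\x00" + b"H" * 20, "big")
garbage_bits = 3072 - prefix.bit_length() - 1
target = prefix << garbage_bits          # prefix followed by free low bits

sig = icbrt(target) + 1                  # "signature" = rounded cube root
forged = sig ** 3                        # what the verifier computes (e = 3)
print(forged >> garbage_bits == prefix)  # True: the prefix survives the cube
```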
  • Kazuhide Fukushima, Shinsaku Kiyomoto, Toshiaki Tanaka, Kouichi Sakura ...
    Article type: Regular Paper
    Subject area: Security Infrastructure
    2008 Volume 16 Pages 130-141
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Many group key management schemes that reduce the total communication cost and/or the computational cost imposed on client devices have been proposed. However, optimization of the key-management structure itself has not been studied. This paper proposes ways to optimize the key-management structure in a hybrid group key management scheme so as to minimize both the total communication cost and the computational cost imposed on client devices. First, we propose a probabilistic client join/leave model with which to evaluate the communication and computational costs of group key management schemes. This model idealizes general client behavior and captures peaks in the join/leave frequency, so we can analyze not only the average case but also the worst case. We then formalize, under this model, the total communication cost and the computational cost imposed on client devices, and we present both average-case and worst-case analyses. Finally, we derive the parameters that minimize the total communication cost and the computational cost imposed on clients under the model. Our results should be useful in designing secure group communication systems for large and dynamic groups.
    Download PDF (460K)
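One structure-optimization trade-off of this kind can be sketched with the classic key-tree cost model, in which a leave in a degree-d tree over n members triggers roughly d·log_d(n) rekey messages; this is a generic illustration, not the paper's cost model.

```python
# Sketch of one structure-optimization trade-off in tree-based key
# management (not the paper's exact cost model): in a key tree of degree d
# with n members, a single leave triggers roughly d * log_d(n) rekey
# messages, so the server can pick the degree d that minimizes this.
import math

def rekey_cost(n, d):
    return d * math.log(n, d)

def best_degree(n, candidates=range(2, 17)):
    return min(candidates, key=lambda d: rekey_cost(n, d))

for n in (1000, 100000):
    d = best_degree(n)
    print(n, d, round(rekey_cost(n, d), 1))  # the optimum sits near d = e
```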
  • Yi Yin, Yoshiaki Katayama, Naohisa Takahashi
    Article type: Regular Paper
    Subject area: Intrusion detection
    2008 Volume 16 Pages 142-156
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Packet filtering in firewalls is one of the most useful techniques for network security. It examines network packets and determines whether to accept or deny them based on an ordered set of filters. If conflicts exist among the filters of a firewall, for example, if one filter can never be executed because a preceding filter prevents packets from reaching it, the firewall may behave differently from the administrator's intention. For this reason, it is necessary to detect conflicts in a set of filters. Previous research on detecting conflicts in filters paid considerable attention to conflicts caused by one filter affecting another, but did not consider conflicts caused by a combination of multiple filters. We developed a method of detecting conflicts in which a combination of filters affects another individual filter, based on their spatial relationships. We also developed two methods, based on top-down and bottom-up algorithms, for finding all requisite filter combinations within a given combination that intrinsically causes errors in another filter. We implemented prototype systems to evaluate the effectiveness of these methods. The experimental results reveal that the conflict detection method and the bottom-up method for finding all requisite filter combinations are applicable to practical firewall policies.
    Download PDF (1238K)
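The kind of combination-caused conflict the paper targets can be illustrated as follows: each filter is a box over address ranges, and a filter is shadowed when the earlier filters jointly cover it even though no single one does. The brute-force check below is only illustrative; the paper's methods are exact and based on spatial relationships.

```python
# Sketch of combination-caused shadowing (illustrative; the paper's methods
# are exact, not sampling-based): each filter is a box over (src, dst)
# address ranges, and a filter conflicts when earlier filters jointly cover
# every packet it would match.

# (src_lo, src_hi, dst_lo, dst_hi, action); first match wins.
filters = [
    (0, 5, 0, 9, "deny"),
    (6, 9, 0, 9, "deny"),
    (0, 9, 0, 9, "accept"),   # fully covered by the two filters above
]

def first_match(src, dst):
    for i, (slo, shi, dlo, dhi, act) in enumerate(filters):
        if slo <= src <= shi and dlo <= dst <= dhi:
            return i
    return None

def shadowed(index):
    slo, shi, dlo, dhi, _ = filters[index]
    return all(first_match(s, d) != index
               for s in range(slo, shi + 1) for d in range(dlo, dhi + 1))

print([i for i in range(len(filters)) if shadowed(i)])  # [2]
```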
  • Takeshi Okuda, Suguru Yamaguchi
    Article type: Regular Paper
    Subject area: Access control and authentication
    2008 Volume 16 Pages 157-164
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Ideally, to secure a network, all software on its computers should be kept up to date. In practice, however, especially in a server farm, administrators must cope with unresolved vulnerabilities caused by software dependencies, so it is necessary to understand the vulnerabilities inside the network. Existing methods require IP reachability and dedicated software installed on the managed computers; moreover, they cannot detect vulnerabilities in underlying libraries, and they control communication between computers uniformly, based only on a vulnerability score. We propose a lightweight vulnerability management system (LWVMS) based on a self-enumeration approach. LWVMS allows administrators to configure their own network security policy flexibly. It complies with existing standards, such as IEEE 802.1X and EAP-TLS, and can operate in existing corporate networks. Since LWVMS does not require IP reachability between the managed servers and the management servers, it reduces the risk of invasion and infection during the quarantine phase. In addition, LWVMS controls connectivity based on both the vulnerabilities of the respective components and the network security policy. Since the system can be implemented with slight modifications to open-source software, developers can easily adapt it to their networks.
    Download PDF (295K)
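The connection-time decision such a system makes can be sketched as follows, with invented component names, scores, and policy thresholds; the actual LWVMS integrates this with IEEE 802.1X/EAP-TLS rather than a standalone function.

```python
# Sketch of the policy decision an LWVMS-style system makes at connection
# time (everything below is an invented stand-in): combine per-component
# vulnerability scores with a site policy to pick the network segment a
# host is admitted to.

policy = {"quarantine_at": 7.0, "restrict_at": 4.0}  # hypothetical cut-offs

def admit(components):
    """components: mapping of component name -> highest vulnerability score."""
    worst = max(components.values(), default=0.0)
    if worst >= policy["quarantine_at"]:
        return "quarantine VLAN"
    if worst >= policy["restrict_at"]:
        return "restricted VLAN"
    return "production VLAN"

# Underlying libraries are enumerated too, not just server packages.
print(admit({"httpd": 3.1, "libssl": 9.8}))  # quarantine VLAN
```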
  • Hiroaki Kikuchi, Naoya Fukuno, Tomohiro Kobori, Masato Terada, Tangtis ...
    Article type: Regular Paper
    Subject area: Intrusion detection
    2008 Volume 16 Pages 165-175
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Computer worms perform random port scans over the Internet to find vulnerable hosts to intrude on. Malicious software varies in its port-scan strategy: for example, some hosts scan a particular target intensively, while others scan uniformly over IP address blocks. In this paper, we propose a new automated scheme for classifying worms from distributed observations. The scheme captures behavioral statistics with a simple decision tree whose nodes classify source addresses using optimal threshold values, chosen automatically to minimize the entropy of the resulting classification. Once the tree has been constructed, classification is very quick and accurate. We analyze a set of source addresses observed by the 30 distributed ISDAS sensors over one year in order to clarify the primary statistics of worms, and, based on these statistical characteristics, we present the proposed classification and evaluate its performance.
    Download PDF (557K)
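The automated threshold choice for a single tree node can be sketched as a search for the split that minimizes the weighted entropy of the two resulting classes; the statistic and labels below are invented.

```python
# Sketch of the automated threshold choice for one decision-tree node:
# pick the split on a behavioral statistic that minimizes the weighted
# entropy of the two resulting classes (labels below are invented).
import math

def entropy(labels):
    total = len(labels)
    probs = [labels.count(c) / total for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def best_threshold(samples):
    """samples: list of (statistic_value, label)."""
    values = sorted({v for v, _ in samples})
    def weighted(t):
        left = [c for v, c in samples if v <= t]
        right = [c for v, c in samples if v > t]
        n = len(samples)
        return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return min(values[:-1], key=weighted)

# Hypothetical (scans-per-target, worm family) observations:
data = [(1, "sweep"), (2, "sweep"), (3, "sweep"), (40, "fixed"), (55, "fixed")]
print(best_threshold(data))  # 3: splits sweeping from fixed-target scanners
```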
  • Atsuko Miyaji, Kenji Mizosoe
    Article type: Regular Paper
    Subject area: Security Infrastructure
    2008 Volume 16 Pages 176-189
    Published: 2008
    Released on J-STAGE: September 10, 2008
    JOURNAL FREE ACCESS
Elliptic curve cryptosystems can be constructed over a smaller definition field than the ElGamal or RSA cryptosystems, which is why they have begun to attract attention. This paper explores an efficient fixed-point scalar multiplication algorithm over a definition field Fp (p > 3), adjusting the coordinate systems used in the algorithm proposed by Lim and Lee. Our adjusted algorithm can outperform the previous algorithm for some sizes of the precomputed tables.
    Download PDF (343K)
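For illustration, here is a Lim-Lee-style fixed-base comb with precomputed tables, using the multiplicative group modulo a prime as a stand-in for elliptic-curve point arithmetic; the paper's actual contribution, the coordinate adjustment, is not reproduced.

```python
# Sketch of the Lim-Lee fixed-base comb idea, using the multiplicative group
# mod P as a stand-in for elliptic-curve points: squaring plays the role of
# point doubling and multiplication the role of point addition.

P = 2**127 - 1          # toy prime modulus; the "curve" is (Z/P)*, base G
G = 3
W, BITS = 4, 128        # comb width and scalar size
D = (BITS + W - 1) // W # bits per comb column

# Precompute table[j] = G ** (sum of 2**(t*D) over the set bits t of j).
table = [1] * (1 << W)
for j in range(1, 1 << W):
    e = sum(1 << (t * D) for t in range(W) if (j >> t) & 1)
    table[j] = pow(G, e, P)

def comb_mul(k):
    acc = 1
    for i in range(D - 1, -1, -1):
        acc = acc * acc % P                    # one "doubling" per column
        col = 0
        for t in range(W):                     # gather bit i of each chunk
            col |= ((k >> (t * D + i)) & 1) << t
        acc = acc * table[col] % P             # one table "addition"
    return acc

k = 0x1234567890ABCDEF1234567890ABCDEF
print(comb_mul(k) == pow(G, k, P))  # True
```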
  • Zhenbao Liu, Jun Mitani, Yukio Fukui, Seiichi Nishihara
    Article type: Regular Paper
    Subject area: Computer Graphics
    2008 Volume 16 Pages 190-200
    Published: 2008
    Released on J-STAGE: December 10, 2008
    JOURNAL FREE ACCESS
The rapid spread of 3D shape applications has driven research on content-based 3D shape retrieval. In this paper, we propose a new retrieval method based on Spherical Healpix, a framework for efficient discretization and fast analysis or synthesis of functions defined on the sphere. We analyze its construction process and define a new Spherical Healpix Extent Function, which we analyze through an inverse-construction process from the sphere to the Euclidean plane. We transform the result of the inverse construction to the frequency domain using a 2D Fourier transform instead of spherical harmonics, the well-known tool of spherical analysis, and obtain the low-frequency component with a Butterworth low-pass filter. The power spectrum of the low-frequency component serves as the feature vector describing a 3D shape. The descriptor is extracted in a canonical coordinate frame; that is, each 3D model is first normalized. We examined this method on the Konstanz Shape Benchmark and the SHREC data set, confirmed its efficiency, and compared its retrieval performance with that of other methods on the same data.
    Download PDF (436K)
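The signal-processing stage of the descriptor can be sketched with numpy as follows, taking the sphere-to-plane mapping as given (a random array stands in for a real extent map); the Butterworth cutoff and order are illustrative values.

```python
# Sketch of the descriptor pipeline's signal-processing stage (the Healpix
# sphere-to-plane mapping itself is omitted): 2D FFT of the planar extent
# map, a Butterworth low-pass, then the power spectrum as the feature vector.
import numpy as np

def feature_vector(extent_map, cutoff=0.2, order=2):
    """extent_map: 2D array sampled from the sphere-to-plane mapping."""
    spectrum = np.fft.fftshift(np.fft.fft2(extent_map))
    h, w = extent_map.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    r = np.hypot(fy, fx)
    butterworth = 1.0 / (1.0 + (r / cutoff) ** (2 * order))  # low-pass gain
    power = np.abs(spectrum * butterworth) ** 2
    return power.ravel() / power.sum()        # normalized power spectrum

rng = np.random.default_rng(0)
fake_extent = rng.random((16, 16))            # stand-in for a real model
print(feature_vector(fake_extent).shape)      # (256,)
```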
  • Masayoshi Shimamura, Katsuyoshi Iida, Hiroyuki Koga, Youki Kadobayashi ...
    Article type: Regular Paper
    Subject area: Network Quality and Control
    2008 Volume 16 Pages 201-218
    Published: 2008
    Released on J-STAGE: December 10, 2008
    JOURNAL FREE ACCESS
We propose a hose bandwidth allocation method that achieves a minimum throughput assurance (MTA) service for the hose model. Although the hose model, proposed as a novel service model for provider-provisioned virtual private networks (PPVPNs), has proven effective in terms of network resource efficiency and configuration complexity, no mechanism for assuring quality of service (QoS) in it has been considered. The basic idea of our method is to gather available-bandwidth information from inside the network and use it to divide the available bandwidth among hoses on each bottleneck link. We evaluate our method through computer simulations, whose results show that it achieves an MTA service in the hose model.
    Download PDF (911K)
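The per-link division step can be sketched as a demand-capped max-min fair share, as below; how the available-bandwidth measurements are gathered inside the network, which is the paper's mechanism, is not reproduced, and all rates are invented.

```python
# Sketch of dividing a bottleneck link's available bandwidth among hoses
# (illustrative; the paper's method gathers the measurements inside the
# network): a max-min fair share capped by each hose's demand.

def hose_shares(available, demands):
    """available: link bandwidth in Mbit/s; demands: hose -> requested rate."""
    shares, remaining = {}, dict(demands)
    budget = available
    while remaining:
        fair = budget / len(remaining)
        # Hoses demanding less than the fair share are satisfied exactly;
        # their leftover bandwidth is redistributed to the rest.
        done = {h: d for h, d in remaining.items() if d <= fair}
        if not done:
            shares.update({h: fair for h in remaining})
            break
        shares.update(done)
        budget -= sum(done.values())
        for h in done:
            del remaining[h]
    return shares

print(hose_shares(100, {"hoseA": 10, "hoseB": 70, "hoseC": 80}))
# {'hoseA': 10, 'hoseB': 45.0, 'hoseC': 45.0}
```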