IPSJ Digital Courier
Online ISSN : 1349-7456
ISSN-L : 1349-7456
Volume 3
Displaying 1-50 of 65 articles from this issue
  • Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hir ...
    Article type: Musical Sound Analysis
    Subject area: Regular Paper
    2007 Volume 3 Pages 1-13
    Published: January 15, 2007
    Released on J-STAGE: January 24, 2007
    JOURNAL FREE ACCESS
    This paper presents a new technique for recognizing musical instruments in polyphonic music. Since conventional musical instrument recognition in polyphonic music is performed notewise, i.e., for each note, it requires accurate estimation of the onset time and fundamental frequency (F0) of each note. These estimations are generally not easy in polyphonic music, however, and estimation errors severely degrade recognition performance. Our technique avoids these estimations by calculating the temporal trajectory of instrument existence probabilities for every possible F0. The instrument existence probability is defined as the product of a nonspecific instrument existence probability calculated using PreFEst and a conditional instrument existence probability calculated using hidden Markov models. The instrument existence probability is visualized as a spectrogram-like graphical representation called the instrogram and is applied to MPEG-7 annotation and instrumentation-similarity-based music information retrieval. Experimental results on both synthesized music and real performance recordings showed that instrograms achieved MPEG-7 annotation (instrument identification) with average precision rates of 87.5% for synthesized music and 69.4% for real performances, and that the instrumentation similarity measure reflected the actual instrumentation better than an MFCC-based measure.
    Download PDF (878K)
  • Toshihiko Yamakami
    Article type: Networked Services in Contextual and Real-world supports
    Subject area: Regular Paper
    2007 Volume 3 Pages 14-20
    Published: January 15, 2007
    Released on J-STAGE: January 24, 2007
    JOURNAL FREE ACCESS
    As a sub-day-scale behavior analysis, the length of sessions in multiple mobile Internet services was examined. Using mobile clickstreams with user identifiers, two analyses were performed: a preparatory study for timeout values in session identification in 2000 and a long-term observation of session lengths and clicks per session in 2001. The first study showed that 10 minutes is a suitable timeout value for the observed mobile web. The second produced inter-service comparisons and showed the effects of different mobile-Internet-specific factors. The limitations and challenges for mobile-clickstream-based session identification are also discussed.
    Download PDF (265K)
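    The session-identification step this abstract describes can be sketched as follows; a minimal illustration assuming timestamped clicks for one user, with the 10-minute timeout value the study arrived at (the data and function name are hypothetical):

    ```python
    from datetime import datetime, timedelta

    TIMEOUT = timedelta(minutes=10)  # timeout value suggested by the study

    def split_sessions(clicks):
        """Group a user's timestamped clicks into sessions: a gap longer
        than TIMEOUT between consecutive clicks starts a new session."""
        sessions = []
        for t in sorted(clicks):
            if sessions and t - sessions[-1][-1] <= TIMEOUT:
                sessions[-1].append(t)
            else:
                sessions.append([t])
        return sessions

    clicks = [datetime(2000, 1, 1, 12, m) for m in (0, 3, 5)] + \
             [datetime(2000, 1, 1, 13, 0), datetime(2000, 1, 1, 13, 4)]
    print([len(s) for s in split_sessions(clicks)])  # → [3, 2]
    ```

    Shrinking TIMEOUT splits long browsing gaps into more, shorter sessions, which is why the preparatory study of timeout values matters for the later session-length statistics.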
  • Takuo Nakashima
    Article type: Network Quality and Control
    Subject area: Regular Paper
    2007 Volume 3 Pages 21-29
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    Providing quality of service (QoS) in a business environment requires accurate estimation of Internet traffic, especially HTTP traffic. HTTP traffic, mainly consisting of file transmission traffic, depends on file sizes with a heavy-tailed property. Analyzing and comparing the performance of specific Web servers is important in QoS provisioning for user applications. In this paper, we present an experimental analysis of Web server performance using our active measurement. We capture activity data from 1984 diverse Web servers by sending measurement packets 20 times in order to evaluate spatial properties, and we select 14 Web servers and send packets 2,000 times at 5-second intervals to evaluate temporal properties. The main contribution of our study is to provide methods for evaluating the temporal and spatial properties of any Web server by measuring from a remote observation host, and to illustrate the current temporal and spatial activity in a series of figures. Crovella's well-known work describes only the self-similar properties of the total transmission time generated by a heavy-tailed file size distribution. We divided the total transmission time into network-dependent and server-system-dependent elements, whose heavy-tailed properties are captured. We found that temporal properties consist of two factors: stability and activity in the Poisson process. The robustness of heavy-tailedness in the constituent elements of total transmission time was evident from the analysis of spatial properties. Finally, we observed the upper boundary and classified groups in a mean-variance plot of the primitive elements of the Web server.
    Download PDF (590K)
  • Jing Cai, Tsutomu Terada, Takahiro Hara, Shojiro Nishio
    Article type: Database Systems
    Subject area: Regular Paper
    2007 Volume 3 Pages 30-41
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    Recent advances in computer and wireless communication technologies have increased interest in combinations of various techniques such as pull-based data dissemination, data broadcasting, and data caching, as well as investigations into data management in hybrid data delivery systems. This paper proposes cooperative data management based on the Hybrid Wireless Broadcast (HWB) model to provide more efficient data delivery to mobile clients. Our cooperative strategy integrates broadcast scheduling with cache management of the base station, taking heterogeneous and homogeneous access of clients into consideration, by making effective use of HWB data dissemination, i.e., push-based broadcast, pull-based broadcast, and pull-based point-to-point wireless communication. Our simulation results confirmed that the strategy we propose can improve the performance of the system even further.
    Download PDF (770K)
  • Arjan Durresi, Leonard Barolli, Raj Jain, Makoto Takizawa
    Article type: Network Quality and Control
    Subject area: Regular Paper
    2007 Volume 3 Pages 42-54
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    Congestion remains one of the main obstacles to quality of service (QoS) on the Internet. We think that a good solution to Internet congestion should optimally combine congestion signaling from the network with the source reaction, with the following main goals: minimum losses and delays, maximum network utilization, fairness among flows, and, last but not least, scalability; the solution should not significantly increase the complexity of router operations. In this paper, we present a new traffic management scheme based on an enhanced Explicit Congestion Notification (ECN) mechanism. Our Multilevel ECN (MECN) conveys more accurate feedback about the network congestion status than the current ECN. We have designed a TCP source reaction that takes advantage of the extra congestion information. As a result, MECN responds better to congestion by allowing the system to reach the stability point faster, which results in better network performance. We use control-theoretic tools, verified by ns-2 simulations, to show that MECN can outperform the de facto standard RED/ECN by up to twenty times in terms of throughput.
    Download PDF (1906K)
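    The abstract does not give MECN's marking rule, but the idea of a multilevel generalization of RED/ECN marking can be sketched as follows; the thresholds, level count, and linear mapping are all illustrative assumptions, not the paper's parameters:

    ```python
    def mecn_mark(avg_queue, min_th=20.0, max_th=80.0, levels=4):
        """Map a router's average queue length to a congestion level
        (0 = no congestion, levels-1 = severe), generalizing the single
        ECN mark to multiple levels of feedback. All parameter values
        here are invented for illustration."""
        if avg_queue <= min_th:
            return 0
        if avg_queue >= max_th:
            return levels - 1
        # linearly divide the (min_th, max_th) range over the middle levels
        frac = (avg_queue - min_th) / (max_th - min_th)
        return 1 + int(frac * (levels - 2))

    print([mecn_mark(q) for q in (10, 30, 60, 90)])  # → [0, 1, 2, 3]
    ```

    A source reacting to such marks could scale its window reduction with the reported level, which is the kind of graded reaction the abstract attributes to the modified TCP source.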
  • Shintaro Ueda, Hiroshi Shigeno, Ken-ichi Okada
    Article type: Network Security
    Subject area: Regular Paper
    2007 Volume 3 Pages 55-63
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    The new video coding standard H.264/AVC offers major improvements in coding efficiency and flexible mapping to transport layers. It consists of a video coding layer (VCL) and a network abstraction layer (NAL). The VCL carries out the coding, and the NAL encapsulates data from the VCL in a manner that readily enables transmission over a broad variety of transport layers. Since no security features are offered, an authentication scheme that authenticates the sender and verifies data integrity is needed. In this paper we propose SANAL, a stream authentication scheme for H.264/AVC. Unlike existing schemes that carry out authentication procedures at the packet level, authentication procedures in SANAL are carried out at the NAL level. This makes it possible to assign priorities to H.264/AVC-specific data without interfering with the H.264/AVC features. We implemented a SANAL prototype and carried out comparative evaluations of playout rate, communication overhead, and processing load. The evaluation results show that the playout rate is improved by 40% compared to existing schemes.
    Download PDF (430K)
  • Lei Jing, Zixue Cheng, Mizuo Kansen, Tongjun Huang, Shengguo Sun
    Article type: Learning Support
    Subject area: Regular Paper
    2007 Volume 3 Pages 64-74
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    Embedding educational functions into devices in the everyday environment is an important task for advocates of ubiquitous learning. In this paper, we discuss how to add a reminder service to a schoolbag, an educational function that helps pupils remember their belongings. Remembering things is a difficult task, and reminder services have been an important subject of computer applications. However, most reminding tools are used for business, not education. Most such services use PDAs as terminals and require users to create the reminder list themselves, making them too complex for some pupils to use. These systems also seldom pay attention to helping users learn how to avoid forgetting. In this research, a ubiquitous learning support system that makes use of schoolbags is presented to assist pupils in managing their personal items. With RFID and infrared sensors, a microcontroller embedded in a schoolbag can monitor what has been put in or taken out of the schoolbag and automatically maintain a list of the schoolbag's items. Such a bag also enables teachers to make up a schedule that specifies the required items for given days. The microcontroller then compares the schedule with the item list in the schoolbag and provides a reminder service for the pupil. In addition to the reminder service, which is based on principles of behavior modification, the paper also proposes a series of methods to help pupils form good personal management habits and reduce their dependence on outside machines.
    Download PDF (1239K)
  • Naoki Nakamura, Debasish Chakraborty, Apichet Chayabejara, Gen Kitagat ...
    Article type: Wireless/Mobile Networks
    Subject area: Regular Paper
    2007 Volume 3 Pages 75-85
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    The MAC protocol of IEEE 802.11 reduces the hidden terminal problem using the RTS/CTS handshaking mechanism. However, it lacks the ability to release or reallocate a channel that was reserved but not used. In Mobile Ad hoc NETworks (MANETs), this often keeps the channel unnecessarily inaccessible, resulting in inefficient channel utilization. In this paper, we propose schemes for releasing and reallocating unused channels in MANETs deployed over IEEE 802.11. We introduce a Network Allocation Vector (NAV) updating method that can improve the performance of both the existing RTS Validation scheme and our proposed Extra Frame Transmission scheme, and we then combine these schemes for a further improvement. Through simulations in different scenarios with varying network load and node density, we show that combining these schemes leads to a throughput improvement of up to 40%. In addition, our proposed mechanisms have no compatibility problems.
    Download PDF (749K)
  • Hiroshi Sakakibara, Masato Saito, Hideyuki Tokuda
    Article type: Wireless/Mobile Networks
    Subject area: Regular Paper
    2007 Volume 3 Pages 86-97
    Published: February 15, 2007
    Released on J-STAGE: February 07, 2007
    JOURNAL FREE ACCESS
    This paper presents a mechanism that aggregates the bandwidths of multiple network interfaces on computers in mobile environments. Research and development on wireless technologies has recently been extremely active; as a result, various kinds of wireless media have emerged and been installed on computers. However, the demand for broader bandwidths has not yet been met. We designed and developed a socket-level bandwidth aggregation mechanism (SBAM) that offers broader bandwidths using the multiple wireless interfaces on a computer. The most distinctive feature of SBAM is that it is implemented as a feature of the operating system, which brings two benefits. First, SBAM can be deployed gradually across computers on the Internet: if a bandwidth aggregation mechanism (BAM) were built into a network-stack protocol, it would have to be implemented on all computers on the Internet, but SBAM can detect whether a peer host also supports it and decide whether to use BAM accordingly. Second, it does not require modifications to most existing software. For example, if the mechanism were realized as a feature of a transport protocol such as TCP, all applications using TCP would have to be rewritten for the new transport protocol; SBAM does not require this, since the socket hides the existence of the mechanism. We evaluated SBAM in an actual wireless network environment and achieved an increase in throughput by a factor of 1.6. SBAM offers broader bandwidths to applications by utilizing the various wireless devices on a computer while avoiding modifications to existing environments.
    Download PDF (625K)
  • Hiroyuki Sato, Hernán Aguirre, Kiyoshi Tanaka
    Article type: Regular Paper
    Subject area: Regular Paper
    2007 Volume 3 Pages 98-113
    Published: February 15, 2007
    Released on J-STAGE: February 28, 2007
    JOURNAL FREE ACCESS
    This paper proposes a method to enhance single-population multiobjective evolutionary algorithms (MOEAs) by searching based on local dominance and local recombination. In this method, all fitness vectors of individuals are first transformed to polar coordinate vectors in objective function space. Then, the population is iteratively divided into several subpopulations by using declination angles. As a result, each subpopulation covers a sub-region of the multiobjective space, with its individuals located around the same search direction. Next, local dominance is calculated separately for each subpopulation after its principal search direction has been aligned by rotation. Selection, recombination, and mutation are applied to individuals within each subpopulation. The proposed method can improve the performance of MOEAs that use dominance-based selection, and can also reduce the overall computational cost of calculating dominance among solutions. In this paper we verify the effectiveness of the proposed method in obtaining Pareto-optimal solutions with two representative MOEAs, i.e., NSGA-II and SPEA2, on Multiobjective 0/1 Knapsack Problems.
    Download PDF (1001K)
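    The first step of the method this abstract describes — transforming fitness vectors to polar coordinates and dividing the population by declination angle — can be sketched for two objectives; the even split below is a simplification, and the iterative division and rotation/alignment steps are omitted:

    ```python
    import math

    def divide_by_angle(fitness_vectors, n_subpops):
        """Transform 2-objective fitness vectors to polar coordinates and
        partition the population into subpopulations of neighboring
        declination angles, so each subpopulation covers one search
        direction in objective space. Returns lists of individual indices."""
        angles = [(math.atan2(f2, f1), i)
                  for i, (f1, f2) in enumerate(fitness_vectors)]
        angles.sort()  # order individuals by their angle in objective space
        size = math.ceil(len(angles) / n_subpops)
        return [[i for _, i in angles[k:k + size]]
                for k in range(0, len(angles), size)]

    pop = [(1.0, 0.1), (0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.1, 1.0), (0.6, 0.4)]
    print(divide_by_angle(pop, 3))
    ```

    Each returned group gathers individuals pointing in roughly the same direction, which is the precondition for computing local dominance separately per subpopulation.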
  • Hiromoto Susawa, Toshihiro Tsuji, Kazumasa Hiramatsu, Takashi Jimbo, T ...
    Article type: Regular Paper
    Subject area: Regular Paper
    2007 Volume 3 Pages 114-124
    Published: February 15, 2007
    Released on J-STAGE: February 28, 2007
    JOURNAL FREE ACCESS
    Compositional variation in initial growth was observed in Liquid Phase Epitaxy (LPE) experiments. The cause is thought to be a flow in the melt induced when the entire melt slides from a region without a substrate onto the substrate before growth. The phenomenon was previously simulated for InGaP growth with a one-dimensional model in which a convection term was added to a diffusion-limited model, with the flow velocity in the melt approximated by Stokes' first problem; it was shown that the In composition of the grown solid is larger with the flow than without it. In this paper, InGaP LPE growth is considered using a two-dimensional model of the flow. For the solute transport, a two-dimensional model is also adopted, except at the growth interface. A result similar to that of the one-dimensional model was obtained. In the two-dimensional model, the In composition is greater than in the case without flow for the greater part of the growth time, but it decreases when the flow transports dilute solution. This decrease does not appear in the one-dimensional model. The phenomenon can be explained by considering the influence of the flow on the mole fraction near the boundary layer.
    Download PDF (852K)
  • Yoshinao Takemae, Takehiko Ohno, Ikuo Yoda, Shinji Ozawa
    Article type: Group Interaction Support and Groupware
    Subject area: Regular Paper
    2007 Volume 3 Pages 125-133
    Published: March 15, 2007
    Released on J-STAGE: March 07, 2007
    JOURNAL FREE ACCESS
    This paper presents a method for automatically estimating human interruptibility in the home environment. To make online remote communication smoother, determining whether it is appropriate to interrupt the remote partner is critical. As a first step toward this goal, several audio-visual features, extracted from data streams captured by a camera and a microphone, are correlated with humans' assessments of their own interruptibility. Based on these features, the level of interruptibility is estimated using a trained Support Vector Regression (SVR) model. Finally, we discuss the potential of our method based on the results of several experiments.
    Download PDF (347K)
  • Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hir ...
    Article type: Music Interfaces
    Subject area: Regular Paper
    2007 Volume 3 Pages 134-144
    Published: March 15, 2007
    Released on J-STAGE: March 07, 2007
    JOURNAL FREE ACCESS
    This paper presents a highly functional audio player, called Drumix, that allows a listener to control the volume, timbre, and rhythmic patterns (drum patterns) of bass and snare drums within existing audio recordings in real time. A demand for active music listening has recently emerged. If the drum parts of popular songs could be manipulated, listeners could have new musical experiences by freely changing their impressions of the pieces (e.g., making the drum performance more energetic) instead of passively listening to them. To achieve this, Drumix provides three functions for rearranging drum parts, i.e., a volume control function that enables users to cut or boost the volume of each drum with a drum-specific volume slider, a timbre change function that allows them to replace the original timbre of each drum with another selected from a drop-down list, and a drum-pattern editing function that enables them to edit repetitive patterns of drum onsets on a graphical representation of their scores. Special musical skills are not required to use these functions. Subjective experiments revealed that Drumix could add a new dimension to the way listeners experience music.
    Download PDF (421K)
  • Takashi Takenouchi, Masaru Ushijima, Shinto Eguchi
    Article type: Original Papers
    Subject area: Original Paper
    2007 Volume 3 Pages 145-152
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    In this paper, we propose GroupAdaBoost, a variant of AdaBoost for statistical pattern recognition. The objective of the proposed algorithm is to solve the “p ≫ n” problem arising in bioinformatics. In a microarray experiment, gene expressions are observed to extract any specific pattern of gene expressions related to a disease status; typically, p is the number of investigated genes and n is the number of individuals. Ordinary methods for predicting the genetic causes of diseases are apt to over-learn from a particular training dataset because of the “p ≫ n” problem. We observed that GroupAdaBoost gave robust performance for cases with an excess number p of genes. On several publicly available real datasets, we compared the results of the proposed method with those of other methods, and we also conducted a small-scale simulation study to confirm the validity of the proposed method. Additionally, the proposed method worked effectively for the identification of important genes.
    Download PDF (188K)
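    GroupAdaBoost's grouping strategy is specific to the paper, but the standard AdaBoost core it builds on can be sketched with one-feature decision stumps; the toy data and the exhaustive stump search below are illustrative assumptions, not the paper's setup:

    ```python
    import math

    def stump_predict(x, feat, thresh, sign):
        # a depth-1 weak learner: threshold a single feature
        return sign if x[feat] > thresh else -sign

    def adaboost(X, y, rounds=5):
        """Plain AdaBoost with one-feature decision stumps. GroupAdaBoost
        modifies how weak learners are chosen (see the paper); the
        weighted-error selection and reweighting below are the shared core."""
        n = len(X)
        w = [1.0 / n] * n
        model = []
        for _ in range(rounds):
            best = None
            for feat in range(len(X[0])):
                for thresh in sorted({x[feat] for x in X}):
                    for sign in (1, -1):
                        err = sum(wi for wi, x, yi in zip(w, X, y)
                                  if stump_predict(x, feat, thresh, sign) != yi)
                        if best is None or err < best[0]:
                            best = (err, feat, thresh, sign)
            err, feat, thresh, sign = best
            err = max(err, 1e-10)  # avoid log(0) on a perfect stump
            alpha = 0.5 * math.log((1 - err) / err)
            model.append((alpha, feat, thresh, sign))
            # up-weight misclassified examples, then renormalize
            w = [wi * math.exp(-alpha * yi * stump_predict(x, feat, thresh, sign))
                 for wi, x, yi in zip(w, X, y)]
            total = sum(w)
            w = [wi / total for wi in w]
        return model

    def predict(model, x):
        s = sum(a * stump_predict(x, f, t, sg) for a, f, t, sg in model)
        return 1 if s > 0 else -1

    X = [[0.1, 1.0], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]]
    y = [-1, -1, 1, 1]
    model = adaboost(X, y, rounds=3)
    print([predict(model, x) for x in X])  # → [-1, -1, 1, 1]
    ```

    In the p ≫ n regime the stump search ranges over far more features than examples, which is exactly where unconstrained boosting tends to over-learn.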
  • Shuhei Kimura, Katsuki Sonoda, Soichiro Yamane, Koki Matsumura, Mariko ...
    Article type: Original Papers
    Subject area: Original Paper
    2007 Volume 3 Pages 153-163
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    A model based on a set of differential equations can effectively capture various dynamics, so this type of model is ideal for describing genetic networks, and several genetic network inference algorithms based on such models have been proposed. Most of these inference methods describe genetic networks with differential equations of a fixed form. In this study, we propose a new method for the inference of genetic networks. Instead of models of a fixed form, the proposed method uses neural network models to describe genetic networks. In order to interpret the obtained neural network models, we also propose a method based on sensitivity analysis. The effectiveness of the proposed methods is verified through a series of artificial genetic network inference problems.
    Download PDF (342K)
  • Junya Seo, Shigeto Seno, Yoichi Takenaka, Hideo Matsuda
    Article type: Original Papers
    Subject area: Original Paper
    2007 Volume 3 Pages 164-173
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    In bioinformatics, managing the tools used to analyze biological data has become important. These tools are provided by various institutions, and their number is rapidly increasing; recently, many institutions have begun offering tools and database access through Web service technologies. Workflow technology is one way to manage these tools and is becoming available in bioinformatics. To support workflow composition, several research groups develop and provide workflow composing tools, and the concept of “workflow reuse” has consequently arisen to further ease composition. Nevertheless, it is still difficult to find reusable workflows in current workflow repositories. In this paper, we propose a method to extract reusable workflows from a repository using currently available information. We were able to extract functionally similar workflows as reusable ones. By extracting reusable workflows efficiently, researchers can compose their own workflows more easily.
    Download PDF (372K)
  • Tatsuya Akutsu, Hiroki Arimura, Shinichi Shimozono
    Article type: Original Papers
    Subject area: Original Paper
    2007 Volume 3 Pages 174-182
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    This paper studies the local multiple alignment problem: given protein or DNA sequences, locate a region (i.e., a substring) of fixed length in each sequence so that the score determined from the set of regions is optimized. We consider the following scoring schemes: the relative entropy score (i.e., average information content), the sum-of-pairs score, and a relative entropy-like score introduced by Li et al. We prove that multiple local alignment is NP-hard under each of these scoring schemes. In particular, we prove that multiple local alignment is APX-hard under relative entropy scoring, which implies that unless P = NP there is no polynomial-time algorithm whose worst-case approximation error can be made arbitrarily small (precisely, no polynomial-time approximation scheme). Several related theoretical results are also provided.
    Download PDF (265K)
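    The relative entropy score (average information content) that the hardness results concern can be computed directly for a candidate set of regions; the uniform background distribution below is an assumption, and the paper's exact normalization may differ:

    ```python
    import math

    def relative_entropy_score(regions, background=None):
        """Relative entropy of a set of equal-length regions: the sum over
        columns of f(c) * log2(f(c) / b(c)), where f is the column
        frequency of symbol c and b its background frequency (uniform
        over the observed alphabet unless given)."""
        width = len(regions[0])
        alphabet = sorted(set("".join(regions)))
        if background is None:
            background = {c: 1.0 / len(alphabet) for c in alphabet}
        score = 0.0
        for col in range(width):
            column = [r[col] for r in regions]
            for c in alphabet:
                f = column.count(c) / len(column)
                if f > 0:
                    score += f * math.log2(f / background[c])
        return score

    # perfectly conserved columns score higher than mixed ones
    print(relative_entropy_score(["ACG", "ACG", "ACG"]) >
          relative_entropy_score(["ACG", "CGT", "GTA"]))  # → True
    ```

    The NP-hardness result is about choosing the regions that maximize this score across all sequences, not about evaluating the score itself, which is cheap.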
  • Yoshifumi Okada, Wataru Fujibuchi, Paul Horton
    Article type: Original Papers
    Subject area: Original Paper
    2007 Volume 3 Pages 183-192
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    A gene expression module (module for short) is a set of genes with shared expression behavior under certain experimental conditions. Discovering modules enables us to uncover the function of uncharacterized genes or genetic networks. In recent years, several biclustering methods have been suggested for discovering modules from gene expression data matrices, where a bicluster is defined as a subset of genes that exhibit a highly correlated expression pattern over a subset of conditions. Biclustering, however, involves combinatorial optimization in selecting the rows and columns composing modules. Hence most existing algorithms are based on heuristic or stochastic approaches and may produce sub-optimal solutions. In this paper, we propose a novel biclustering method, BiModule, based on a closed itemset enumeration algorithm. By exhaustively enumerating such biclusters, it is possible to select only the biclusters satisfying certain criteria, such as a user-specified bicluster size or an enrichment of functional annotation terms. We performed comparative experiments against salient existing biclustering methods to test the validity of the biclusters extracted by BiModule, using synthetic data and real expression data. We show that BiModule performs well compared with the other methods in extracting artificially embedded modules as well as modules strongly related to GO annotations, protein-protein interactions, and metabolic pathways.
    Download PDF (864K)
  • Kou Amano, Hiroaki Ichikawa, Hidemitsu Nakamura, Hisataka Numa, Kaoru ...
    Article type: Database/Software Paper
    Subject area: Database/Software Paper
    2007 Volume 3 Pages 193-197
    Published: April 15, 2007
    Released on J-STAGE: April 04, 2007
    JOURNAL FREE ACCESS
    Recently, clustering has been recognized as an important and fundamental method for analyzing and classifying large-scale sequence data to provide useful information. We developed a novel clustering method, designated Self-organizing clustering (SOC), that uses oligonucleotide frequencies for large-scale DNA sequence data. We implemented SOC as a command-line program package and developed a server that provides access to it and enables visualization of the results. SOC effectively and quickly classifies many sequences that have low or no homology to each other. The command-line program is downloadable at http://rgp.nias.affrc.go.jp/programs/. The on-line web site is publicly accessible at http://rgp.nias.affrc.go.jp/SOC/. The common gateway interface (CGI) for the server is also provided within the package.
    Download PDF (394K)
  • Qamar Uddin Khand, Sumudu Dematapitiya, Sato Saga, Junji Maeda
    Article type: Intellectual Creative Tasks
    Subject area: Regular Paper
    2007 Volume 3 Pages 198-206
    Published: April 15, 2007
    Released on J-STAGE: April 11, 2007
    JOURNAL FREE ACCESS
    This paper presents a grid snapping technique, called multi-resolution fuzzy grid snapping (MFGS), that enables automatic mouse cursor snapping for a multi-resolution grid system. Quick and frequent switching between high- and low-resolution grid snapping is essential to make geometrical drawings that include both fine and coarse structures when using CAD systems with ordinary single-resolution grid systems. MFGS is intended to relieve users of this tedious manual switching. MFGS dynamically selects an appropriate snapping resolution level from a multi-resolution grid system according to the pointing behavior of the user. MFGS even supports an extremely fine grid resolution level, which is referred to as the no-snapping level. We show experimental results which demonstrate that MFGS is an effective grid snapping technique that speeds up low-resolution grid snapping while retaining the ability to snap to high-resolution grids. Furthermore, we examine the role of fuzziness in MFGS and its effect on snapping performance.
    Download PDF (2261K)
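    MFGS selects a snapping resolution from the user's pointing behavior with fuzzy rules; a crisp, much-simplified sketch of the same idea — fast motion snaps to a coarse grid, slow careful motion to a fine one — might look like this (the speed thresholds and grid sizes are invented for illustration):

    ```python
    def snap(x, y, speed, levels=(32, 8, 1)):
        """Pick a grid resolution from pointing speed and snap the cursor
        to it. MFGS makes this choice with fuzzy rules over pointing
        behavior; here a crisp threshold stands in for that logic."""
        if speed > 100:      # pixels/s; fast motion: coarse grid
            cell = levels[0]
        elif speed > 20:     # medium motion: medium grid
            cell = levels[1]
        else:                # slow, careful motion: fine grid
            cell = levels[2]  # cell size 1 behaves like the no-snapping level
        return (round(x / cell) * cell, round(y / cell) * cell)

    print(snap(45, 70, speed=150))  # coarse 32-pixel grid → (32, 64)
    print(snap(45, 70, speed=5))    # fine 1-pixel grid → (45, 70)
    ```

    The fuzzy version blends adjacent levels near the thresholds instead of switching abruptly, which is the behavior the paper's experiments evaluate.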
  • Daisuke Kimura
    Article type: Theory of Programs
    Subject area: Regular Paper
    2007 Volume 3 Pages 207-243
    Published: April 15, 2007
    Released on J-STAGE: April 11, 2007
    JOURNAL FREE ACCESS
    Wadler proposed the dual calculus, which corresponds to classical sequent calculus LK, and studied the relationship between the λμ-calculus and the dual calculus as equational systems to explain the duality between call-by-value and call-by-name in a purely syntactical way. Wadler left an open question whether one can obtain similar results by replacing the equations with reductions. This paper gives one answer to his question. We first refine the λμ-calculus as reduction systems by reformulating sum types and omitting problematic reduction rules that are not simulated by reductions of the dual calculus. Secondly, we give translations between the call-by-name λμ-calculus and the call-by-name dual calculus, and show that they preserve the call-by-name reductions. We also show that the compositions of these translations become identity maps up to the call-by-name reductions. We also give translations for the call-by-value systems, and show that they satisfy properties similar to the call-by-name translations. Thirdly, we introduce translations between the call-by-value λμ-calculus and the call-by-name one by composing the above translations with duality on the dual calculus. We finally obtain results corresponding to Wadler's, but our results are based on reductions.
    Download PDF (469K)
  • Daisuke Tasaki, Megumi Katou, Shinji Mizuno, Minoru Okada
    Article type: Computer Graphics
    Subject area: Regular Paper
    2007 Volume 3 Pages 244-254
    Published: May 15, 2007
    Released on J-STAGE: May 09, 2007
    JOURNAL FREE ACCESS
    The authors propose a method of producing virtual mezzotint using a physics-based rendering approach. Mezzotint is a traditional copperplate printing technique. An important characteristic is its gradations from black to white. This is acquired in three phases during the plate-making process, i.e., by roughening, scraping, and burnishing. Numerous dots and burrs are created with a rocker in the roughening phase over the entire surface of the copper plate to obtain black areas in the print. The burrs are removed in the second phase with a scraper to yield halftones. The plate surface is finally polished with a burnisher in the last phase to produce the white areas. A method of simulating these phases and physical phenomena is discussed in this paper to make mezzotint enjoyable even for beginners and children. Zigzag-stroke patterns and paraboloidal-dot models are applied to the rocker to simulate the roughening phase. Reducing and smoothing models are applied to the scraper and burnisher to simulate the scraping and burnishing phases. The feasibility of the proposed method is demonstrated by observing and comparing actual and virtual plate surfaces with determined patterns and actual pieces of handcrafted work.
    Download PDF (2883K)
  • Masahiro Konishi, Takashi Nakada, Tomoaki Tsumura, Hiroshi Nakashima, ...
    Article type: Processor Architecture
    Subject area: Regular Paper
    2007 Volume 3 Pages 255-268
    Published: May 15, 2007
    Released on J-STAGE: May 30, 2007
    JOURNAL FREE ACCESS
    This paper proposes an efficient algorithm to find the worst-case flush timings for a given program with respect to the number of branch mispredictions. We first give a basic algorithm based on dynamic programming that takes O(N²F) computation time for a program with N conditional branches and F flush timings. We then show that it can be improved to achieve a computation time of approximately O(NF) for practical programs, which we confirm through an evaluation with SPEC CPU95 benchmarks.
    Download PDF (511K)
  • Mizuki Oka, Kazuhiko Kato
    Article type: Network Security
    Subject area: Regular Paper
    2007 Volume 3 Pages 269-279
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    We propose the Eigen Co-occurrence Matrix (ECM) method, which is a modeling method for tracking the behaviors of an individual, system, or network in terms of event sequences of discrete data. Our method uses the correlation between events in a sequence to extract distinct characteristics. A key idea behind the ECM method is to regard a sequence as a serialized sequence that originally had structural relations and to extract the embedded dependencies of the events. To test its retrieval performance, we applied the ECM method to the problem of anomaly detection in intrusion detection systems. Specifically, we used the method to model a UNIX command sequence and attempted to detect intruders masquerading as valid users. The experimental results reveal that the ECM method offers distinct characteristic models for analyzing event sequences.
    Download PDF (719K)
  • Derek J. Corbett, Daniel Cutting
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 280-292
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    AD LOC is a novel system that enables mobile devices to collaboratively tie persistent virtual notes to physical locations. Notes relevant to particular locations can be created and then cached using serendipitously formed one-hop wireless ad hoc network connections. The location provides an address to which the information is relevant, and the system attempts to keep the information stored on devices that remain close to this address. In simulation experiments, even under conditions of nodal flux, over 90% of the stored notes could be retrieved.
    Download PDF (533K)
  • Richard Glassey, Graeme Stevenson, Robert Ian Ferguson
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 293-301
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    The partially connected nature of mobile and ubiquitous computing environments presents software developers with hard challenges. Mobile code has been suggested as a natural fit for simplifying software development for these environments. However, existing strategies for discovering mobile code assume an underlying fixed, stable network. An alternative approach is required for mobile environments, where network size may be unknown and reliability cannot be guaranteed. This paper introduces AMOS, a mobile object platform augmented with a structured overlay network that provides a fully decentralised approach to the discovery of mobile objects. We demonstrate how this technique has better reliability and scalability properties than existing strategies, with minimal communication overhead. Building upon this novel discovery strategy, we show how load balancing of mobile objects in an AMOS network can be achieved through probabilistic means.
    Download PDF (472K)
  • Kagenari Yamamoto, Miki Yamamoto
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 302-308
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    Ad hoc networks, which require no infrastructure equipment such as access points, have attracted great attention in recent years. In an ad hoc network, each host controls its wireless access in a distributed fashion. Generally, carrier sensing is used to detect channel availability. By using RTS and CTS control frames, the MAC sub-layer predicts channel use and resolves the hidden-terminal problem. However, the broadcast nature of RTS and CTS control frames leads to inefficient spatial reuse. One promising solution to this inefficiency is the use of directional antennas. Heterogeneous configurations are common in ad hoc networks, meaning that hosts are equipped with different network facilities; for antennas, the usual situation will be that not all hosts are equipped with a directional antenna. This paper evaluates the throughput performance of an ad hoc network in this realistic situation and shows how much performance improvement the deployment of directional antennas brings to the network.
    Download PDF (365K)
  • Hamadoun Diall, Kishore Raja, Ioannis Daskalopoulos, Stephen Hailes, G ...
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 309-319
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    The Sensor Cube platform is an ultra-compact, modular and power-aware way of building sensor networks; it is based on a stackable hardware design supported by a Tiny OS based operating environment. The Sensor Cube hardware measures 14 × 14 × 18 mm³ and features an integrated coplanar antenna, a design that results in an ultra-compact footprint. A core characteristic of the system is that its modular design allows each of the radio, processor, sensing and power management layers to be interchanged in a Lego-like manner. Moreover, its low-power radio (based on the 2.4 GHz Nordic nRF2401 design) and microcontroller (based on the Texas Instruments MSP430) allow for very efficient operation. The Sensor Cube operating and software development environment is derived from Tiny OS, which has been modified to meet the hardware requirements, in particular by introducing a power-aware and reliable ALOHA-type MAC protocol. In this paper we present our experience with the Sensor Cube platform and, in particular, the implications of its ultra-compact design on system performance, specifically as it relates to the characteristics and limitations of the radio unit.
    Download PDF (294K)
  • Md. Nurul Huda, Eiji Kamioka, Shigeki Yamada
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 320-331
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    Existing cryptography-based algorithms for multi-party computation problems (MPCPs) are either too complex to be used practically or applicable only to the specific applications for which they have been developed. In addition, traditional (non-cryptography-based) algorithms do not provide good privacy protection for MPCPs. This paper presents the security system design, intuitive analysis, and empirical evaluation of an agent server platform for protecting privacy when solving multi-party computation problems with traditional algorithms. We have devised security policies for an agent server platform and shown how the enforcement of those policies can effectively protect privacy while solving many MPCPs. Experimental analysis shows that the presented agent platform security system can significantly enhance privacy protection while solving such problems with traditional algorithms.
    Download PDF (898K)
  • Mariann Hauge, Øivind Kure
    Article type: Special Issue on Selected Papers from ICMU2006
    Subject area: Regular Paper
    2007 Volume 3 Pages 332-342
    Published: June 15, 2007
    Released on J-STAGE: June 06, 2007
    JOURNAL FREE ACCESS
    The current 3G-Cellular radio access network cannot support many concurrent high data rate unicast or multicast flows due to limited radio resources. We have proposed a heterogeneous wireless network architecture intended for point-to-multipoint services, to improve the availability of such services to mobile users. The architecture consists of a 3G-Cellular network, supported by a number of local ad hoc networks that are established on demand. In this framework the 3G multipoint-channel range is reduced while the unicast and signalling connections are maintained. Local ad hoc networks are used to forward the multicast data onto users located outside the shortened 3G multicast-channel range. In this paper we present a performance analysis of multicast streaming on the heterogeneous network architecture. The simulation results are complemented with a sensitivity analysis identifying the impact that parameters like node mobility and traffic patterns will have. The results verify that the architecture and the routing protocol are able to provide multicast services with acceptable quality to the multicast subscribers, while conserving 3G-Cellular radio resources.
    Download PDF (683K)
  • Guoqiang Li, Mizuhito Ogawa
    Article type: Regular Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 343-368
    Published: June 15, 2007
    Released on J-STAGE: June 27, 2007
    JOURNAL FREE ACCESS
    Trace analysis for a security protocol represents every possible run as a trace and analyzes whether any insecure run is reachable. The number of traces is infinite due to (1) infinitely many sessions of a protocol, (2) infinitely many principals in the network, and (3) infinitely many messages that intruders can generate. This paper presents an on-the-fly model checking method that restricts/abstracts these infinite factors to a finite model. First, we restrict a typed process calculus to avoid recursive operations, so that only finitely many sessions are considered. Next, a bound variable is introduced as an index of a message to represent its intended destination, so that an unbounded number of principals are finitely described. Then, messages whose irrelevant parts are reduced in a protocol are unified into a parametric message based on the type information. We implement the on-the-fly model checking method in Maude and automatically detect the flaws of several security protocols, such as the NSPK protocol and the Woo-Lam protocol.
    Download PDF (364K)
  • Toru Kato
    Article type: Regular Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 369-379
    Published: June 15, 2007
    Released on J-STAGE: June 27, 2007
    JOURNAL FREE ACCESS
    The ambient calculus is a process algebra designed for describing mobile processes. It has a layered structure of ambients that enables us to describe not only mobile processes but also the world in which the processes move around, such as computer networks and freight systems. When we describe such a system with ambients, however, malicious processes can destroy nodes of the network or alter the construction of the system. Thus, several mobility types for the ambient calculus have been proposed to enable us to give each node desirable characteristics that prevent malicious processes from acting harmfully. Gordon and Cardelli, the originators of the ambient calculus, also defined an equivalence relation for the untyped ambient calculus. Our previous work pointed out that there exist processes identified under that relation that nevertheless have different properties, and it refined the relation so that such processes can be discriminated. This paper shows that neither the original relation nor our previous relation remains applicable to the typed ambient calculus, and it presents another relation that is suitable.
    Download PDF (184K)
  • Carl Christian Frederiksen, Masami Hagiya
    Article type: Regular Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 380-403
    Published: June 15, 2007
    Released on J-STAGE: June 27, 2007
    JOURNAL FREE ACCESS
    The transition predicate abstraction framework developed by Podelski, et al. (2005) captures size relations over state transitions which can be used to show infeasibility of certain program computations. In particular, general liveness properties (i.e., properties of infinite computations) can be verified by reducing the verification problem to one of fair termination and then proving that all (infinite) fair computations are infeasible. We present an extension of the algorithm by Podelski, et al. that can be used to improve the precision of transition predicate abstraction as well as speed up analysis time for programs with well-structured control flow. The key idea is to identify sub-computations that can be evaluated independently of their context. Efficiency is then readily improved by analyzing each sub-computation in turn, thus avoiding reanalyzing the effect of a given sub-computation for different contexts. Further, precision can be improved by using stronger methods for extracting summary information about a given sub-computation. We present two versions of the sub-computation based analysis: one for a non-parallel imperative language with loops and recursive procedures, serving as an introduction, and one for the extension of the non-parallel language to a parallel language with synchronous communication via statically named channels.
    Download PDF (456K)
  • Tomohito Masuda, Katsushi Ikeuchi
    Article type: Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 404-423
    Published: June 15, 2007
    Released on J-STAGE: June 27, 2007
    JOURNAL FREE ACCESS
    In this paper, we investigate conventional registration implementations, consisting of rotation and translation, to design the most precise registration so as to accurately restore the 3D shape of an object. To achieve the most accurate registration, our implementation needs robustness against data noise and against the initial pose and position of the data. To verify the accuracy of our implemented registration, we compare its behavior with that of conventional methods and evaluate the numerical accuracy of the transformation parameters it obtains. However, registration by rigid-body transformation is not enough for modeling and shape comparison: registration with deformation is needed. In this paper, we therefore extend our robust registration to simultaneously estimate the shape parameters as well as the rigid-body transformation parameters. This extension assumes that the deformation is formulated strictly from the deformation mechanism. We additionally introduce applications of the extended method.
    Download PDF (4278K)
  • Atsuhiko Banno, Katsushi Ikeuchi
    Article type: Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 424-441
    Published: June 15, 2007
    Released on J-STAGE: June 27, 2007
    JOURNAL FREE ACCESS
    “Modeling from Reality” techniques are making great progress because of the availability of accurate geometric data from three-dimensional digitizers. These techniques contribute to numerous applications in many areas. Among them, one of the most important and comprehensive applications is modeling cultural heritage objects. For a large object, scanning from the air is one of the most efficient methods for obtaining 3D data. We developed a novel 3D measurement system, the Floating Laser Range Sensor (FLRS), in which a range sensor is suspended beneath a balloon. The obtained data, however, have some distortions due to movement of the system during the scanning process. We propose two novel methods to rectify the shape data obtained by the moving range sensor. One method rectifies the data by using image sequences; the other rectifies the data without images. To test these methods, we have conducted a digital archiving project of a large-scale heritage object, in which our algorithms are applied. The results show the effectiveness of our methods. Moreover, both methods are applicable not only to our FLRS, but also to moving range sensors in general.
    Download PDF (2531K)
  • Shintaro Urabe, Jiahong Wang, Eiichiro Kodama, Toyoo Takata
    Article type: Research Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 442-455
    Published: July 15, 2007
    Released on J-STAGE: July 04, 2007
    JOURNAL FREE ACCESS
    Data mining across different companies, organizations, online shops, or the like, called sites, is necessary in order to discover valuable shared patterns, associations, trends, or dependencies in their shared data. Privacy, however, is a concern. In many situations it is required that data mining be conducted without any privacy being violated. In response to this requirement, this paper proposes an effective distributed privacy-preserving data mining approach called CRDM (Collusion-Resistant Data Mining). CRDM is characterized by its ability to resist collusion: unless all sites but the victim collude, the privacy of a site cannot be violated. Considering that, for applications that do not need so high a level of security, excess security assurance would incur extra communication costs, an extension scheme is also presented so that communication cost can be restrained while a user-specified level of security is maintained. Results of both analytical and experimental performance studies demonstrate the effectiveness of CRDM.
    Download PDF (1027K)
  • Toshihiro Shikama, Tetsushi Matsuda, Yoshiaki Terashima, Takashi Watan ...
    Article type: Network Protocols
    Subject area: Regular Paper
    2007 Volume 3 Pages 456-467
    Published: July 15, 2007
    Released on J-STAGE: July 11, 2007
    JOURNAL FREE ACCESS
    This paper studies the situation where multiple IP flows are aggregated over a single wireless channel and error recovery by retransmission is performed with Selective-Repeat (SR) ARQ. We propose MQ-PFRS (Multi-QoS Per-Flow Resequencing) ARQ, which provides a differentiated service for each IP flow depending on its QoS class, such as real-time or non-real-time. MQ-PFRS ARQ eliminates interference among IP flows by resequencing received packets independently for each IP flow. It also controls the maximum packet delay by limiting the persistency of retransmissions and the maximum resequencing time for each packet. This paper also presents an analysis of the probability distribution of real-time packet delays. Simulation results show that the delay variation of real-time traffic is effectively controlled by the proposed MQ-PFRS ARQ and that the packet delay distribution is quite consistent with the results of the analysis. This means that MQ-PFRS is effective for handling multiple QoS classes of IP flows and that the quality of real-time traffic transferred by the scheme can be predicted by the analysis.
    Download PDF (744K)
  • Masanori Takata, Masaki Bandai, Takashi Watanabe
    Article type: Adhoc networks
    Subject area: Regular Paper
    2007 Volume 3 Pages 468-479
    Published: July 15, 2007
    Released on J-STAGE: July 11, 2007
    JOURNAL FREE ACCESS
    Several directional MAC protocols for ad hoc networks using directional antennas have been proposed recently. Although directional antennas have great potential, such as high spatial reuse and range extension, directional MAC protocols inherently introduce new problems arising from directivity. Deafness, caused by a lack of state information from neighbor nodes, is one of the major problems and reduces performance. This paper presents two directional MAC protocols that handle the deafness problem, DMAC/DA (Directional MAC with Deafness Avoidance) and RI-DMAC (Receiver-Initiated Directional MAC), and evaluates them through an extensive simulation study. DMAC/DA handles deafness proactively: WTS (Wait To Send) frames are transmitted to notify potential transmitters that may experience deafness of the on-going communication. In this paper, DMAC/DA is enhanced by next-packet notification, called DMAC/DA with NPN, to distinguish transmitters from other neighbors. RI-DMAC, on the other hand, handles deafness reactively, using a combination of sender-initiated and receiver-initiated operations: each node polls a potentially deaf node using RTR (Ready To Receive) after the completion of every dialog. The experimental results show that the proposed protocols outperform existing directional MAC protocols in terms of throughput, control overhead and packet drop ratio.
    Download PDF (864K)
  • Yujin Noishiki, Hidetoshi Yokota, Akira Idoue
    Article type: Adhoc networks
    Subject area: Regular Paper
    2007 Volume 3 Pages 480-491
    Published: July 15, 2007
    Released on J-STAGE: July 11, 2007
    JOURNAL FREE ACCESS
    Recent advances in wireless communication technologies have drawn attention to ad hoc networking, which enables mobile users to communicate with each other without any infrastructure. While many ad hoc routing protocols have been proposed, ad hoc communication and its applications are not yet widespread. The handiest wireless devices, namely mobile phone terminals, are expected to expedite the widespread use of ad hoc communications. More and more mobile phones are equipped with short-distance wireless interfaces such as wireless LAN and Bluetooth. When ad hoc communication is realized on such mobile phones, it will be available anytime, anywhere, which we believe will become a driving force for ad hoc networking technology. However, although the performance of mobile phones has improved greatly over the past several years, their resources are still limited compared to those of laptop PCs. In this paper, we first present design principles for mobile ad hoc communications based on assumed use cases. Following these design principles, we implement ad hoc networking protocols as well as their applications on mobile phones to verify their effectiveness. Based on the results of our evaluations, we discuss the performance characteristics and potential application areas.
    Download PDF (639K)
  • Pao Sriprasertsuk, Akiko Seki, Wataru Kameyama
    Article type: Applications in Humanities
    Subject area: Regular Paper
    2007 Volume 3 Pages 492-505
    Published: August 15, 2007
    Released on J-STAGE: August 08, 2007
    JOURNAL FREE ACCESS
    Almost all traditional methods of advertisement distribution have been concerned only with primary distribution of information via certain kinds of media. However, the rapid growth of the Internet and interactive media has demonstrated the power and efficiency of secondary distribution of information by consumers, such as word of mouth and free-mail. Yet an advertisement distribution model that can be used to analyze and measure the effectiveness of such secondary distribution has never been discussed. In this paper, we therefore propose an advertisement distribution model and show how to use it to analyze both primary and secondary information distribution. Our experimental and analytical results are also discussed. The experimental results show that the proposed model can be used to measure and analyze the effectiveness of advertisement distribution.
    Download PDF (667K)
  • Naoharu Yamada, Kenji Sakamoto, Goro Kunito, Yoshinori Isoda, Kenichi ...
    Article type: Mobile Computing
    Subject area: Regular Paper
    2007 Volume 3 Pages 506-517
    Published: August 15, 2007
    Released on J-STAGE: August 08, 2007
    JOURNAL FREE ACCESS
    This paper proposes human activity recognition based on the actual semantics of the human's current location. Since no predefined semantics of a location can adequately identify human activity, we automatically identify the semantics from things, focusing on the association between things and the human activities involving them. An ontology is used to deal with the various possible representations (terms) of each thing, identified by an RFID tag, and a multi-class Naive Bayesian approach is applied to detect multiple actual semantics from the terms. Our approach is suitable for automatically detecting possible activities even given a variety of object characteristics, including multiple representations and variability. Simulations with actual thing datasets and experiments in an actual environment demonstrate its noise tolerance and its ability to rapidly detect multiple actual semantics from existing things.
    Download PDF (1231K)
  • Shuji Morisaki, Akito Monden, Haruaki Tamada, Tomoko Matsumura, Ken-ic ...
    Article type: Development Environments and Automated Technologies
    Subject area: Regular Paper
    2007 Volume 3 Pages 518-527
    Published: August 15, 2007
    Released on J-STAGE: August 08, 2007
    JOURNAL FREE ACCESS
    This paper proposes a method to mine rules from a software project data set that contains a number of quantitative attributes such as staff months and SLOC. The proposed method extends conventional association analysis methods to treat quantitative variables in two ways: (1) the distribution of a given quantitative variable is described in the consequent part of a rule by its mean value and standard deviation so that conditions producing the distinctive distributions can be discovered. To discover optimized conditions, (2) quantitative values appearing in the antecedent part of a rule are divided into contiguous fine-grained partitions in preprocessing, then rules are merged after mining so that adjacent partitions are combined. The paper also describes a case study using the proposed method on a software project data set collected by Nihon Unisys Ltd. In this case, the method mined rules that can be used for better planning and estimation of the integration and system testing phases, along with criteria or standards that help with planning of outsourcing resources.
    Download PDF (410K)
  • Ken Hironaka, Kenjiro Taura, Takashi Chikayama
    Article type: Regular Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 528-540
    Published: August 15, 2007
    Released on J-STAGE: August 29, 2007
    JOURNAL FREE ACCESS
    We propose a low-stretch scheme for locating mobile objects in wide-area computing environments. Locating mobile objects in distributed computing systems is a non-trivial problem that has been investigated for decades. The forwarding-address algorithm, perhaps the most popular, requires the previous holder of an object to point to the successive holder and to forward all requests along this pointer. However, this approach cannot provide any bound on access stretch in wide-area settings and can incur unlimited communication overhead. This is unacceptable when a large number of objects move simultaneously or when numerous referencers attempt to access an object that has moved. We propose an active update method in which nodes in the vicinity of the object's location are notified of its new location via localized update messages, and we utilize overlay topology information to minimize these messages. Referencers beyond the scope of the update will still be able to safely access the object. We demonstrate that these updates keep access stretch low even in wide-area settings.
    Download PDF (725K)
  • Tomoya Takenaka, Hiroshi Mineno, Yuichi Tokunaga, Naoto Miyauchi, Tada ...
    Article type: Wireless/Mobile Networks
    Subject area: Regular Paper
    2007 Volume 3 Pages 541-554
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    Node localization, i.e., estimating node positions, is an essential technique for wireless multi-hop networks. In this paper, we present an optimized link state routing (OLSR)-based localization (ROULA) that satisfies the following key design requirements: (i) independence from anchor nodes, (ii) robustness for non-convex network topologies, and (iii) compatibility with the network protocol. ROULA is independent of anchor nodes and can obtain correct node positions in non-convex network topologies. In addition, ROULA is compatible with the OLSR network protocol and uses the inherent distance characteristic of multipoint relay (MPR) nodes. We reveal the characteristics of MPR selection and of the farthest 2-hop node selection used in ROULA, and describe how these node selections contribute to reducing the distance error of a localization scheme that uses no ranging devices. We used simulation to determine an appropriate MPR_COVERAGE, which is defined to control the number of MPR nodes in OLSR, and give a comparative performance evaluation of ROULA for various scenarios, including non-convex network topologies and various deployment radii of anchor nodes. Our evaluation shows that ROULA achieves desirable performance in various network scenarios.
    Download PDF (560K)
  • Haruki Ota, Kazuki Yoneyama, Shinsaku Kiyomoto, Toshiaki Tanaka, Kazuo ...
    Article type: Network Security
    Subject area: Regular Paper
    2007 Volume 3 Pages 555-570
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    In large-scale networks, users want to be able to communicate securely with each other over a channel that is unreliable. When the existing 2- and 3-party protocols are realized in this situation, there are several problems: a client must hold many passwords and the load on the server concerning password management is heavy. In this paper, we define a new ideal client-to-client general authenticated key exchange functionality, where arbitrary 2-party key exchange protocols are applicable to protocols between the client and server and between servers. We also propose a client-to-client general authenticated key exchange protocol C2C-GAKE as a general form of the client-to-client model, and a client-to-client hybrid authenticated key exchange protocol C2C-HAKE as an example protocol of C2C-GAKE to solve the above problems. In C2C-HAKE, a server shares passwords only with clients in the same realm respectively, public/private keys are used between respective servers, and two clients between different realms share a final session key via the respective servers. Thus, with regard to password management in C2C-HAKE, the load on the server can be distributed to several servers. In addition, we prove that C2C-HAKE securely realizes the above functionality. C2C-HAKE is the first client-to-client hybrid authenticated key exchange protocol that is secure in a universally composable framework with a security-preserving composition property.
    Download PDF (366K)
  • Kazuki Yoneyama, Kazuo Ohta
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2007 Volume 3 Pages 571-584
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    Though the anonymity of ring signature schemes has been studied in many publications, these papers gave different definitions and there has been no consensus. Recently, Bender, et al. proposed two new anonymity definitions for ring signature schemes which are stronger than the previous definitions, called anonymity against attribution attacks and against full key exposure. In addition, ring signature schemes have two levels of unforgeability definitions, i.e., existential unforgeability and strong existential unforgeability. In this paper, we redefine the anonymities and unforgeabilities within the universally composable (UC) security framework. First, we give new ideal functionalities of ring signature schemes for each security level separately. Next, we show the relations between game-based security definitions and our UC definitions. Finally, we give another proof for the security of the Bender, et al.'s ring signature scheme within the UC framework. The simulator we construct in this proof can easily simulate an adversary against existential unforgeability, and can be adapted to the case of strong existential unforgeability if we assume the exploited signature scheme is a standard single strongly existentially unforgeable signature scheme.
    Download PDF (304K)
  • Wei Liu, Hideyuki Tanaka, Kanta Matsuura
    Article type: Security and Society
    Subject area: Regular Paper
    2007 Volume 3 Pages 585-599
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    This paper presents a series of empirical analyses of information-security investment based on a reliable survey of Japanese enterprises. To begin with, after presenting our methodology for representing the vulnerability level with respect to the threat of computer viruses, we verify the relation between the vulnerability level and the effects of information-security investment. Although the first section provides only weak empirical support for the investment model, it shows that the representation methodology is worth pursuing in empirical analyses in this research field. In the second section, we verify the relations between the probability of computer-virus incidents and the adoption of a set of information-security countermeasures. It is shown that a “Defense Measure” combined with an “Information Security Policy” and “Human Cultivation” has a remarkable effect on virus incidents. As a last step, we analyze the effect of continuous investment in the three security countermeasures. The empirical results suggest that virus incidents were significantly reduced in those enterprises that adopted all three countermeasures in both 2002 and 2003.
    Download PDF (1637K)
  • Tomohiko Hinoue, Atsuko Miyaji, Takatomi Wada
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2007 Volume 3 Pages 600-609
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    Knudsen and Meier applied the χ²-attack to RC6. The χ²-attack recovers a key by exploiting high correlations measured by the χ²-value. The best previous χ²-attack on RC6 whose security is guaranteed theoretically works on 16-round RC6 with 192- and 256-bit keys but only on 8-round RC6 with a 128-bit key, because it recovers RC6 keys symmetrically, which requires a time complexity of #plaintexts × 2^54 and a memory complexity of 2^80 for recovering one key. In this paper, we improve the χ²-attack to reduce the time complexity. We give a theorem that evaluates the success probability of the χ²-attack on RC6 without using any experimental results. Our key-recovery attack recovers keys asymmetrically, which requires a time complexity of #plaintexts × 2^31 and a memory complexity of 2^52 for recovering one key. As a result, our key-recovery attack works on 16-round RC6 with 192- and 256-bit keys and on 12-round RC6 with a 128-bit key. For both 192- and 256-bit keys, our attack markedly reduces the time and memory complexity compared with the previous attack. We also demonstrate our theorem on RC6-8/4/8 and confirm its accuracy by comparing our approximation with the experimental results.
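    The χ²-statistic at the heart of such attacks measures how far an observed distribution deviates from uniform; ciphertext bits influenced by a weak round function are biased, so the statistic spikes for the correct key guess. A minimal sketch of the statistic itself (illustrative only; the counts below are made up and this is not the paper's key-recovery procedure):

    ```python
    def chi2_statistic(counts):
        """Pearson's chi-square statistic of observed counts against
        the uniform distribution over len(counts) buckets."""
        n = sum(counts)
        m = len(counts)
        expected = n / m
        return sum((c - expected) ** 2 / expected for c in counts)

    # 10,000 samples over 16 buckets (e.g., 4 bits of intermediate state).
    uniform = [625] * 16            # perfectly flat: statistic is 0.0
    biased = [2500] + [500] * 15    # one over-represented value: statistic is 6000.0
    print(chi2_statistic(uniform))  # 0.0
    print(chi2_statistic(biased))   # 6000.0
    ```

    In a χ²-attack, the statistic is computed per candidate key; the candidate yielding the largest value is taken as the recovered key.
    
    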
    Download PDF (295K)
  • Atsuko Miyaji
    Article type: Security Infrastructure
    Subject area: Regular Paper
    2007 Volume 3 Pages 610-624
    Published: September 15, 2007
    Released on J-STAGE: September 05, 2007
    JOURNAL FREE ACCESS
    An ID-based encryption (IBE) scheme is a public-key cryptosystem in which a user's public key is the user's ID. In IBE, a single center generates all user secret keys, which may impose a burdensome workload on the center. Hierarchical ID-based encryption (HIBE) is a kind of IBE that overcomes this problem by delegating user-secret-key generation to lower-level centers, which form a hierarchical structure. However, all ancestor nodes in HIBE act as centers. That is, any ancestor, not just the root, can generate a secret key for any descendant node, and thus a ciphertext addressed to a node can be decrypted by any ancestor node even if that ancestor does not hold the same secret key as the target node. In this paper, we propose the concept of ancestor-excludable HIBE, in which ancestors below a designated level can be excluded from the set of privileged ancestors entitled to decrypt a ciphertext addressed to a target node. We also give the functional definition together with the security definition. This notion is denoted simply by AE-HIBE. We present a concrete example of AE-HIBE that works with constant-size ciphertexts and constant decryption time, independent of the hierarchy level. We prove that our AE-HIBE is selective-ID-CPA secure in the standard model, and it can be converted to a selective-ID-CCA-secure scheme by applying a general conversion method. Furthermore, AE-HIBE can be naturally applied to broadcast encryption to realize an efficient public-key version with a user-key size of O(log2 N) and a transmission rate of O(r) for N users and r revoked users. This user-key size is the smallest to date at a transmission rate of O(r).
    Download PDF (484K)
  • Tetsuya Sakai
    Article type: Research Papers
    Subject area: Regular Paper
    2007 Volume 3 Pages 625-642
    Published: September 15, 2007
    Released on J-STAGE: September 26, 2007
    JOURNAL FREE ACCESS
    This paper describes how the bootstrap approach to statistics can be applied to the evaluation of IR effectiveness metrics. More specifically, we describe straightforward methods for comparing the discriminative power of IR metrics based on Bootstrap Hypothesis Tests. Unlike the somewhat ad hoc Swap Method proposed by Voorhees and Buckley, our Bootstrap Sensitivity Methods estimate the overall performance difference required to achieve a given confidence level directly from Bootstrap Hypothesis Test results. We demonstrate the usefulness of our methods using four different data sets (i.e., test collections and submitted runs) from the NTCIR CLIR track series for comparing seven IR metrics, including those that can handle graded relevance and those based on the Geometric Mean. We also show that the Bootstrap Sensitivity results are generally consistent with those based on the more ad hoc methods.
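    The paired bootstrap hypothesis test underlying such sensitivity methods can be sketched as follows. This is a generic shift-method bootstrap test on hypothetical per-topic scores; the function name, parameters, and data are ours, not the paper's:

    ```python
    import random

    def bootstrap_pvalue(scores_a, scores_b, trials=10000, seed=0):
        """Paired bootstrap test: is the mean per-topic difference between
        two systems (or two metrics' rankings) significantly nonzero?"""
        rng = random.Random(seed)
        diffs = [a - b for a, b in zip(scores_a, scores_b)]
        n = len(diffs)
        observed = sum(diffs) / n
        # Shift the differences to have zero mean (the null hypothesis),
        # then resample with replacement and count resampled means at
        # least as extreme as the observed one.
        shifted = [d - observed for d in diffs]
        extreme = 0
        for _ in range(trials):
            sample_mean = sum(shifted[rng.randrange(n)] for _ in range(n)) / n
            if abs(sample_mean) >= abs(observed):
                extreme += 1
        return extreme / trials

    # Hypothetical average-precision scores over 20 topics.
    sys_a = [0.42, 0.55, 0.31, 0.60, 0.48] * 4
    sys_b = [0.40, 0.50, 0.30, 0.52, 0.45] * 4
    print(bootstrap_pvalue(sys_a, sys_b))  # small p-value: difference is significant
    ```

    A discriminative metric is one for which such tests reach significance on a large fraction of system pairs at a given confidence level.
    
    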
    Download PDF (814K)