Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 4, Issue 4
Displaying 1-35 of 35 articles from this issue
Hardware and Devices
  • Jun Yao, Kosuke Ogata, Hajime Shimada, Shinobu Miwa, Hiroshi Nakashima ...
    2009 Volume 4 Issue 4 Pages 696-713
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    To reduce processor energy consumption under low-workload, low-clock-frequency execution, a possible solution is to use ALU cascading while keeping the supply voltage unchanged. This cascading scheme uses a single cycle to execute multiple ALU instructions that have a data dependence relationship between them and thus saves clock cycles over the whole execution. Since processor energy consumption is the product of power and execution time, ALU cascading is expected to help energy optimization for microprocessors operating at low frequency. To implement ALU cascading in a current superscalar processor, a specific instruction scheduler is required to wake up a pair of cascadable instructions simultaneously despite the data dependence relationship between them. Furthermore, ALU cascading is applied only in the low-clock-frequency execution mode, so the instruction scheduler must also support standard scheduling for normal-clock-frequency execution. In this paper, we propose an instruction scheduling method that enables these additional wakeup features for the utilization of ALU cascading without large hardware extensions. With this scheduler, the average IPC improvement is 3.7% in SPECint2000 and 6.4% in MediaBench, as compared to the baseline execution. The delay of the additional hardware required for ALU cascading is also evaluated to study the complexity of ALU cascading.
    Download PDF (1630K)
  • Gang Zeng, Hiroyuki Tomiyama, Hiroaki Takada
    2009 Volume 4 Issue 4 Pages 714-726
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    A dynamic energy performance scaling (DEPS) framework is proposed for energy savings in hard real-time embedded systems. In this generalized framework, two existing technologies, dynamic hardware resource configuration (DHRC) and dynamic voltage/frequency scaling (DVFS), are combined for energy-performance tradeoff. The problem of selecting the optimal hardware configuration and voltage/frequency parameters is formulated so as to achieve maximal energy savings while meeting the deadline constraint. Through case studies, the effectiveness of DEPS is validated. (An illustrative sketch of this kind of selection problem follows this entry.)
    Download PDF (607K)
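    The following is a minimal Python sketch of the kind of selection problem DEPS formulates: choose a (hardware configuration, voltage/frequency) operating point that minimizes energy while still meeting the task deadline. The candidate table and all numbers below are invented for illustration and are not taken from the paper.
      # Minimal sketch of a DEPS-style selection problem: choose a
      # (hardware configuration, voltage/frequency) operating point that
      # minimizes energy while meeting a task deadline.  The candidate
      # table is illustrative, not from the paper.

      # (config name, frequency in MHz, execution cycles, energy in mJ)
      CANDIDATES = [
          ("full-cache, 1.2V", 400, 4_000_000, 12.0),
          ("full-cache, 1.0V", 300, 4_000_000,  8.5),
          ("half-cache, 1.2V", 400, 5_200_000, 10.5),
          ("half-cache, 1.0V", 300, 5_200_000,  7.8),
      ]

      def select_operating_point(deadline_ms):
          """Return the lowest-energy candidate whose execution time meets the deadline."""
          feasible = []
          for name, freq_mhz, cycles, energy_mj in CANDIDATES:
              exec_ms = cycles / (freq_mhz * 1e3)   # cycles / (cycles per ms)
              if exec_ms <= deadline_ms:
                  feasible.append((energy_mj, name, exec_ms))
          if not feasible:
              raise ValueError("no configuration meets the deadline")
          return min(feasible)   # smallest energy first

      print(select_operating_point(deadline_ms=15.0))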
  • Yoshinobu Higami, Kewal K. Saluja, Hiroshi Takahashi, Sin-ya Kobayashi ...
    2009 Volume 4 Issue 4 Pages 727-739
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The conventional stuck-at fault model is no longer sufficient to deal with the problems of nanometer geometries in modern Large Scale Integrated Circuits (LSIs). Test and diagnosis for transistor defects are required. In this paper we propose a fault diagnosis method for transistor shorts in combinational and full-scan circuits that are described at the gate level. Since it is difficult to describe the precise behavior of faulty transistors, we define two types of transistor short models by focusing on the output values of the corresponding faulty gate. Some of the salient features of the proposed diagnosis method are 1) it uses only gate-level simulation and does not use transistor-level simulation such as SPICE, 2) it uses a conventional stuck-at fault simulator yet is able to handle transistor shorts, making it suitable for large circuits, and 3) it is efficient and accurate. We apply our method to ISCAS benchmark circuits to demonstrate its effectiveness.
    Download PDF (339K)
  • Yuko Hara, Hiroyuki Tomiyama, Shinya Honda, Hiroaki Takada
    2009 Volume 4 Issue 4 Pages 740-752
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    In general, standard benchmark suites are critically important for researchers to quantitatively evaluate their new ideas and algorithms. This paper proposes CHStone, a suite of benchmark programs for C-based high-level synthesis. CHStone consists of a dozen large, easy-to-use programs written in C, selected from various application domains. This paper also analyzes the characteristics of the CHStone benchmark programs, which will be valuable for researchers who use CHStone to evaluate their new techniques. In addition, we present future challenges to be solved toward practical high-level synthesis.
    Download PDF (395K)
Computing
  • Naoto Yukinawa, Taku Yoshioka, Kazuo Kobayashi, Naotake Ogasawara, Shi ...
    2009 Volume 4 Issue 4 Pages 753-768
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Clustering is a practical data analysis step in gene expression-based studies. Model-based clustering methods, which are based on probabilistic generative models, have two advantages: the number of clusters can be determined on the basis of statistical criteria, and the clusters are robust against observation noise in the data. Many existing approaches assume multivariate Gaussian mixtures as generative models, which is analogous to using Euclidean or Mahalanobis-type distance as the similarity measure. However, these types of similarity measures often fail to detect co-expressed gene groups. We propose a novel probabilistic model for cluster analysis based on the correlation between gene expression patterns. We also propose a “meta” cluster analysis method to eliminate the dependence of the clustering result on the initial values of the clustering algorithm. In empirical studies with a time-course gene expression dataset of Bacillus subtilis during sporulation, our method yields more stable and informative results than the ordinary Gaussian mixture model-based clustering, k-means clustering and hierarchical clustering algorithms that are widely used in this field. In addition, with the meta-cluster analysis, biologically meaningful expression patterns are extracted from a set of clustering results. The constraints in our model worked more efficiently than those in previous studies and contributed to the stability of the clustering results in our experiments. Moreover, clustering based on Bayesian inference was found to be more stable than clustering based on conventional maximum likelihood estimation. (A small illustration of correlation-based versus Euclidean similarity follows this entry.)
    Download PDF (615K)
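    A small numpy illustration (not the paper's model) of why a correlation-based similarity can detect co-expressed genes that Euclidean distance misses: two genes sharing the same temporal pattern but with different amplitude and offset are far apart in Euclidean distance yet identical under a correlation-based measure.
      # Illustration (not the paper's model): correlation-based distance
      # versus Euclidean distance on toy gene expression time courses.
      import numpy as np

      pattern = np.array([0.0, 1.0, 3.0, 2.0, 0.5])   # shared time-course shape
      gene_a = 1.0 * pattern
      gene_b = 4.0 * pattern + 2.0                     # co-expressed: scaled and shifted
      gene_c = 3.0 - pattern                           # anti-correlated shape

      def euclidean(x, y):
          return float(np.linalg.norm(x - y))

      def correlation_distance(x, y):
          # 1 - Pearson correlation: 0 means identical shape, 2 means opposite shape
          return 1.0 - float(np.corrcoef(x, y)[0, 1])

      print("Euclidean   a-b:", euclidean(gene_a, gene_b), " a-c:", euclidean(gene_a, gene_c))
      print("Correlation a-b:", correlation_distance(gene_a, gene_b),
            " a-c:", correlation_distance(gene_a, gene_c))
      # Euclidean distance calls the anti-correlated gene_c the closer one,
      # while the correlation-based measure correctly groups gene_a with gene_b.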
  • Ai Mikami, Jianming Shi
    2009 Volume 4 Issue 4 Pages 769-779
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    In this study, we use the Ant Colony System (ACS) to develop a heuristic algorithm for sequence alignment. This algorithm improves on ACS-MultiAlignment, which was proposed in 2005 for predicting major histocompatibility complex (MHC) class II binders. Numerical experiments indicate that this algorithm is as much as 2,900 times faster than the original ACS-MultiAlignment algorithm. We also compare this algorithm with other approaches, such as the Gibbs sampling algorithm, through numerical experiments. The results show that our algorithm finds the best value more quickly than the Gibbs approach.
    Download PDF (649K)
  • Jorji Nonaka, Kenji Ono, Hideo Miyachi
    2009 Volume 4 Issue 4 Pages 780-788
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper presents a performance evaluation of large-scale parallel image compositing on a T2K Open Supercomputer. Traditional image compositing algorithms were not primarily designed to exploit the combined message-passing and shared-address-space parallelism provided by systems such as the T2K Open Supercomputer. In this study, we investigate the Binary-Swap image compositing method because of its promising potential for scalability. We propose some improvements to the Binary-Swap method aiming to fully exploit the hybrid programming model. We obtained encouraging results from the performance evaluation conducted on the Todai Combined Cluster, a T2K Open Supercomputer at the University of Tokyo. The proposed improvements have also shown high potential for tackling the large-scale image compositing problem on leading-edge HPC systems, where an ever-increasing number of processing cores is involved. (A sketch of the basic Binary-Swap exchange pattern follows this entry.)
    Download PDF (503K)
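    A schedule-only Python sketch of the classic Binary-Swap exchange pattern that the paper builds on (the proposed hybrid MPI/shared-memory improvements are not reproduced here): at round k, rank r pairs with rank r XOR 2^k, sends away half of its current image region, and composites the half it keeps, so that after log2(P) rounds each rank owns a fully composited 1/P slice of the image.
      # Sketch of the classic Binary-Swap exchange pattern.  Regions are
      # expressed as fractions of the image height; nprocs must be a power of two.

      def binary_swap_schedule(rank, nprocs):
          """Yield (round, partner, kept_region) for one rank."""
          lo, hi = 0.0, 1.0          # region of the image this rank still owns
          k = 0
          while (1 << k) < nprocs:
              partner = rank ^ (1 << k)
              mid = (lo + hi) / 2.0
              if rank & (1 << k):    # upper partner keeps the upper half
                  lo = mid
              else:                  # lower partner keeps the lower half
                  hi = mid
              yield k, partner, (lo, hi)
              k += 1

      for rank in range(4):
          print(rank, list(binary_swap_schedule(rank, 4)))
      # After the last round, the four ranks own the disjoint quarters
      # [0,0.25), [0.25,0.5), [0.5,0.75), [0.75,1.0), which are then gathered.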
  • Eric M. Heien, Yoshiyuki Asai, Taishin Nomura, Kenichi Hagihara
    2009 Volume 4 Issue 4 Pages 789-801
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Recent work in biophysical science increasingly focuses on modeling and simulating human biophysical systems to better understand the human physiome. One program to generate such models is insilicoIDE. These models may consist of thousands or millions of components with complex relations. Simulations of such models can require millions of time steps and take hours or days to run on a single machine. To improve the speed of biophysical simulations generated by insilicoIDE, we propose techniques for augmenting the simulations to support parallel execution in an MPI-enabled environment. In this paper we discuss the methods involved in efficient parallelization of such simulations, including classification and identification of model component relationships and work division among multiple machines. We demonstrate the effectiveness of the augmented simulation code in a parallel computing environment by performing simulations of large scale neuron and cardiac models.
    Download PDF (936K)
  • Yoshiharu Kojima, Masahiko Sakai, Naoki Nishida, Keiichirou Kusakari, ...
    2009 Volume 4 Issue 4 Pages 802-814
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The reachability problem is, given an initial term, a goal term, and a term rewriting system (TRS), to decide whether the goal term is reachable from the initial term by the TRS. A term is shallow if each variable in the term occurs at depth 0 or 1. Innermost reduction is a strategy that rewrites innermost redexes, and context-sensitive reduction is a strategy in which rewritable positions are indicated by specifying arguments of function symbols. In this paper, we show that the reachability problem under context-sensitive innermost reduction is decidable for linear right-shallow TRSs. Our approach is based on the tree automata technique that is commonly used for analysis of reachability and related properties. We give a procedure to construct tree automata accepting the sets of terms reachable from a given term by context-sensitive innermost reduction of a given linear right-shallow TRS.
    Download PDF (631K)
  • Yao-Wen Chang, Zhe-Wei Jiang, Tung-Chieh Chen
    2009 Volume 4 Issue 4 Pages 815-836
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The placement problem is to place objects into a fixed die such that no objects overlap with each other and some cost metric (e.g., wirelength) is optimized. Placement is a major step in physical design that has been studied for several decades. Although it is a classical problem, many modern design challenges have reshaped this problem. As a result, the placement problem has attracted much attention recently, and many new algorithms have been developed to handle the emerging design challenges. Modern placement algorithms can be classified into three major categories: simulated annealing, min-cut, and analytical algorithms. According to the recent literature, analytical algorithms typically achieve the best placement quality for large-scale circuit designs. In this paper, therefore, we shall give a systematic and comprehensive survey on the essential issues in analytical placement. This survey starts by dissecting the basic structure of analytical placement. Then, various techniques applied as components of popular analytical placers are studied, and two leading placers are exemplified to show the composition of these techniques into a complete placer. Finally, we point out some research directions for future analytical placement.
    Download PDF (549K)
  • Hideki Takase, Hiroyuki Tomiyama, Hiroaki Takada
    2009 Volume 4 Issue 4 Pages 837-845
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Energy consumption has become one of the major concerns in modern embedded systems. Recently, memory subsystems have come to consume a large fraction of the total energy in embedded processors. This paper proposes partitioning and allocation approaches for scratch-pad memory in non-preemptive fixed-priority multi-task systems. We propose three approaches (spatial, temporal, and hybrid) that enable energy-efficient usage of the scratch-pad region and reduce the energy consumption of instruction memory. Each approach is formulated as an integer programming problem that simultaneously determines (1) the partitioning of the scratch-pad memory space among the tasks, and (2) the allocation of functions to the scratch-pad memory space for each task. Our formulations take the task periods into account for the purpose of energy minimization. The experimental results show that up to 47% energy reduction in the instruction memory subsystem can be achieved by the proposed approaches. (A toy integer-programming sketch follows this entry.)
    Download PDF (498K)
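    A toy 0/1 integer program in the spirit of the formulation described above, written with the PuLP library as one possible tool: choose which functions to place in a limited scratch-pad memory so that instruction-fetch energy is minimized. All sizes and energy numbers are invented, and the paper's per-task partitioning and period weighting are omitted.
      # Toy 0/1 ILP: allocate functions to a limited scratch-pad memory (SPM)
      # to minimize instruction-fetch energy.  Values are invented.
      from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

      # function -> (code size in bytes, fetch energy from flash, fetch energy from SPM)
      funcs = {
          "fft":     (2048, 9.0, 2.1),
          "filter":  (1024, 6.5, 1.4),
          "control": ( 512, 1.8, 0.5),
          "ui":      (4096, 3.0, 1.1),
      }
      SPM_CAPACITY = 4096  # bytes

      prob = LpProblem("spm_allocation", LpMinimize)
      in_spm = LpVariable.dicts("in_spm", funcs.keys(), cat=LpBinary)

      # Objective: each function costs e_flash if it stays in flash,
      # e_spm if it is moved to the SPM.
      prob += lpSum(e_flash + (e_spm - e_flash) * in_spm[f]
                    for f, (_, e_flash, e_spm) in funcs.items())

      # Capacity constraint: allocated functions must fit in the SPM.
      prob += lpSum(size * in_spm[f] for f, (size, _, _) in funcs.items()) <= SPM_CAPACITY

      prob.solve()   # uses the CBC solver bundled with PuLP by default
      print(LpStatus[prob.status], {f: int(in_spm[f].value()) for f in funcs})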
  • Keisuke Nakano, Sebastian Maneth
    2009 Volume 4 Issue 4 Pages 846-856
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Macro tree transducers are a classical formal model for structural-recursive tree transformation with accumulative parameters. They have recently been applied to model XML transformations and queries. Typechecking a tree transformation means checking whether all valid input trees are transformed into valid output trees, for the given regular tree languages of input and output trees. Typechecking macro tree transducers is generally based on inverse type inference, because of the advantageous property that inverse transformations effectively preserve regular tree languages. It is known that the time complexity of typechecking an n-fold composition of macro tree transducers is non-elementary. The cost of typechecking can be reduced if transducers in the composition have special properties, such as being deterministic or total, or having no accumulative parameters. In this paper, the impact of such properties on the cost of typechecking is investigated. Reductions in cost are achieved by applying composition and decomposition constructions to tree transducers. Even though these constructions are well-known, they have not yet been analyzed with respect to the precise sizes of the transducers involved. The results can directly be applied to typechecking XML transformations, because type formalisms for XML are captured by regular tree languages.
    Download PDF (293K)
  • Takashi Yokota, Kanemitsu Ootsu, Takanobu Baba
    2009 Volume 4 Issue 4 Pages 857-869
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper addresses a quantitative evaluation methodology for interconnection networks. In the conventional evaluation method, performance curves are drawn from a series of simulation runs, and networks are discussed by comparing the shapes of the performance curves. We present the Ramp Load Method, which does not require repetitive simulation runs and produces continuous performance curves. Based on these continuous curves, we give a formal definition of the critical load ratio. Furthermore, we introduce a feature quantity that represents both throughput and average latency, and propose a new measure called the Network Performance Measure. Through detailed evaluation and some application examples, the effectiveness of the proposed evaluation methodology is confirmed.
    Download PDF (1081K)
  • Tetsuya Yoshida, Hiroshi Yamada, Kenji Kono
    2009 Volume 4 Issue 4 Pages 870-884
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Time-sensitive software is widely used in embedded systems such as mobile phones and portable video players. Embedded software is usually developed in parallel with the real hardware devices due to tight time-to-market constraints, and therefore it is quite difficult to verify the sensory responsiveness of time-sensitive applications such as GUIs and multimedia players. To verify responsiveness, it is useful for developers to observe the software's behavior in a test environment in which the software runs in real time rather than in simulation time. To provide such a test environment, we need a mechanism that slows down the CPU speed of test machines, because test machines are usually equipped with high-end desktop CPUs. A CPU slowdown mechanism needs to provide various CPU speeds, keep the CPU speed constant in the short term, and be sensitive to hardware interrupts. Although there are several ways of slowing down CPU speed, they do not satisfy all of the above requirements. This paper describes FoxyLargo, which smoothly slows down CPU speed with a virtual machine monitor (VMM). FoxyLargo carefully schedules a virtual machine (VM) to provide the illusion, from the viewpoint of time-sensitive applications, that the VM is running slowly. For this purpose, FoxyLargo combines three techniques: 1) fine-grained, 2) interrupt-sensitive, and 3) clock-tick-based VM scheduling. We applied our techniques to the Xen VMM and conducted three experiments. The experimental results show that FoxyLargo adequately meets all of the above requirements. We also successfully reproduced the decoding behavior of an MPEG player, which demonstrates that FoxyLargo can reproduce the behavior of real applications. (A simple duty-cycle sketch of the underlying idea follows this entry.)
    Download PDF (1164K)
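    A simple duty-cycle calculation illustrating the basic idea of slowing a VM down by scheduling (this is not FoxyLargo's implementation): within each short period the VM runs for a fraction of the period and is paused for the rest, so time-sensitive guest code observes an apparently slower CPU. Shorter periods keep the short-term speed more constant, while interrupts arriving during a pause must still be delivered promptly, which is what the interrupt-sensitive and clock-tick-based parts of such a scheduler address.
      # Not FoxyLargo's implementation -- just the basic duty-cycle idea behind
      # slowing a VM down by scheduling: within each short period, run the VM
      # for `target_ratio` of the period and keep it paused for the rest.

      def slowdown_slices(target_ratio, period_us=1000):
          """Return (run_us, pause_us) per scheduling period for a target speed ratio."""
          if not 0.0 < target_ratio <= 1.0:
              raise ValueError("target_ratio must be in (0, 1]")
          run_us = period_us * target_ratio
          return run_us, period_us - run_us

      # Emulate a CPU at 30% speed with a 1 ms scheduling period:
      run, pause = slowdown_slices(0.30)
      print(f"run {run:.0f} us, pause {pause:.0f} us out of every 1000 us")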
  • Yuta Ashida, Tomonobu Ozaki, Takenao Ohkawa
    2009 Volume 4 Issue 4 Pages 885-894
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    A comparative analysis of organisms based on metabolic pathways provides important information about functions within organisms. In this paper, we discuss the problem of comparing organisms using partial metabolic structures that contain many biological characteristics, and we propose a pathway comparison method based on elementary flux modes (EFMs), the minimal metabolic pathways that satisfy a steady state. By extracting elementary flux modes, we obtain biologically significant metabolic substructures. To compare metabolic pathways based on EFMs, we propose a new pseudo-alignment method with a penalty based on the importance of enzymes. The distance among organisms can then be calculated based on the pseudo-alignment of EFMs. To confirm its effectiveness, we apply the proposed method to pathway datasets from 38 organisms. We successfully reconstructed the “three domain theory” from the aspect of biological function. Moreover, we evaluated the results in terms of the accuracy of organism classification by biological function and confirmed that the obtained classification was deeply related to habitat characteristics such as aerobic or anaerobic lifestyles.
    Download PDF (685K)
  • Kazunori Miyanishi, Tomonobu Ozaki, Takenao Ohkawa
    2009 Volume 4 Issue 4 Pages 895-902
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    As the number of documents about protein structural analysis increases, a method for automatically identifying protein names in them is required. However, the accuracy of identification is not high if the training data set is not large enough. We consider a method to extend the training data set for machine-learning-based identification using an available corpus. Such a corpus usually consists of documents about a certain kind of organism species, and documents about different organism species tend to have different vocabularies. Therefore, depending on the target document or corpus, simply using a corpus as a training data set is not effective for accurate identification. In order to improve the accuracy, we propose a method that selects sentences with a positive effect on identification and extends the training data set with the selected sentences. In the proposed method, a portion of a set of tagged sentences is used as a validation set, and the sentence selection process is iterated using the result of identifying protein names in the validation set as feedback. In the experiment, the accuracy of the proposed method was higher than that of any baseline method, i.e., a method without a corpus, with the whole corpus, or with a part of the corpus chosen at random. Thus, it was confirmed that the proposed method selects effective sentences.
    Download PDF (544K)
Media (processing) and Interaction
  • Kok-Meng Ong, Wataru Kameyama
    2009 Volume 4 Issue 4 Pages 903-912
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This study addresses the challenge of analyzing affective video content. The affective content of a given video is defined as the intensity and the type of emotion that arise in a viewer while watching that video. In this study, human emotion was monitored by capturing viewers' pupil sizes and gazing points while they were watching the video. On the basis of the measurement values, four features were extracted (namely cumulative pupil response (CPR), frequency component (FC), modified bivariate contour ellipse area (mBVCEA) and Gini coefficient). Using principal component analysis, we have found that two key features, namely the CPR and FC, contribute to the majority of variance in the data. By utilizing the key features, the affective content was identified and could be used in classifying the video shots into their respective scenes. An average classification accuracy of 71.89% was achieved for three basic emotions, with the individual maximum classification accuracy at 89.06%. The development in this study serves as the first step in automating personalized video content analysis on the basis of human emotion.
    Download PDF (15330K)
  • Natsumi Kusumoto, Shinsaku Hiura, Kosuke Sato
    2009 Volume 4 Issue 4 Pages 913-921
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Exaggerated defocus cannot be achieved with an ordinary compact digital camera because of its tiny sensor size, so taking pictures that draw the attention of a viewer to the subject is hard. Many methods are available for controlling the focus and defocus of previously taken pictures. However, most of these methods require custom-built equipment such as a camera array to take pictures. Therefore, in this paper, we describe a method for creating images focused at any depth with an arbitrarily blurred background from a set of images taken by a handheld compact digital camera that is moved at random. Our method can produce various aesthetic blurs by changing the size, shape, or density of the blur kernel. In addition, we demonstrate the potential of our method through a subjective evaluation of blurred images created by our system.
    Download PDF (13538K)
  • Yuxin Wang, Keizo Oyama
    2009 Volume 4 Issue 4 Pages 922-936
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    We propose a web page classification method that is suitable for building web page collections and show its effectiveness through experiments. First, we describe a model that represents the structure of the group of pages surrounding a target page, taking link relations and directory hierarchy relations into consideration, and a method for extracting features based on the model. The method is tested through classification experiments on two data sets, using a support vector machine (SVM) as the classification algorithm, and its effectiveness is confirmed through comparison with a baseline and with the results of previous studies. The contribution of each part of the surrounding pages is also analyzed. Next, we test the method's performance over the whole recall-precision range and find that it is superior in the high-recall range. Finally, we estimate the performance of a three-grade classifier composed with the method and the amount of manual assessment required to build a web page collection. (A minimal sketch of the overall classification setup follows this entry.)
    Download PDF (2998K)
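    A minimal scikit-learn sketch of the overall setup (not the paper's feature model): each example concatenates a feature block for the target page with blocks aggregated from its surrounding pages (link-related and directory-related), and an SVM is trained on the result. The data are random toy values, so the reported accuracy is near chance; the point is only the shape of the pipeline.
      # Minimal sketch: concatenate target-page features with features
      # aggregated from surrounding pages, then train an SVM.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_pages, d = 200, 20                                 # toy sizes

      target_feats    = rng.normal(size=(n_pages, d))      # features of the page itself
      link_feats      = rng.normal(size=(n_pages, d))      # aggregated over linked pages
      directory_feats = rng.normal(size=(n_pages, d))      # aggregated over same-directory pages
      labels = rng.integers(0, 2, size=n_pages)            # relevant to the collection or not

      X = np.hstack([target_feats, link_feats, directory_feats])
      clf = SVC(kernel="linear", C=1.0)
      print("CV accuracy on toy data:", cross_val_score(clf, X, labels, cv=5).mean())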
  • Sudipta Kundu, Sorin Lerner, Rajesh Gupta
    2009 Volume 4 Issue 4 Pages 937-950
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The growth in size and heterogeneity of System-on-Chip (SOC) designs makes their design process from initial specification to IC implementation complex. System-level design methods seek to combat this complexity by shifting an increasing design burden to high-level languages such as SystemC and SystemVerilog. Such languages not only make a design easier to describe using high-level abstractions, but also provide a path for systematic implementation through refinement and elaboration of such descriptions. In principle, this can enable greater exploration of design alternatives and thus better design optimization than is possible using lower-level design methods. To achieve these goals, however, verification capabilities that validate designs at higher levels, as well as their equivalence with lower-level implementations, are crucially needed. To the extent possible given the large space of design alternatives, such validation must be formal to ensure that the design and important properties are provably correct against various implementation choices. In this paper, we present a survey of high-level verification techniques that are used for both verification and validation of high-level designs, that is, designs modeled using high-level programming languages. These techniques include those based on model checking, theorem proving, and approaches that integrate a combination of these methods. The high-level verification approaches address the verification of properties as well as equivalence checking with refined implementations. We also focus on techniques that use information from the synthesis process for improved validation. Finally, we conclude with a discussion and future research directions in this area.
    Download PDF (406K)
  • Yasuhiro Mukaigawa, Kazuya Suzuki, Yasushi Yagi
    2009 Volume 4 Issue 4 Pages 951-961
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The scattering effect of incident light, called subsurface scattering, occurs under the surface of translucent objects. In this paper, we present a method for analyzing the subsurface scattering from a single image taken in a known arbitrary illumination environment. In our method, diffuse subsurface reflectance in the subsurface scattering model can be linearly solved by quantizing the distances between each pair of surface points. Then, the dipole approximation is fit to the diffuse subsurface reflectance. By applying our method to real images of translucent objects, we confirm that the parameters of subsurface scattering can be computed for different materials.
    Download PDF (662K)
  • Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Shohei Hido, Jun Se ...
    2009 Volume 4 Issue 4 Pages 962-987
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    In statistical pattern recognition, it is important to avoid density estimation, since density estimation is often more difficult than pattern recognition itself. Following this idea, known as Vapnik's principle, a statistical data processing framework that employs the ratio of two probability density functions has been developed recently and is attracting a lot of attention in the machine learning and data mining communities. The purpose of this paper is to introduce to the computer vision community recent advances in density ratio estimation methods and their usage in various statistical data processing tasks such as non-stationarity adaptation, outlier detection, feature selection, and independent component analysis. (A simple illustration of one density-ratio estimator follows this entry.)
    Download PDF (849K)
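    One simple density-ratio estimator, shown only as an illustration and not necessarily among the methods surveyed in the paper: train a probabilistic classifier to separate samples from the numerator and denominator distributions and convert its posterior into the ratio w(x) = p_nu(x)/p_de(x), which is the quantity used for tasks such as covariate-shift adaptation and outlier scoring.
      # Density ratio w(x) = p_nu(x) / p_de(x) via probabilistic classification.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      x_de = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))   # denominator sample
      x_nu = rng.normal(loc=0.5, scale=1.0, size=(1000, 1))   # numerator sample

      X = np.vstack([x_de, x_nu])
      y = np.concatenate([np.zeros(len(x_de)), np.ones(len(x_nu))])
      clf = LogisticRegression().fit(X, y)

      def density_ratio(x):
          """Estimate p_nu(x)/p_de(x) from the classifier posterior."""
          p = clf.predict_proba(np.atleast_2d(x))[:, 1]
          prior_correction = len(x_de) / len(x_nu)             # accounts for unequal sample sizes
          return prior_correction * p / (1.0 - p)

      # True ratio for these Gaussians is exp(0.5*x - 0.125); compare at a few points:
      for x in (-1.0, 0.0, 1.0):
          print(x, float(density_ratio(x)), float(np.exp(0.5 * x - 0.125)))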
  • Bo Zheng, Ryo Ishikawa, Takeshi Oishi, Jun Takamatsu, Katsushi Ikeuchi
    2009 Volume 4 Issue 4 Pages 988-998
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper presents a fast registration method based on solving an energy minimization problem derived by implicit polynomials (IPs). Once a target object is encoded by an IP, it will be driven fast towards a corresponding source object along the IP's gradient flow without using point-wise correspondences. This registration process is accelerated by a new IP transformation method. Instead of the time-consuming transformation to a large discrete data set, the new method can transform the polynomial coefficients to maintain the same Euclidean transformation. Its computational efficiency enables us to improve a new application for real-time Ultrasound (US) pose estimation. The reported experimental results demonstrate the capabilities of our method in overcoming the limitations of a noisy, unconstrained, and freehand US image, resulting in fast and robust registration.
    Download PDF (1780K)
  • Xu Qiao, Rui Xu, Yen-Wei Chen, Takanori Igarashi, Keisuke Nakao, Akio ...
    2009 Volume 4 Issue 4 Pages 999-1009
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper introduces a framework called generalized N-dimensional principal component analysis (GND-PCA) for statistical appearance modeling of facial images with multiple modes, including different people, different viewpoints, and different illumination conditions. Facial images with multiple modes can be considered high-dimensional data, and GND-PCA can represent such high-order data more efficiently. We conduct extensive experiments on the MaVIC Database (KAO-Ritsumeikan Multi-angle View, Illumination and Cosmetic Facial Database) to evaluate the effectiveness of the proposed algorithm and compare it with conventional ND-PCA in terms of reconstruction error. The results indicate that feature extraction is computationally more efficient with GND-PCA than with PCA and ND-PCA.
    Download PDF (1013K)
  • Shohei Nobuhara, Yoshiyuki Tsuda, Iku Ohama, Takashi Matsuyama
    2009 Volume 4 Issue 4 Pages 1010-1027
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper presents a novel approach for simultaneous silhouette extraction from multi-viewpoint images. The main contribution of this paper is a new algorithm for 1) 3D context aware error detection and correction of 2D multi-viewpoint silhouette extraction and 2) 3D context aware classification of cast shadow regions. Our method takes both monocular image segmentation and background subtraction of each viewpoint as its inputs, but does not assume they are correct. Inaccurate segmentation and background subtraction are corrected through our iterative method based on inter-viewpoint checking. Some experiments quantitatively demonstrate advantages against previous approaches.
    Download PDF (2425K)
  • Tetsuya Sakai, Noriko Kando, Hideki Shima, Chuan-Jie Lin, Ruihua Song, ...
    2009 Volume 4 Issue 4 Pages 1028-1033
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    We consider the problem of ranking information retrieval systems without relevance assessments in the context of collaborative evaluation forums such as NTCIR and TREC. Our short-term goal is to provide the NTCIR participants with a “system ranking forecast” prior to conducting manual relevance assessments, thereby reducing researchers' “idle time” and accelerating research. The long-term goal is to semi-automate the repeated evaluation of search engines. Our experiments using the NTCIR-7 ACLIA IR4QA test collections show that pseudo-system rankings based on a simple method are highly correlated with the “true” rankings. Encouraged by this positive finding, we plan to release system ranking forecasts to participants of the next round of IR4QA at NTCIR-8.
    Download PDF (908K)
  • Yasunobu Nohara, Sozo Inoue
    2009 Volume 4 Issue 4 Pages 1034-1039
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    In this paper, we propose a secure identification scheme for RFID that is efficient in time and memory and also supports efficient updates of the pre-computed values on the server side. Although RFID (Radio Frequency IDentification) is becoming popular, a privacy problem remains: an adversary can trace users' behavior by linking identification logs obtained by legitimate or adversarial readers. For this problem, a hash-chain scheme has been proposed as a secure identification method for low-cost RFID tags, and its long identification time has been reduced by Avoine et al. using pre-computation on the server side. However, Avoine's scheme uses static pre-computation, so the pre-computed values include ones that have already been used and are no longer needed. In this paper, we optimize the lookup of pre-computed values using d-left hashing and provide efficient updates of the pre-computed values. We also give reasonable analytical results for memory usage and for pre-computation, identification, and update times. (A generic d-left hashing sketch follows this entry.)
    Download PDF (169K)
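    A generic d-left hashing sketch (not the paper's exact data structure): the table is split into d sub-tables, an item goes into the least-loaded of its d candidate buckets with ties broken toward the left, and lookups and deletions (used when updating pre-computed values) touch only those d buckets.
      # Generic d-left hash table: insert into the least-loaded of d candidate
      # buckets (leftmost on ties); lookup and delete probe only those d buckets.
      import hashlib

      class DLeftHashTable:
          def __init__(self, d=2, buckets_per_subtable=1024):
              self.d = d
              self.m = buckets_per_subtable
              self.tables = [[[] for _ in range(self.m)] for _ in range(d)]

          def _bucket_indices(self, key):
              digest = hashlib.sha256(key.encode()).digest()
              # carve d independent bucket indices out of one digest
              return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                      for i in range(self.d)]

          def insert(self, key, value):
              idxs = self._bucket_indices(key)
              loads = [len(self.tables[i][idxs[i]]) for i in range(self.d)]
              i = loads.index(min(loads))          # least loaded, leftmost on ties
              self.tables[i][idxs[i]].append((key, value))

          def lookup(self, key):
              for i, idx in enumerate(self._bucket_indices(key)):
                  for k, v in self.tables[i][idx]:
                      if k == key:
                          return v
              return None

          def delete(self, key):
              for i, idx in enumerate(self._bucket_indices(key)):
                  self.tables[i][idx] = [(k, v) for k, v in self.tables[i][idx] if k != key]

      t = DLeftHashTable()
      t.insert("hash_chain_output_42", "tag_42")
      print(t.lookup("hash_chain_output_42"))
      t.delete("hash_chain_output_42")
      print(t.lookup("hash_chain_output_42"))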
  • Takayuki Ishida, Kazumi Yamawaki, Hideki Noda, Michiharu Niimi
    2009 Volume 4 Issue 4 Pages 1040-1045
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This paper presents a modified QIM-JPEG2000 steganography that improves on a previous JPEG2000 steganography based on quantization index modulation (QIM). It does not increase the post-embedding file size and produces less degraded stego images. Steganalysis experiments show that the modified QIM-JPEG2000 is more secure than the previous QIM-JPEG2000 and is the most secure among the major steganographic methods proposed for JPEG2000 so far. (A sketch of basic QIM embedding follows this entry.)
    Download PDF (237K)
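    The basic QIM embedding rule on a single coefficient, shown only to illustrate the mechanism that QIM-JPEG2000 builds on; this is not the paper's modified scheme. The message bit selects one of two quantizers whose reconstruction points are offset by the step delta, and extraction recovers the bit from the parity of the nearest multiple of delta.
      # Basic quantization index modulation (QIM) on a single coefficient.

      def qim_embed(coeff, bit, delta):
          """Quantize `coeff` with the quantizer selected by `bit` (step 2*delta, offset bit*delta)."""
          return 2 * delta * round((coeff - bit * delta) / (2 * delta)) + bit * delta

      def qim_extract(coeff, delta):
          """Recover the bit as the parity of the nearest multiple of delta."""
          return int(round(coeff / delta)) % 2

      delta = 4.0
      for original, bit in [(10.3, 0), (10.3, 1), (-7.6, 1)]:
          stego = qim_embed(original, bit, delta)
          print(original, bit, "->", stego, "extracted:", qim_extract(stego, delta))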
Computer Networks and Broadcasting
  • Kazuki Yoneyama
    2009 Volume 4 Issue 4 Pages 1046-1059
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    In ACNS'06, Cliff et al. proposed password-based server-aided key exchange (PSAKE) as a password-based authenticated key exchange in the three-party setting (3-party PAKE), in which two clients with different passwords exchange a session key with the help of their corresponding server. Although they also studied a strong security definition for 3-party PAKE, their security model is not strong enough because there are desirable security properties that it cannot capture. In this paper, we define a new formal security model for 3-party PAKE which is stronger than the previous model. Our model captures all known desirable security requirements of 3-party PAKE, such as resistance to key-compromise impersonation, to the leakage of ephemeral private keys of servers, and to undetectable on-line dictionary attacks. We also propose a new scheme, as an improvement of PSAKE, with the optimal number of rounds for a client, which is secure in the sense of our model.
    Download PDF (331K)
  • Keita Emura, Atsuko Miyaji, Kazumasa Omote
    2009 Volume 4 Issue 4 Pages 1060-1075
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Recently, cryptographic schemes based on users' attributes have been proposed. An Attribute-Based Group Signature (ABGS) scheme is a kind of group signature scheme in which a user with a set of attributes can anonymously prove whether she has these attributes or not. An access tree is applied to express the relationships among attributes. However, previous schemes did not provide a way to change an access tree. In this paper, we propose a dynamic ABGS scheme in which the access tree can be changed. Our ABGS is efficient in that the attribute certificates previously issued to each user do not need to be re-issued. In addition, the number of pairing computations in both signing and verification does not depend on the number of attributes. Finally, we discuss how our ABGS can be applied to an anonymous survey for the collection of attribute statistics.
    Download PDF (494K)
  • Keisuke Takemori, Masahiko Fujinaga, Toshiya Sayama, Masakatsu Nishiga ...
    2009 Volume 4 Issue 4 Pages 1076-1085
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Recently, source IP spoofing attacks have become a critical issue for the Internet. These attacks are considered to be sent from bot-infected hosts. There has been active research on IP traceback technologies, but traceback from an end victim host to an end spoofing host has not yet been achieved, due to the lack of traceback probes installed on each routing path. Alternative probes should be employed in order to reduce the installation cost. In this research, we propose an IP traceback scheme against bots that uses the DNS logs of existing servers. Many types of bots retrieve the IP addresses of victim hosts from fully qualified domain names (FQDNs) at the beginning of an attack. The proposed scheme checks DNS logs from the destination back toward the source in order to extract the actual IP addresses of bot-infected hosts. We also propose a scheme to ascertain the reliability of traceback results and a method to distinguish spoofing from non-spoofing attacks. We collect bot communication patterns to confirm that DNS logs can serve as reasonable probes and achieve a high traceback success rate. (A toy illustration of the underlying observation follows this entry.)
    Download PDF (914K)
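    A toy illustration of the observation the scheme relies on: many bots resolve the victim's FQDN shortly before attacking, so clients that queried that name just before the attack window are traceback candidates. The log format, field names, and addresses below are invented.
      # Toy DNS-log filter: find clients that resolved the victim's FQDN
      # within a window before the attack started.
      from datetime import datetime, timedelta

      dns_log = [  # (timestamp, client_ip, queried_fqdn)
          ("2009-10-01 12:00:01", "192.0.2.10",   "www.victim.example"),
          ("2009-10-01 12:00:02", "192.0.2.77",   "cdn.other.example"),
          ("2009-10-01 12:00:03", "198.51.100.5", "www.victim.example"),
          ("2009-10-01 11:00:00", "203.0.113.9",  "www.victim.example"),  # too early
      ]

      def traceback_candidates(log, victim_fqdn, attack_start, window_s=120):
          start = datetime.strptime(attack_start, "%Y-%m-%d %H:%M:%S")
          earliest = start - timedelta(seconds=window_s)
          hits = set()
          for ts, client, fqdn in log:
              t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
              if fqdn == victim_fqdn and earliest <= t <= start:
                  hits.add(client)
          return sorted(hits)

      print(traceback_candidates(dns_log, "www.victim.example", "2009-10-01 12:01:00"))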
  • Tokuya Inagaki, Susumu Ishihara
    2009 Volume 4 Issue 4 Pages 1086-1097
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    The sensor nodes of wireless sensor networks are placed in observation areas and transmit data to the observer using multi-hop communication between nodes. Because these nodes are small and have a limited power supply, they must save power in order to prolong the network's lifetime. We propose HGAF (Hierarchical Geographic Adaptive Fidelity), which adds a layered structure to GAF (Geographic Adaptive Fidelity), a power-saving technique that uses location information in sensor networks. Simulation results reveal that HGAF outperforms GAF in terms of the number of surviving nodes and the packet delivery ratio when the node density is high. The lifetime of dense, randomly distributed sensor networks with HGAF is about twice as long as that with GAF. (A sketch of GAF's virtual-grid step follows this entry.)
    Download PDF (842K)
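    A Python sketch of GAF's virtual-grid step, the base scheme that HGAF layers a hierarchy on: with cell side r/sqrt(5), any node in one cell can reach any node in a horizontally or vertically adjacent cell, so keeping a single coordinator awake per cell preserves routing connectivity. The energy-based election rule below is one common choice, not necessarily the paper's.
      # GAF-style virtual grid: map nodes to cells of side r/sqrt(5) and keep
      # one coordinator awake per cell (here, the node with the most energy).
      import math
      from collections import defaultdict

      def cell_of(x, y, radio_range):
          side = radio_range / math.sqrt(5.0)
          return (int(x // side), int(y // side))

      def elect_coordinators(nodes, radio_range):
          """nodes: dict node_id -> (x, y, residual_energy).  One coordinator per cell."""
          cells = defaultdict(list)
          for node_id, (x, y, energy) in nodes.items():
              cells[cell_of(x, y, radio_range)].append((energy, node_id))
          return {cell: max(members)[1] for cell, members in cells.items()}

      nodes = {
          "n1": (1.0, 1.0, 0.9), "n2": (2.0, 1.5, 0.4),   # same cell as n1
          "n3": (8.0, 2.0, 0.7), "n4": (9.5, 9.0, 0.8),
      }
      print(elect_coordinators(nodes, radio_range=10.0))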
Information Systems and Applications
  • Miguel Miranda Miranda, Kiyoshi Kiyokawa, Haruo Takemura
    2009 Volume 4 Issue 4 Pages 1098-1103
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Interaction in an immersive virtual environment is limited by imperfect depth cues, unstable hand placement in midair, and so on. In this study, we summarize the design and implementation of a magic lens interface within an immersive virtual environment using a handheld device such as a personal digital assistant (PDA) or an ultra mobile personal computer (UMPC). Our interface simplifies the selection and manipulation processes using image-based interaction techniques. An empirical study shows the effectiveness of the proposed interface for selecting 3D objects, especially when the target is small or in motion.
    Download PDF (10057K)
  • Gregory Hazelbeck, Hiroaki Saito
    2009 Volume 4 Issue 4 Pages 1104-1128
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    This study presents an initial version of an e-learning system that assists learners of Japanese with their study of vocabulary. The system uses sentences from a corpus to generate context-based exercises. The sentences used in the context-based exercises are selected using a readability formula developed for this system. We used the system with two different types of corpora: a web corpus that we constructed for this system and a sample of the recently released Balanced Corpus of Contemporary Written Japanese (BCCWJ). We compared the two corpora and found that, while the BCCWJ has better word coverage, our web corpus still covered a majority (96.1%) of the target vocabulary words even though it is relatively small. Evaluation of the system showed that the readability formula performs well, especially when sentences contain the system's target set of vocabulary words. A group of learners of Japanese were also asked to use the system and then fill out a survey. The survey results indicate that the learners found the system easy to use, and most of them expressed a desire to use this type of system when studying vocabulary.
    Download PDF (439K)
  • Jun Hatori, Yusuke Miyao, Jun'ichi Tsujii
    2009 Volume 4 Issue 4 Pages 1129-1155
    Published: 2009
    Released on J-STAGE: December 15, 2009
    JOURNAL FREE ACCESS
    Traditionally, many researchers have addressed word sense disambiguation (WSD) as an independent classification problem for each word in a sentence. However, the problem with such approaches is that they disregard the interdependencies of word senses. Additionally, since they construct an individual sense classifier for each word, their applicability is limited to the word senses for which training instances are available. In this paper, we propose a supervised WSD model based on the syntactic dependencies of word senses. In particular, we assume that strong dependencies exist between the sense of a syntactic head and those of its dependents. We model these dependencies with tree-structured conditional random fields (T-CRFs) and obtain the most appropriate assignment of senses optimized over the sentence. Furthermore, we combine these sense dependencies with various coarse-grained sense tag sets, which are expected to relieve the data sparseness problem and enable our model to work even for words that do not appear in the training data. In experiments, we demonstrate the appropriateness of considering the syntactic dependencies of senses, as well as the improvements obtained by the use of coarse-grained tag sets. The performance of our model is shown to be comparable to that of state-of-the-art WSD systems. We also present an in-depth analysis of the effectiveness of the sense dependency features with intuitive examples.
    Download PDF (457K)