Information and Media Technologies
Online ISSN: 1881-0896
ISSN-L: 1881-0896
Volume 7, Issue 1
Displaying 1-46 of 46 articles from this issue
Computing
  • Jiongyao Ye, Hongfeng Ding, Yingtao Hu, Takahiro Watanabe
    2012 Volume 7 Issue 1 Pages 1-11
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Modern embedded processors commonly use a set-associative scheme to reduce cache misses. However, a conventional set-associative cache is wasteful of power because it probes all ways in parallel to reduce the access time, even though only the matching way is used; the energy spent accessing the other ways is wasted, and the share of such energy grows as cache associativity increases. Previous techniques, such as phased caches, way-prediction caches, and partial tag comparison, reduce the power consumption of set-associative caches by optimizing the cache access mode. However, these methods cannot adapt to program behavior because they use a single access mode throughout program execution. In this paper, we propose a behavior-based adaptive access mode for set-associative caches in embedded systems, which dynamically adjusts the access mode during program execution. First, a program is divided into several phases based on the principle that program behavior repeats. Then, an off-system pre-analysis determines the optimal access mode for each phase, so that during execution each phase employs its own optimal mode to meet the application's demand. Our approach requires little hardware overhead and delegates most of the work to software, so it is well suited to embedded processors. Simulations using SPEC 2000 show that our approach reduces power by roughly 76.95% for an instruction cache and 64.67% for a data cache, while performance degradation stays below 1%. An illustrative sketch of the phase-to-mode selection follows this entry.
    Download PDF (892K)
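    The following minimal sketch illustrates the pre-analysis idea under stated assumptions (the mode names, energy numbers, and phase statistics are illustrative, not the authors' implementation): for each program phase, estimate the expected energy of each candidate access mode from profiled statistics and record the cheapest mode in a phase table consulted at run time.
```python
# Sketch of behavior-based access-mode selection for a 4-way cache.
# Energy values are illustrative units, not measurements from the paper;
# a real selector would also bound the latency penalty of slower modes.

WAYS = 4

def expected_energy(mode, way_pred_accuracy):
    """Expected energy per cache access for one candidate access mode."""
    if mode == "parallel":          # probe all ways at once (conventional)
        return float(WAYS)
    if mode == "phased":            # probe all tags first, then one data way
        return 0.25 * WAYS + 1.0
    if mode == "way-predicted":     # probe predicted way; retry all on a miss
        return way_pred_accuracy * 1.0 + (1.0 - way_pred_accuracy) * (1.0 + WAYS)
    raise ValueError(mode)

def preanalyze(phase_stats):
    """Off-line pre-analysis: map each phase id to its cheapest access mode."""
    modes = ("parallel", "phased", "way-predicted")
    return {
        phase: min(modes, key=lambda m: expected_energy(m, acc))
        for phase, acc in phase_stats.items()
    }

# Profiled way-prediction accuracy per phase (hypothetical values).
phase_table = preanalyze({"init": 0.55, "loop": 0.97, "cleanup": 0.80})
print(phase_table)  # {'init': 'phased', 'loop': 'way-predicted', 'cleanup': 'way-predicted'}
```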
  • Norio Shiratori, Kenji Sugawara, Yusuke Manabe, Shigeru Fujita, Basabi ...
    2012 Volume 7 Issue 1 Pages 12-19
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    This work addresses a Symbiotic Computing (SC) based solution to the problem of information explosion. Symbiotic Computing was proposed to bridge the gap between the Real Space (RS) and the Digital Space (DS) by creating symbiotic relations between users in the RS and information resources, such as software and data, in the DS. SC is realized by adding a new axis, S/P computing (Social and Perceptual Computing), to advanced ubiquitous computing consisting of ambient and web computing. Here, a new framework of SC based on the Symbiotic Space (SS) and the Symbiotic Space Platform (SSP) has been designed to construct and maintain symbiotic relations for S/P computing in order to reduce the burden of information explosion. Finally, the feasibility of our proposal has been tested by a bench-top simulation, applying a logical model of Symbiotic Computing to a typical example of information explosion.
    Download PDF (1036K)
  • Gang Chen, Ke Chen, Dawei Jiang, Beng Chin Ooi, Lei Shi, Hoang Tam Vo, ...
    2012 Volume 7 Issue 1 Pages 20-31
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    With the unprecedented growth of data generated by mankind nowadays, it has become critical to develop efficient techniques for processing these massive data sets. To tackle such challenges, analytical data processing systems must be extremely efficient, scalable, and flexible, as well as economically effective. Recently, Hadoop, an open-source implementation of MapReduce, has gained interest as a promising big data processing system. Although Hadoop offers the desired flexibility and scalability, its performance has been noted to be suboptimal when it is used to process complex analytical tasks. This paper presents E3, an elastic and efficient execution engine for scalable data processing. E3 adopts a “middle” approach between MapReduce and Dryad: E3 has a simpler communication model than Dryad, yet it supports multi-stage jobs better than MapReduce. E3 avoids reprocessing intermediate results by adopting a stage-based evaluation strategy and collocating data and user-defined (map or reduce) functions into independent processing units for parallel execution. Furthermore, E3 supports block-level indexes and built-in functions for specifying and optimizing data processing flows. Benchmarking on an in-house cluster shows that E3 achieves significantly better performance than Hadoop; in other words, building an elastically scalable and efficient data processing system is possible.
    Download PDF (477K)
  • Zheng Liu, Jeffrey Xu Yu, Hong Cheng
    2012 Volume 7 Issue 1 Pages 32-43
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Graph patterns are able to represent the complex structural relations among objects in many applications in various domains. The objective of graph summarization is to obtain a concise representation of a single large graph G, which is interpretable and suitable for analysis. A good summary can reveal the hidden relationships between nodes in a graph. The key issue is how to construct a high-quality and representative super-graph, GS, in which a super-node summarizes a collection of nodes of G based on the similarity of their attribute values and neighborhood relationships, and a super-edge summarizes the edges between nodes in G that are represented by two different super-nodes in GS. We propose an entropy-based unified model for measuring the homogeneity of the super-graph. Since the best summary in terms of homogeneity can be too large to explore, we use the unified model to relax three summarization criteria and obtain an approximately homogeneous summary of reasonable size. We propose both agglomerative and divisive algorithms for approximate summarization, as well as pruning techniques and heuristics for both algorithms to save computation cost. Experimental results confirm that our approaches can efficiently generate high-quality summaries. A sketch of the entropy measure follows this entry.
    Download PDF (821K)
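    As a minimal sketch of the entropy-based homogeneity idea (the function names and the size-weighted aggregation are illustrative assumptions, not the authors' exact model), one can score a candidate grouping of nodes into super-nodes by the entropy of the attribute distribution inside each super-node: zero entropy means every node in a super-node agrees on the attribute.
```python
import math
from collections import Counter

def supernode_entropy(values):
    """Entropy (bits) of the attribute distribution inside one super-node;
    0 means the super-node is perfectly homogeneous."""
    counts = Counter(values)
    total = sum(counts.values())
    return sum(c / total * math.log2(total / c) for c in counts.values())

def grouping_score(grouping, attr):
    """Size-weighted entropy over all super-nodes (lower is better)."""
    n = sum(len(g) for g in grouping)
    return sum(
        len(g) / n * supernode_entropy([attr[v] for v in g]) for g in grouping
    )

attr = {"a": "blue", "b": "blue", "c": "red", "d": "red"}
print(grouping_score([{"a", "b"}, {"c", "d"}], attr))  # 0.0: homogeneous
print(grouping_score([{"a", "c"}, {"b", "d"}], attr))  # 1.0: mixed
```
    An agglomerative summarizer would greedily merge the pair of super-nodes whose merge increases this score the least; neighborhood similarity, omitted here for brevity, enters the real model in the same way.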
  • Kentaro Hara, Kenjiro Taura
    2012 Volume 7 Issue 1 Pages 44-58
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Threads share a single address space with each other, whereas a process has its own address space. Since the decision to share or not to share the address space depends on each data structure in the program, the choice of “a thread or a process” for the whole program is too “all-or-nothing.” With this motivation, this paper proposes the half-process, a process that partially shares its address space with other processes. This paper describes the design and the kernel-level implementation of the half-process and discusses its potential applicability to multi-thread programming with thread-unsafe libraries, intra-node communication in parallel programming frameworks, and transparent kernel-level thread migration. In particular, thread migration based on the half-process is the first work that achieves transparent kernel-level thread migration by solving the problem of sharing global variables between threads.
    Download PDF (752K)
  • Yanwei Xu, Yoshiharu Ishikawa, Jihong Guan
    2012 Volume 7 Issue 1 Pages 59-72
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Keyword search in relational databases has been widely studied in recent years because it requires users neither to master a structured query language nor to know the complex underlying database schemas. Most existing methods focus on answering snapshot keyword queries over static databases. In practice, however, databases are updated frequently, and users may have long-term interests in specific topics. To deal with such situations, it is necessary to build effective and efficient facilities in a database system to support continual keyword queries. In this paper, we propose an efficient method for answering continual keyword queries over relational databases. The proposed method consists of two core algorithms. The first computes a set of potential top-k results by evaluating the range of the future relevance score of every query result, and creates a light-weight state for each keyword query. The second uses these states to maintain the top-k results of keyword queries while the database is continually updated. Experimental results validate the effectiveness and efficiency of the proposed method. A sketch of such a per-query state follows this entry.
    Download PDF (539K)
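    The following is a minimal sketch of the per-query state idea under stated assumptions (the class name, the score upper bounds, and the pruning rule are illustrative, not the paper's algorithm): keep the current scores of all candidates whose best possible future score could still reach the top-k, so most updates touch only this small state instead of re-running the query.
```python
import heapq

class ContinualTopK:
    """Light-weight state for one continual keyword query (illustrative)."""

    def __init__(self, k):
        self.k = k
        self.score = {}   # candidate result id -> current relevance score
        self.upper = {}   # candidate result id -> upper bound on future score

    def kth_score(self):
        top = heapq.nlargest(self.k, self.score.values())
        return top[-1] if len(top) == self.k else float("-inf")

    def on_update(self, rid, score, upper):
        """Apply one database update that touches candidate `rid`."""
        self.score[rid], self.upper[rid] = score, upper
        # Prune candidates that can never re-enter the top-k.
        thresh = self.kth_score()
        for r in [r for r, u in self.upper.items() if u < thresh]:
            del self.score[r], self.upper[r]

    def topk(self):
        return heapq.nlargest(self.k, self.score.items(), key=lambda kv: kv[1])

q = ContinualTopK(k=2)
q.on_update("t1", 0.9, 0.9)
q.on_update("t2", 0.5, 0.7)
q.on_update("t3", 0.2, 0.3)   # upper bound 0.3 < 2nd-best score 0.5: pruned
print(q.topk())               # [('t1', 0.9), ('t2', 0.5)]
```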
  • Masahiro Yasugi, Tasuku Hiraishi, Seiji Umatani, Taiichi Yuasa
    2012 Volume 7 Issue 1 Pages 73-84
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Parallel programming/execution frameworks for many/multi-core platforms should support as many applications as possible. In general, work-stealing frameworks provide efficient load balancing even for irregular parallel applications. Unfortunately, naïve parallel programs that traverse graph-based data structures (e.g., for constructing spanning trees) cause stack overflow or unacceptable load imbalance. In this study, we develop parallel programs that perform probabilistically balanced divide-and-conquer graph traversals. We propose a programming technique for accumulating overflowed calls for the next iteration of repeated parallel stages. For an emerging backtracking-based work-stealing framework called “Tascell,” which features on-demand concurrency, we propose a programming technique for long-term exclusive use of workspaces, and we derive a similar technique for the Cilk framework.
    Download PDF (833K)
  • Kentaro Hara, Kenjiro Taura
    2012 Volume 7 Issue 1 Pages 85-98
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In order to improve the resource utilization of clusters and supercomputers, and thus deliver application results to users faster, it is essential for a job scheduler to be able to expand and shrink parallel computations flexibly. To enable such flexible job scheduling, the parallel computations themselves have to be reconfigurable. With this motivation, this paper proposes, implements, and evaluates DMI, a global-view-based PGAS framework that enables easy programming of reconfigurable, high-performance parallel iterative computations. DMI provides programming interfaces with which a programmer can express the reconfiguration easily with a global view. Our performance evaluations show that DMI can efficiently adapt the parallelism of long-running parallel iterative computations, such as a real-world finite element method and a large-scale iterative graph search, to dynamic increases and decreases of available resources through reconfiguration.
    Download PDF (1455K)
  • Hidehito Gomi
    2012 Volume 7 Issue 1 Pages 99-109
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    A policy provisioning framework is described that supports management of the lifecycle of personal information and its data-handling policies distributed beyond security domains. A model for creating data-handling policies that reflect the intentions of the system administrator and the privacy preferences of the data owner is explained. Algorithms for systematically propagating and integrating data-handling policies from system entities in different administrative domains are also presented. This framework enables data-handling policies to be properly deployed and enforced in a way that enhances security and privacy.
    Download PDF (459K)
  • Hiroshi Ishii, Qiang Ma, Masatoshi Yoshikawa
    2012 Volume 7 Issue 1 Pages 110-118
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We propose a novel method for the incremental construction of causal networks to clarify the relationships among news events. We propose the Topic-Event Causal (TEC) model as a causal network model, together with an incremental construction method based on it. In the TEC model, causal relations are expressed as a directed graph whose vertices represent events. A vertex contains structured keywords consisting of topic keywords and an SVO tuple. An SVO tuple, which consists of subject, verb, and object keywords, represents the details of the event. To obtain a chain of causal relations, vertices representing similar events need to be detected. We reduce the detection time by using the topic keywords to restrict the calculation to relevant topics, and we detect similar events at the concept level. We propose an identification method that disambiguates the senses of the keywords and introduce three semantic distance measures for comparing keywords. Our method detects vertices representing similar events more precisely than conventional methods. We carried out experiments to validate the proposed methods.
    Download PDF (1156K)
  • Naoki Yoshinaga, Masaru Kitsuregawa
    2012 Volume 7 Issue 1 Pages 119-128
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    This paper proposes a method that speeds up a classifier trained with many conjunctive features: combinations of (primitive) features. The key idea is to precompute, as partial results, the weights of primitive feature vectors that represent fundamental classification problems and appear frequently in the target task. A prefix tree (trie) compactly stores the primitive feature vectors with their weights, and it enables the classifier to find, for a given feature vector, its longest prefix feature vector whose weight has already been computed. Experimental results on base phrase chunking and dependency parsing demonstrate that our method sped up the SVM and LLM classifiers by a factor of 1.8 to 10.6. A sketch of the trie lookup follows this entry.
    Download PDF (439K)
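    Here is a minimal sketch of the trie idea under stated assumptions: features are sorted so that a common prefix is well defined, and the uncovered suffix is scored with primitive weights only (the real method also accounts for the remaining conjunctive weights).
```python
class TrieNode:
    """Node of a prefix tree over sorted primitive feature ids."""
    __slots__ = ("children", "weight")

    def __init__(self):
        self.children = {}
        self.weight = None   # precomputed classifier score of this prefix

def insert(root, feats, weight):
    """Store the precomputed weight of one frequent primitive feature vector."""
    node = root
    for f in feats:
        node = node.children.setdefault(f, TrieNode())
    node.weight = weight

def score(root, feats, w_prim):
    """Reuse the longest stored prefix, then add primitive weights for the rest."""
    node, partial, covered = root, 0.0, 0
    for i, f in enumerate(feats):
        node = node.children.get(f)
        if node is None:
            break
        if node.weight is not None:
            partial, covered = node.weight, i + 1
    return partial + sum(w_prim.get(f, 0.0) for f in feats[covered:])

root = TrieNode()
insert(root, (1, 4), 2.5)              # weight of vector (1, 4), conjunctions included
w_prim = {1: 1.0, 4: 1.2, 9: -0.3}
print(score(root, (1, 4, 9), w_prim))  # 2.5 + w_prim[9] = 2.2
```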
  • Satoshi Yoshida, Takashi Uemura, Takuya Kida, Tatsuya Asai, Seishi Oka ...
    2012 Volume 7 Issue 1 Pages 129-140
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We address the problem of improving variable-length-to-fixed-length codes (VF codes). A VF code, as we use the term here, is an encoding scheme that parses an input text into variable-length substrings and then assigns a fixed-length codeword to each parsed substring. VF codes have favourable properties for fast decoding and fast compressed pattern matching, but their compression ratio is worse than that of the latest compression methods. The compression ratio of a VF code depends on the parse tree used as a dictionary. To obtain a better compression ratio, we present several improved methods for constructing parse trees. All of them are heuristic solutions, since constructing the optimal parse tree is intractable. We compared our methods with previous VF codes and showed experimentally that their compression ratios reach the level of state-of-the-art compression methods. A sketch of VF encoding follows this entry.
    Download PDF (735K)
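    The sketch below illustrates plain VF coding, the baseline the paper improves on (the greedy longest-match parse and the bit-string output format are illustrative choices; the paper's contribution, better parse-tree construction, is not shown). The dictionary plays the role of the parse tree's leaves and must contain every single character so that parsing never fails.
```python
import math

def vf_encode(text, dictionary):
    """Parse `text` by greedy longest match against `dictionary` and emit a
    fixed-length binary codeword per parsed substring."""
    entries = sorted(dictionary)                 # deterministic codeword order
    code = {s: i for i, s in enumerate(entries)}
    width = max(1, math.ceil(math.log2(len(entries))))
    longest = max(len(s) for s in entries)
    out, i = [], 0
    while i < len(text):
        for n in range(min(longest, len(text) - i), 0, -1):
            piece = text[i:i + n]
            if piece in code:
                out.append(format(code[piece], f"0{width}b"))
                i += n
                break
    return "".join(out)

# 8 dictionary entries -> 3 bits per codeword; a richer parse tree captures
# longer substrings per codeword, hence a better compression ratio.
print(vf_encode("abracadabra", {"a", "b", "r", "c", "d", "ab", "ra", "abra"}))
```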
  • Satoshi Iwata, Kenji Kono
    2012 Volume 7 Issue 1 Pages 141-152
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Performance anomalies in web applications are becoming a serious problem, and the increasing complexity of modern web applications has made it much more difficult to identify their root causes. The first step in hunting for root causes is to narrow down the suspicious components that cause performance anomalies. However, even this is difficult when several performance anomalies occur simultaneously in a web application; we have to determine whether or not their root causes are the same. We propose a novel method, called performance anomaly clustering, that helps narrow down suspicious components by clustering anomalies based on their root causes: if two anomalies are clustered together, they are affected by the same root cause; otherwise, they are affected by different root causes. The key insight behind our method is that anomalous measurements that are negatively affected by the same root cause deviate similarly from standard measurements. We compute the similarity of deviations from the non-anomalous distribution of measurements and cluster anomalies based on this similarity. The results from case studies conducted using RUBiS, an auction prototype modeled after eBay.com, are encouraging. Our clustering method outputs clusters that are crucial in the search for root causes. Guided by the clustering results, we searched for components exclusively used by each cluster and successfully determined suspicious components, such as the Apache web server, Enterprise Beans, and methods in Enterprise Beans. The root causes we found included shortages of network connections, inadequate database indices, and incorrectly written SQL statements. A sketch of the clustering step follows this entry.
    Download PDF (617K)
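    A minimal sketch of the clustering step, under stated assumptions: z-score deviations and a greedy cosine-similarity grouping stand in for whatever deviation measure and clustering procedure the paper actually uses.
```python
import numpy as np

def deviation(anomalous, normal_samples):
    """How one anomalous measurement vector deviates, per metric, from the
    distribution of non-anomalous measurements (z-scores)."""
    mu = normal_samples.mean(axis=0)
    sd = normal_samples.std(axis=0) + 1e-9
    return (anomalous - mu) / sd

def cluster_by_root_cause(deviations, sim_threshold=0.9):
    """Greedily group anomalies whose deviation directions agree: anomalies
    caused by the same root cause should deviate similarly."""
    clusters = []
    for d in deviations:
        d = d / (np.linalg.norm(d) + 1e-9)
        for c in clusters:
            if float(d @ c[0]) >= sim_threshold:   # compare to cluster seed
                c.append(d)
                break
        else:
            clusters.append([d])
    return clusters

rng = np.random.default_rng(0)
normal = rng.normal(size=(100, 3))                  # non-anomalous measurements
a1 = deviation(np.array([9.0, 0.0, 0.0]), normal)   # deviates like a2
a2 = deviation(np.array([7.0, 0.2, 0.0]), normal)
a3 = deviation(np.array([0.0, 0.0, 8.0]), normal)   # a different root cause
print(len(cluster_by_root_cause([a1, a2, a3])))     # 2 clusters
```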
  • Yuan He, Hiroki Matsutani, Hiroshi Sasaki, Hiroshi Nakamura
    2012 Volume 7 Issue 1 Pages 153-160
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    The three-dimensional Network-on-Chip (3D NoC) is an emerging research topic exploring the network architecture of 3D ICs that stack several wafers or dies. As such topics are studied extensively, the negative impacts of 3D NoCs' vertical interconnects, namely their footprint sizes and the routability degradation they cause, are raising concerns. In our evaluation, we found that such vertical bandwidth limitations can degrade system performance dramatically, by up to 2.3×. Since these limitations come from physical design constraints, the only way to mitigate the performance degradation is to reduce the amount of on-chip communication data, especially data moving vertically. In this paper, therefore, we carry out a study of data compression on 3D NoC architectures with a comprehensive set of scientific workloads. First, we propose an adaptive data compression scheme for 3D NoCs that takes account of the vertical bandwidth limitation and data compressibility. Second, we evaluate our proposal on a 3D NoC platform and observe that compressibility-based adaptive compression is very useful against incompressible data, while location-based adaptive compression becomes more effective as the 3D NoC gains more layers. Third, we find that in bandwidth-limited situations, such as a CMP with 3D NoCs having multiple connected layers, adaptive data compression with location-based control, or with both compressibility- and location-based control, is very promising as the number of layers grows.
    Download PDF (1960K)
Media (processing) and Interaction
  • Yanlei Gu, Tomohiro Yendo, Mehrdad Panahpour Tehrani, Toshiaki Fujii, ...
    2012 Volume 7 Issue 1 Pages 161-169
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Traffic sign recognition systems can be used to assist drivers and improve road safety. Such a system is expected to recognize traffic signs at ever greater distances in order to give drivers as much warning as possible about the road conditions. A hybrid camera system is proposed in this paper with the goal of increasing the recognition distance compared to conventional systems. In this system, an active telephoto camera assists a wide-angle camera. Traffic sign detection and classification are processed separately on the images from the wide-angle camera and the telephoto camera, respectively. The telephoto image provides enough information for classification when the resolution of the traffic sign detected in the wide-angle image is too low. The experimental results demonstrate that the recognition distance of the proposed system is improved compared to conventional systems.
    Download PDF (2686K)
  • Mona Abo-El Dahb, Yao Zhou, Umair Farooq Siddiqi, Yoichi Shiraishi
    2012 Volume 7 Issue 1 Pages 170-180
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    The conventional steepest descent method in the back-propagation process of an artificial neural network (ANN) is replaced by the Simulated Evolution (SimE) algorithm. The resulting method, called SimE-ANN, is applied to landslide estimation. In the experimental results, the errors in the displacement and resistance of the piles with SimE-ANN are 50.2% and 28.0% smaller, respectively, than those of the conventional ANN, averaged over 10 data sets. However, the experimental results also show overtraining effects in SimE-ANN, so the appropriate selection of training data should be investigated in future work.
    Download PDF (854K)
  • Takahiro Shinozaki, Toshinao Iwaki, Shiqiao Du, Masakazu Sekijima, Sad ...
    2012 Volume 7 Issue 1 Pages 181-191
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Three-dimensional structure prediction of a molecule can be modeled as a minimum-energy search problem in a potential landscape. Popular ab initio structure prediction approaches based on this formalization are the Monte Carlo methods, represented by the Metropolis method. However, their prediction performance degrades for larger molecules such as proteins, since the search space is exponential in the number of atoms. In order to search the exponential space more efficiently, we propose a new method that models the potential landscape as a factor graph. The key ideas are slicing the factor graph based on the maximum distance between bonded atoms to convert it into a linearly structured graph, and using the max-sum search algorithm combined with sampling. The method, referred to as Slice Chain Max-Sum, has the advantage that the search is efficient because the graph is linear. Experiments are performed using polypeptides having 50 to 300 amino acid residues, and show that the proposed method is computationally more efficient than the Metropolis method for large molecules. A sketch of exact search on a chain follows this entry.
    Download PDF (384K)
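    The sketch below shows why a linear (chain) graph is the attractive target of the slicing step: on a chain, the max-sum message passing that is intractable on a general graph reduces to an exact Viterbi-style dynamic program. The decomposition of the energy into unary and pairwise tables is an illustrative stand-in for the sampled potential terms.
```python
import numpy as np

def chain_min_energy(unary, pairwise):
    """Exact minimization of E(x) = sum_i u_i(x_i) + sum_i p_i(x_i, x_{i+1})
    over a chain of n variables with k states each.
    unary: (n, k) array; pairwise: (n-1, k, k) array."""
    n, k = unary.shape
    back = np.zeros((n, k), dtype=int)
    cost = unary[0].copy()
    for i in range(1, n):
        cand = cost[:, None] + pairwise[i - 1]   # k x k transition costs
        back[i] = cand.argmin(axis=0)
        cost = cand.min(axis=0) + unary[i]
    states = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):                # trace the best path back
        states.append(int(back[i][states[-1]]))
    return states[::-1], float(cost.min())

rng = np.random.default_rng(1)
u, p = rng.random((5, 3)), rng.random((4, 3, 3))
print(chain_min_energy(u, p))   # optimal state sequence and its energy
```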
  • Marshall F. Tappen
    2012 Volume 7 Issue 1 Pages 192-205
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Low-level vision encompasses a wide variety of problems and solutions. Solutions to low-level problems can be broadly grouped according to how they propagate local information to global representations. Understanding these categorizations is useful because they offer guidance on how tools like machine learning can be incorporated into these systems.
    Download PDF (385K)
  • Yasuhiro Mukaigawa, Ramesh Raskar, Yasushi Yagi
    2012 Volume 7 Issue 1 Pages 206-217
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We propose a new method to analyze scattering light transport in homogeneous translucent media. The incident light undergoes multiple bounces in translucent media and produces a complex light field. Our method analyzes the light transport in two steps. First, single and multiple scattering are separated by projecting high-frequency stripe patterns. Then, the light field for each scattering bounce is recursively estimated based on a forward rendering process. Experimental results show that scattering light fields can be analyzed and visualized for each bounce. A sketch of the first step follows this entry.
    Download PDF (2003K)
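    For the first step, a commonly used formulation of high-frequency-pattern separation (in the style of Nayar et al.'s fast direct/global separation, assumed here as an illustration rather than the paper's exact procedure) is: under shifted stripe patterns that light half of the scene points, a lit pixel sees the direct term plus half the global term, and an unlit pixel sees only half the global term.
```python
import numpy as np

def separate_direct_global(images):
    """Separate per-pixel direct (here: single-scattering) and global
    (multiple-scattering) components from images captured under several
    shifted high-frequency stripe patterns with 50% of the pixels lit."""
    stack = np.stack(images)        # shape: (num_patterns, H, W)
    lmax = stack.max(axis=0)        # pixel lit in some pattern: direct + g/2
    lmin = stack.min(axis=0)        # pixel unlit in some pattern: g/2
    return lmax - lmin, 2.0 * lmin  # direct component, global component
```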
  • Atsushi Shimada, Satoshi Yoshinaga, Rin-ichiro Taniguchi
    2012 Volume 7 Issue 1 Pages 218-229
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    An adaptive background model plays an important role in object detection in scenes that include illumination changes, and an updating process for the background model is used to improve robustness against such changes. However, this process sometimes causes false negatives when a moving object stops in the observed scene: a paused object is gradually trained into the background, since the observed pixel values are used directly to update the model. In addition, the original background hidden by the paused object cannot be updated, so if the illumination changes behind the paused object, false positives occur when the object starts moving again. In this paper, we propose 1) a method to inhibit background training, to avoid the false-negative problem, and 2) a method to update the original background region occluded by a paused object, to avoid the false-positive problem. We use a probabilistic approach and a predictive approach to the background model to solve these problems. The main contribution of this paper is that we can keep paused objects from being trained into the background by modeling the original background hidden behind them, while our approach retains the ability to adapt to various illumination changes. Our experimental results show that the proposed method detects stopped objects robustly and, in addition, is robust to illumination changes and as efficient as the state-of-the-art method.
    Download PDF (1625K)
  • Takayoshi Yamashita, Yuji Yamauchi, Hironobu Fujiyoshi
    2012 Volume 7 Issue 1 Pages 230-241
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Human detection and action recognition form the basis for understanding human behaviors: human detection finds the positions of humans, and action recognition recognizes the actions of specific humans. However, most approaches handle action recognition and human detection separately, and three main issues arise when independent methods for the two tasks are combined: 1) intrinsic errors in object detection impact the performance of action recognition; 2) features common to action recognition and object detection are missed; and 3) the combination also impacts processing speed. We propose a single framework for human detection and action recognition that solves these issues. It is based on a hierarchical structure called Boosted Randomized Trees, whose nodes are trained such that the upper nodes separate humans from the background while the lower nodes recognize actions. With the proposed method, we were able to improve both human detection and action recognition rates over earlier hierarchical approaches.
    Download PDF (1607K)
  • Takehiro Tachikawa, Shinsaku Hiura, Kosuke Sato
    2012 Volume 7 Issue 1 Pages 242-255
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    This paper describes a method to determine the direction of a light source and the distribution of diffuse reflectance from two images taken under different lighting conditions. While most inverse-rendering methods require three or more images, we investigate the use of only two. Using the relationships between albedo and light direction at six or more points, we first show that it is possible to estimate both simultaneously if the shape of the target object is given. We then extend our method to handle specular objects and shadow effects by applying a robust estimation method. Thorough experimentation shows that our method is feasible and stable not only for well-controlled indoor scenes but also for an outdoor environment illuminated by sunlight. A sketch of the Lambertian core follows this entry.
    Download PDF (1959K)
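    At the core of such estimation is the Lambertian relation I = ρ(n · l). The sketch below solves the simplest version, a uniform-albedo, shadow-free patch with known normals, by linear least squares over six or more points (the paper's actual method additionally recovers the albedo distribution from two images and robustly rejects specular and shadowed points).
```python
import numpy as np

def estimate_light(normals, intensities):
    """Fit I ~ rho * (n . l): solve N s = I for s = rho * l, then split the
    magnitude (albedo) from the unit light direction.
    normals: (m, 3) unit surface normals; intensities: (m,) observations."""
    s, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    rho = np.linalg.norm(s)
    return s / rho, rho

# Synthetic check: 6 points lit from a known direction, albedo 0.8
# (negative n . l values would be shadowed in reality; ignored here).
rng = np.random.default_rng(2)
n = rng.normal(size=(6, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
l_true = np.array([0.0, 0.6, 0.8])
i_obs = 0.8 * n @ l_true
l_est, rho_est = estimate_light(n, i_obs)
print(np.allclose(l_est, l_true), round(rho_est, 3))   # True 0.8
```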
  • Ukrit Watchareeruetai, Akisato Kimura, Robert Cheng Bao, Takahito Kawa ...
    2012 Volume 7 Issue 1 Pages 256-267
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We propose a novel framework called StochasticSIFT for detecting interest points (IPs) in video sequences. The proposed framework incorporates a stochastic model considering the temporal dynamics of videos into the SIFT detector to improve robustness against fluctuations inherent to video signals. Instead of detecting IPs and then removing unstable or inconsistent IP candidates, we introduce IP stability derived from a stochastic model of inherent fluctuations to detect more stable IPs. The experimental results show that the proposed IP detector outperforms the SIFT detector in terms of repeatability and matching rates.
    Download PDF (2314K)
  • Satoshi Yoshinaga, Atsushi Shimada, Hajime Nagahara, Rin-ichiro Tanigu ...
    2012 Volume 7 Issue 1 Pages 268-280
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Object detection is an important task for computer vision applications, and many methods detect objects through background modeling. To adapt to illumination changes in the background, local feature-based background models have been proposed on the assumption that local features are not affected by background changes. However, motion changes in the background, such as the movement of trees, affect the local features significantly, so it is difficult for local feature-based models to handle them. To solve this problem, we propose a new background model that applies a statistical framework to a local feature-based approach, combining the concepts of statistical and local feature-based approaches into a single framework. In particular, we use illumination-invariant local features and describe their distribution with Gaussian Mixture Models (GMMs). The local features tolerate the effects of illumination changes, and the GMM can learn the variety of motion changes; as a result, the method can handle both kinds of background change. Experimental results show that the proposed method detects foreground objects robustly against both illumination changes and motion changes in the background. A sketch of an online GMM update follows this entry.
    Download PDF (2532K)
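    As a minimal sketch of the statistical half of this combination (a Stauffer-Grimson-style online mixture update over a scalar feature; the paper instead maintains GMMs over its illumination-invariant local features, and its exact update rules are not reproduced here):
```python
import numpy as np

class PixelGMM:
    """Online Gaussian mixture for one pixel's (scalar) feature value."""

    def __init__(self, k=3, lr=0.05, var0=400.0):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means
        self.var = np.full(k, var0)           # component variances
        self.lr, self.var0 = lr, var0

    def update(self, x):
        """Feed one observation; return True if it matches the background."""
        d2 = (x - self.mu) ** 2
        match = d2 < 6.25 * self.var       # within 2.5 sigma of a component
        self.w *= 1.0 - self.lr
        if match.any():
            i = int(np.argmin(np.where(match, d2, np.inf)))
            self.w[i] += self.lr           # reinforce the matched component
            self.mu[i] += self.lr * (x - self.mu[i])
            self.var[i] += self.lr * (d2[i] - self.var[i])
            is_bg = True                   # simplification: any match = background
        else:
            i = int(np.argmin(self.w))     # replace the weakest component
            self.mu[i], self.var[i], self.w[i] = x, self.var0, self.lr
            is_bg = False
        self.w /= self.w.sum()
        return is_bg

px = PixelGMM(k=2)
rng = np.random.default_rng(3)
for _ in range(300):                        # swaying background: two noisy modes
    px.update(100.0 + rng.normal(0.0, 4.0))
    px.update(140.0 + rng.normal(0.0, 4.0))
print(px.update(104.0), px.update(20.0))    # True False
```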
  • Hirokazu Nosato, Tsukasa Kurihara, Hidenori Sakanashi, Masahiro Muraka ...
    2012 Volume 7 Issue 1 Pages 281-291
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In histopathological diagnosis, a clinical pathologist discriminates between normal and cancerous tissues. Recently, however, the shortage of clinical pathologists has placed increasing burdens on meeting the demand for such diagnoses, which is becoming a serious social problem, and new medical technologies are needed to help reduce those burdens. As a diagnostic support technology, this paper describes an extended method of HLAC feature extraction for classifying histopathological images as normal or anomalous. The proposed method automatically classifies cancerous images as anomalous by using extended geometrically invariant HLAC features with rotation- and reflection-invariant properties, extracted from three-level histopathological images segmented into nucleus, cytoplasm, and background. In our experiments, we demonstrate a reduction in the rate not only of false-negative errors but also of false-positive errors, in which a normal image is falsely classified as an anomalous image suspected of being cancerous.
    Download PDF (2279K)
  • Seiichi Tagawa, Yasuhiro Mukaigawa, Jaewon Kim, Ramesh Raskar, Yasuyuk ...
    2012 Volume 7 Issue 1 Pages 292-305
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We propose a new imaging method called hemispherical confocal imaging to clearly visualize a particular depth in a 3-D scene. The key optical component is the turtleback reflector, a specially designed polyhedral mirror. To synthesize a hemispherical aperture, we combine the turtleback reflector with a coaxial camera and projector, creating many virtual cameras and projectors distributed with uniform density on a hemisphere. With this optical device, high-frequency illumination can be focused at a particular depth in the scene to visualize only that depth by employing descattering. The observed views are then factorized into masking, attenuation, reflected light, illuminance, and texture terms to enhance the visualization when obstacles are present. Experiments using a prototype system show that only the target depth is effectively illuminated, and that haze caused by scattering and attenuation can be removed even when obstacles are present.
    Download PDF (2221K)
  • Chika Inoshita, Yasuhiro Mukaigawa, Yasushi Yagi
    2012 Volume 7 Issue 1 Pages 306-317
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Many deblurring techniques have been proposed to restore images blurred by camera motion. A major problem in the restoration process is that the deblurred images often include wave-like artifacts called ringing. In this paper, we propose a ringing detector that distinguishes ringing artifacts from the natural textures in images. In designing the detector, we focus on the fact that ringing artifacts are caused by the null frequencies of the point-spread function. Ringing is detected by evaluating whether the deblurred image contains sine waves, corresponding to the null frequencies, that extend across the entire image with uniform phase. By combining the ringing detector with a deblurring process, we can reduce ringing artifacts in the restored images. We demonstrate the effectiveness of the proposed ringing detector in experiments with synthetic and real images. A sketch of the null-frequency cue follows this entry.
    Download PDF (1980K)
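    The sketch below illustrates the null-frequency cue in its simplest form: an energy ratio at the frequencies where the blur kernel's transfer function vanishes. This is an illustrative detector only; the paper's method additionally verifies the uniform-phase condition across the image.
```python
import numpy as np

def ringing_score(deblurred, psf, eps=1e-3):
    """Relative spectral energy of `deblurred` at the PSF's null frequencies.
    Large values suggest ringing: a correct restoration carries little energy
    exactly where the blur destroyed all information."""
    h, w = deblurred.shape
    otf = np.fft.fft2(psf, s=(h, w))               # transfer function of blur
    nulls = np.abs(otf) < eps * np.abs(otf).max()  # frequencies the blur killed
    spectrum = np.abs(np.fft.fft2(deblurred))
    if not nulls.any():
        return 0.0
    return float(spectrum[nulls].mean() / (spectrum.mean() + 1e-12))
```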
  • Gregory Hazelbeck, Hiroaki Saito
    2012 Volume 7 Issue 1 Pages 318-327
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We present a system that assists Japanese-language teachers in creating electronic reading materials that contain glosses. Although glosses are traditionally generated only for content words, we propose a hybrid method for identifying and glossing functional expressions and conjugations, which enables the system both to generate more glosses and to display them in a manner appropriate for learners. Coverage analysis and empirical evaluations show that our hybrid method allows our system to cover more functional expressions and conjugations than previous systems while maintaining good performance. Feedback received during interviews with Japanese-language teachers shows that they react very positively to both the new glosses and the system in general. Finally, results from a survey of learners of Japanese as a foreign language show that they find the new glosses for functional expressions and conjugations very helpful.
    Download PDF (442K)
  • Yuanyuan Wang, Kazutoshi Sumiya
    2012 Volume 7 Issue 1 Pages 328-342
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Currently, many universities use Web services such as SlideShare and edubase to store presentation files. These files provide varying levels of knowledge and are useful and valuable to students. However, self-learners retrieving such files still lack support in identifying which slides meet their specific needs, because among presentation files intended for different levels of expertise it is difficult to understand the context of a user query in a slide and thus to identify the relevant information. We describe a novel browsing method for e-learning that generates snippets for the target slides. For this, we consider the information shared between slides and identify the portions of the slides that are relevant to the query. By analyzing the conceptual structure of keywords on the basis of semantic relations, and the document structure on the basis of the indent levels in the slides, not only can target slides be retrieved precisely, but their relevant portions can also be brought to the user's attention. This is done by focusing on portions of either detailed or generalized slides at the conceptual level, which gives the surrounding context and helps users easily determine which slides are useful. We also present a prototype system and the results of an evaluation of its effectiveness.
    Download PDF (2972K)
  • Kazuhisa Matsuzono, Hitoshi Asaeda, Jun Murai
    2012 Volume 7 Issue 1 Pages 343-353
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    High-quality, high-performance real-time interactive video streaming requires both keeping the data transmission rate as high as possible and minimizing data packet loss to achieve the best possible streaming quality. TCP-friendly rate control (TFRC) is the most widely recognized mechanism for achieving relatively smooth data transmission while competing fairly with TCP flows. However, because its data transmission rate depends largely on packet loss conditions, high-quality real-time streaming suffers a significant degradation of streaming quality, due both to the reduced data transmission rate and to data packet losses. This paper proposes the dynamic probing forward error correction (DP-FEC) mechanism, which maximizes the quality of high-quality real-time streaming when competing TCP flows inflict packet losses on the streaming flow. DP-FEC estimates the network condition by dynamically adjusting the degree of FEC redundancy while trying to recover lost data packets. It effectively utilizes network resources and adjusts the degree of FEC redundancy to improve playback quality at the user side while minimizing the performance impact on competing TCP flows. We describe the DP-FEC algorithm and evaluate its effectiveness using the NS-2 simulator. The results show that, by effectively utilizing network resources, DP-FEC retains higher streaming quality while minimizing the adverse effect on TCP performance, thus achieving TCP friendliness.
    Download PDF (437K)
  • Keiji Shinzato, Tomohide Shibata, Daisuke Kawahara, Sadao Kurohashi
    2012 Volume 7 Issue 1 Pages 354-365
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Due to the explosive growth in the amount of information over the last decade, it is becoming extremely hard to obtain necessary information with conventional information access methods, and the creation of drastically new technology is needed. Developing such new technology requires search engine infrastructures. The existing search engine APIs can be regarded as such infrastructures, but these APIs have several restrictions, such as limits on the number of API calls. To support the development of new technology, we are running an open search engine infrastructure, TSUBAKI, on a high-performance computing environment. In this paper, we describe the TSUBAKI infrastructure.
    Download PDF (685K)
  • Supheakmungkol Sarin, Michael Fahrmair, Matthias Wagner, Wataru Kameya ...
    2012 Volume 7 Issue 1 Pages 366-382
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In this era of information explosion, automating the annotation of digital images is a crucial step towards efficient and effective management of an increasingly large volume of content. However, it remains a highly challenging task for the research community. One of the main bottlenecks is the lack of integrity and diversity of features. We propose to solve this problem by utilizing 43 image features that cover the holistic content of the image, from global properties to subject, background, and scene. In our approach, salient regions and the background are separated without prior knowledge, and each of them, together with the whole image, is treated independently for feature extraction. Extensive experiments were designed to show the efficiency and effectiveness of our approach. We chose two publicly available, manually annotated datasets with images of a diverse nature, namely the Corel5K and ESP Game datasets. We confirm the superior performance of our approach over the use of a single whole image using a sign test with p-value < 0.05. Furthermore, our combined feature set gives satisfactory performance compared to recently proposed approaches, especially in terms of generalization, even with just a simple combination, and it also outperforms a grid-based approach with the same feature set. More importantly, when our features are used with a state-of-the-art technique, the results show higher performance on a variety of standard metrics.
    Download PDF (12207K)
  • Chenhao Wang, Zhencheng Hu, Roland Chapuis
    2012 Volume 7 Issue 1 Pages 383-392
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    This paper presents a robust hybrid approach to Predictive Lane Detection (PLD), which utilizes information from a digital map to improve the efficiency and accuracy of a vision-based lane detector. Traditional approaches are mostly designed for well-maintained, simple road conditions, such as motorways or interstate roads with clear lane markers, and solve the estimation problems of the coming road shape and of the vehicle's position and ego-state; these estimates become ambiguous or unavailable in complicated road environments and under difficult weather or illumination conditions. The proposed approach localizes the vehicle on a digital map to estimate the road geometry, which gives the vision-based detector strong cues to limit the search region for road candidates and to suppress noise. In addition, other information from the digital map, such as lane-marker color and category, is utilized in high-level data fusion for road geometry estimation. Experiments on real and synthesized roads verified the effectiveness and efficiency of our approach.
    Download PDF (1782K)
Computer Networks and Broadcasting
  • Sébastien Decugis, Fumio Teraoka
    2012 Volume 7 Issue 1 Pages 393-404
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    AAA (Authentication, Authorization, and Accounting) is one of the important functions indispensable for providing services on the Internet. The Diameter Base Protocol was standardized in the IETF as a successor to RADIUS, a widely used AAA protocol in the current Internet. Diameter solves problems that RADIUS has, such as support for multiple realms, reliable and secure message transport, and failover. There are several open-source implementations of the Diameter Base Protocol, but none of them conforms completely to the specification. The first contribution of freeDiameter is that it is an open-source implementation of the Diameter Base Protocol that conforms completely to the specification; it is written in C and distributed under a BSD-like license. In the Diameter architecture, a particular service on top of the Diameter Base Protocol is defined as a Diameter application, such as the Diameter EAP application for WiFi network access control. The second contribution is that the software architecture of freeDiameter makes it easy to implement Diameter applications as additional plug-ins. freeDiameter is already distributed through our home page, and freeDiameter with the Diameter EAP application has been used in our laboratory for WiFi network access. It was also used for network access control at the four-day WIDE camp held in September 2010, attended by approximately 200 researchers, where it operated without problems; this is good evidence of the stability of freeDiameter.
    Download PDF (943K)
  • Hideyuki Tokuda, Jin Nakazawa
    2012 Volume 7 Issue 1 Pages 405-413
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Convergence between cyber and physical spaces is accelerating due to the penetration of various ubiquitous services based on sensors and actuators. Effective sensors, such as ultra-low-cost wireless sensors, smart phones, and pads, allow us to couple real objects, people, places, and environments with the corresponding entities in the cyber space. Similarly, soft sensors such as blogs, Facebook, Twitter, Foursquare, and other applications create new types of sensed data for cyber-physical coupling. In this paper, we describe sensor-enabled cyber-physical coupling for creating ubiquitous services in the SenseCampus project. We first classify cyber-physical coupling and the ubiquitous services in the project. Several ubiquitous services, such as SensingCloud, DIY smart object services, Twitthings, Airy Notes, and Mebius Ring, are described. We then address the challenges in cyber-physical coupling for creating advanced ubiquitous services, particularly for educational facilities.
    Download PDF (1783K)
  • Taye Mulugeta, Lei Shu, Manfred Hauswirth, Zhangbing Zhou, Shojiro Nis ...
    2012 Volume 7 Issue 1 Pages 414-424
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Two-Phase geographic Greedy Forwarding (TPGF) is a pure on-demand geographic greedy forwarding protocol for transmitting multimedia streams in wireless multimedia sensor networks (WMSNs). It performs explicit route discovery: a node greedily forwards a routing packet to the neighbor closest to the destination in order to build a route. Like most geographic routing protocols, TPGF is vulnerable to greedy forwarding attacks, e.g., spoofing or modifying control packets. As the first research effort to investigate secure routing in WMSNs, this paper identifies vulnerabilities in TPGF, proposes corresponding countermeasures, e.g., secure neighbor discovery and secure route discovery, and presents SecuTPGF, an extended version of TPGF that follows the original TPGF routing mechanism exactly but with enhanced security and reliability. The effectiveness of SecuTPGF is demonstrated by security analysis and evaluation experiments.
    Download PDF (2759K)
  • Kriengsak Treeprapin, Akimitsu Kanzaki, Takahiro Hara, Shojiro Nishio
    2012 Volume 7 Issue 1 Pages 425-434
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose DATFM/DA (Data Acquisition and Transmission with Fixed and Mobile node with Deployment Adjusting), an extension of our previous mobile sensor control method, DATFM. DATFM/DA uses two types of sensor nodes, fixed nodes and mobile nodes. The data acquired by the nodes are accumulated on a fixed node before being transferred to the sink node. DATFM/DA divides the target region into multiple areas and statically deploys mobile nodes to each divided area. In addition, it adjusts the number of mobile nodes deployed in each area based on an analysis of performance. We also conduct simulation experiments to verify that this method further improves the performance of sensing and data transfer.
    Download PDF (1837K)
  • Tsuyoshi Hisamatsu, Hitoshi Asaeda
    2012 Volume 7 Issue 1 Pages 435-447
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we investigate the limitation caused by reception bandwidth in current overlay streaming and propose a novel overlay network architecture for high-quality, real-time streaming. The proposed architecture consists of two components: 1) join and retransmission control (JRC), and 2) redundant node selection (RNS). The JRC dynamically adjusts the number of join and data retransmission requests based on the network condition and the fluctuation of packets received at the receiver. The RNS selects retransmission nodes based on the probability that they retain the lost packets requested by the receivers. We have designed and implemented the algorithm of the proposed architecture. According to our evaluation, our approach yields an additional 1-2 Mbps of reception bandwidth over existing overlay streaming applications.
    Download PDF (842K)
  • Yoshiaki Taniguchi, Akimitsu Kanzaki, Naoki Wakamiya, Takahiro Hara
    2012 Volume 7 Issue 1 Pages 448-457
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Wireless sensor network technologies have attracted a lot of attention in recent years. In this paper, we propose an energy-efficient data gathering mechanism for wireless sensor networks that uses a traveling wave and spatial interpolation. In our proposed mechanism, sensor nodes schedule their message transmission timing in a fully distributed manner, such that they can gather sensor data over the whole wireless sensor network and transmit the data to a sink node while switching between a sleep state and an active state. In addition, each sensor node determines the redundancy of its sensor data according to received messages, so that only the necessary sensor data are gathered and transmitted to the sink node. Our proposed mechanism requires no additional control messages and enables both data traffic and control traffic to be reduced drastically. Through simulation experiments, we confirmed that with our proposed mechanism the number of message transmissions can be reduced by up to 77% and the amount of transmitted data by up to 13% compared to a conventional mechanism.
    Download PDF (917K)
  • Yuichi Hattori, Sozo Inoue
    2012 Volume 7 Issue 1 Pages 458-465
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we introduce a large-scale activity gathering system based on mobile sensor devices such as smart phones and accelerometers. We gathered over 35,000 activity records from more than 200 people over approximately 13 months. We describe the design rationale of the system, analyze the gathered data through statistics and clustering, and apply an existing activity recognition method to it. In this evaluation, the performance of the existing algorithm deteriorated drastically when the gathered data were used as training data. These results show that larger-scale activity data remain a challenging field for activity recognition.
    Download PDF (503K)
  • Quang Tran Minh, Eiji Kamioka
    2012 Volume 7 Issue 1 Pages 466-476
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    The penetration rate is one of the most important factors affecting the effectiveness of mobile phone-based traffic state estimation. This article thoroughly investigates the influence of the penetration rate on traffic state estimation using mobile phones as traffic probes and proposes reasonable solutions to minimize that influence. In this research, the so-called “acceptable” penetration rate, at which the estimation accuracy is kept at an “acceptable” level, is identified. This finding is important for bringing mobile phone-based traffic state estimation systems into practical use. In addition, two novel “velocity-density inference” models, namely the “adaptive” and the “adaptive feedback” velocity-density inference circuits, are proposed to improve the effectiveness of the traffic state estimation. Furthermore, an artificial neural network-based prediction approach is introduced to maintain the effectiveness of the velocity and density estimation when the penetration rate drops to 0%. These improvements are practically meaningful since they help to guarantee highly accurate traffic state estimation even at very low penetration rates. The experimental evaluations reveal the effectiveness as well as the robustness of the proposed solutions.
    Download PDF (802K)
  • Manabu Ito, Satoshi Komorita, Yoshinori Kitatsuji, Hidetoshi Yokota
    2012 Volume 7 Issue 1 Pages 477-487
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    The rapid spread of smart phones is causing an explosion of media traffic, which encourages mobile network operators (MNOs) to coordinate multiple access networks. For such MNOs, the IP Multimedia Subsystem (IMS) is a promising service control infrastructure for ensuring a QoS-guaranteed communication path for media in a multi-access network. IMS-based service continuity enables user equipments (UEs) to keep using IMS-based services (e.g., VoIP) even when they hand over between different access networks in which the UE is assigned different IP addresses. When the UE cannot use multiple wireless devices simultaneously, there is the possibility of a long media disruption time during handovers, caused by several consecutive handovers made while attempting to discover an access network in which the UE can obtain QoS-guaranteed communication. In this paper, we propose a method for reducing the media disruption time when the UE hands over between different access networks: the UE proactively performs the service continuity procedure and selects an access network that can provide the required network resources. We implement and evaluate the proposed method and show how much the media disruption time can be reduced.
    Download PDF (1137K)
  • XingPing He, Sayaka Kamei, Satoshi Fujita
    2012 Volume 7 Issue 1 Pages 488-495
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    This paper proposes a distributed algorithm to calculate a subnetwork of a given wireless sensor network (WSN) connecting a set of sources and a set of sinks, in such a way that: 1) for each source, the subgraph contains a path to some sink whose length does not exceed the distance from that source to the farthest sink in the original graph, and 2) the number of links contained in the subgraph is minimized. The proposed algorithm improves an initial solution generated by a heuristic scheme by repeatedly applying a local search. Simulation results indicate that: 1) with the heuristic for generating the initial solution, the size of the initial solution is reduced by 10% compared with a simple shortest-path tree; and 2) the local search reduces the size of the resulting subgraph by a further 20%, and the cost of the local search can be recovered by utilizing the resulting subgraph for a sufficiently long time, such as a few days. A centralized sketch of this procedure follows this entry.
    Download PDF (560K)
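    The sketch below is a centralized, illustrative version of this idea (the paper's algorithm is distributed; here the initial heuristic is simply the union of pairwise shortest paths, and the local search greedily deletes edges while the distance constraint keeps holding).
```python
import networkx as nx

def connecting_subgraph(g, sources, sinks):
    """Small subgraph linking sources to sinks under the distance constraint:
    every source must keep a path to some sink no longer than its distance
    to the farthest sink in the original graph."""
    bound = {s: max(nx.shortest_path_length(g, s, t) for t in sinks)
             for s in sources}

    def feasible(h):
        return all(
            any(nx.has_path(h, s, t) and
                nx.shortest_path_length(h, s, t) <= bound[s] for t in sinks)
            for s in sources)

    # Initial heuristic: union of one shortest path per (source, sink) pair.
    edges = set()
    for s in sources:
        for t in sinks:
            path = nx.shortest_path(g, s, t)
            edges |= set(zip(path, path[1:]))
    sub = g.edge_subgraph(edges).copy()

    # Local search: drop any edge whose removal keeps the solution feasible.
    improved = True
    while improved:
        improved = False
        for e in list(sub.edges()):
            trial = sub.copy()
            trial.remove_edge(*e)
            if feasible(trial):
                sub, improved = trial, True
    return sub

g = nx.grid_2d_graph(4, 4)   # toy WSN topology
sub = connecting_subgraph(g, sources=[(0, 0), (3, 0)], sinks=[(0, 3), (3, 3)])
print(sub.number_of_edges(), "links")
```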
Information Systems and Applications
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama
    2012 Volume 7 Issue 1 Pages 496-505
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    We present a spatio-temporal correlation analysis between visual saliency and eye movements for estimating a viewer's mental focus toward videos. We extract spatio-temporal dynamics patterns of salient areas from the videos, which we refer to as saliency-dynamics patterns, and evaluate eye movements based on their correlation with the saliency-dynamics patterns in view. Experimental results using TV commercials demonstrate the effectiveness of the proposed method for mental-focus estimation.
    Download PDF (874K)
  • Mamoun Nawahdah, Tomoo Inoue
    2012 Volume 7 Issue 1 Pages 506-515
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    Watching a real teacher in a real environment from a good distance and with a clear viewing angle has a significant effect on learning physical tasks, and the same applies to physical-task learning in a mixed-reality environment. Observing and imitating body motion is important for learning some physical tasks, including spatial collaborative work. When people learn a task with physical objects, they want to try and practice the task with the actual objects, and they want to keep the referential behavior model close to them at all times. Showing a virtual teacher by means of mixed-reality technology can create such an environment, and this has been researched in this study. It is known that a virtual teacher model's position and orientation influence (a) the number of errors and (b) the completion time in physical-task learning in mixed-reality environments. This paper proposes a method that automatically adjusts the virtual teacher's horizontal rotation angle so that the learner can easily observe important body motions. The method divides the whole task motion into fixed-duration segments, finds the most important moving body part in each segment, and then rotates the virtual teacher so as to show that part to the learner. To evaluate the method, a generic physical-task learning experiment was conducted. The method proved effective for motions that gradually reposition the most important moving part, as in some manufacturing and cooking tasks. This study is therefore likely to enhance the transfer of physical-task skills.
    Download PDF (2349K)
  • Shelly Sachdeva, Aastha Madaan, Subhash Bhalla
    2012 Volume 7 Issue 1 Pages 516-528
    Published: 2012
    Released on J-STAGE: March 15, 2012
    JOURNAL FREE ACCESS
    A majority of research efforts in the domain of Electronic Health Records (EHRs) concentrate on standardization and related issues. The earlier forms of medical records did not permit a high level of exchange or interoperability, or extensive search and querying. Recent research has focused on the development of open standards for lifelong health record archives for individual patients, which facilitates the extensive use of data mining and querying techniques for analysis. These efforts can increase the depth and extent to which patient data are utilized. For example, association analysis can be used to identify common features among disparate patients to check whether diagnoses or procedures are effective, and pattern discovery techniques can be used to create census reports and generate meaningful visualizations of summary data at hospitals. For handling the large volume of data, there is a need to focus on improving usability. The current study proposes a model for the development of EHR support systems that aims to capture health workers' needs in a scientific way, on a continuous basis. The proposal has been evaluated for the accuracy of knowledge discovery to improve usability.
    Download PDF (1307K)