Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 7, Issue 2
42 articles from this issue
Hardware and Devices
  • Akio Takada, Koji Sasaki, Eiji Takahashi, Naoki Hanashima, Takatoshi Y ...
    2012 Volume 7 Issue 2 Pages 529-534
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    We have developed wire-grid polarizers with low reflectivity in the visible region. The polarizers consist of a bilayered structure of an absorptive layer and a transparent gap layer on sub-wavelength-pitch gratings made of aluminum. The bilayered structure functions as a highly effective antireflection coating for wire grids with high reflectance. To realize these multilayered wire grids, the glancing angle deposition technique was used to deposit the absorptive layer just above the Al wire grid covered with the SiO2 gap layer. Both low reflectance and high transmittance in the desired wavelength range were achieved by optimizing the thickness of each layer.
  • Tan Yan, Qiang Ma, Martin D.F. Wong
    2012 Volume 7 Issue 2 Pages 535-543
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The increasing complexity of electronic systems has made PCB routing a difficult problem. A large amount of research effort has been dedicated to the study of this problem. In this paper, we provide an overview of recent research results on the PCB routing problem. We focus on the escape routing problem and the length-matching routing problem, which are the two most important problems in PCB routing. Other relevant works are also briefly introduced.
  • Yohei Nakata, Shunsuke Okumura, Hiroshi Kawaguchi, Masahiko Yoshimoto
    2012 Volume 7 Issue 2 Pages 544-555
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    This paper presents a novel cache architecture using 7T/14T SRAM, which can dynamically improve its reliability via control lines. Our proposed 14T word-enhancing scheme enhances the operating margin at word granularity by combining two words in a low-voltage mode. Furthermore, we propose a new testing method that maximizes the efficiency of the 14T word-enhancing scheme. In a 65-nm process, it can reduce the minimum operating voltage (Vmin) to 0.5 V, which is 42% and 21% lower than that of a conventional 6T SRAM and a cache word-disable scheme, respectively. Measurement results show that the 14T word-enhancing scheme can reduce the Vmin of the 6T SRAM and 14T dependable modes by 25% and 19%, respectively. The respective dynamic power reductions are 89.2% and 73.9%, and the respective total power reductions are 44.8% and 20.9%.
  • Hajime Nagahara, Changyin Zhou, Takuya Watanabe, Hiroshi Ishiguro, Shr ...
    2012 Volume 7 Issue 2 Pages 556-566
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Since the 1960s, aperture patterns have been studied extensively, and a variety of coded apertures have been proposed for various applications, including extended depth of field, defocus deblurring, depth from defocus, and light field acquisition. Research has shown that optimal aperture patterns can differ considerably depending on the application, imaging conditions, or scene content. In addition, many coded aperture techniques require the aperture pattern to be changed over time during capture. As a result, it is often necessary to have a programmable aperture camera whose aperture pattern can be dynamically changed as needed in order to capture more useful information. In this paper, we propose a programmable aperture camera using a Liquid Crystal on Silicon (LCoS) device. This design affords a high-brightness-contrast, high-resolution aperture with relatively low light loss, and enables one to change the pattern at a reasonably high frame rate. We built a prototype camera and evaluated its features and drawbacks comprehensively through experiments. We also demonstrate three coded aperture applications: defocus deblurring, depth from defocus, and light field acquisition.
Computing
  • Kento Emoto
    2012 Volume 7 Issue 2 Pages 567-583
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    What we call “generate-test-α” is a computation pattern in which we do some extra computation, such as choosing the optimal solution, after the usual generate&test computation that enumerates all solutions passing the test. A naive parallel algorithm for generate-test-α can be given as a composition of parallel skeletons, but it suffers a heavy computation cost when the number of generated candidates is large. Such a situation often occurs when we generate a set of substructures from a source data structure. It is known in the field of skeletal parallel programming that a certain class of simplified computations without test phases admits efficient linear-cost algorithms through systematic transformations exploiting semirings. However, no transformation has yet been known that optimizes the generate-test-α computation uniformly. In this paper, we propose a novel transformation that embeds the test phases into semirings, so that a generate-test-α computation can be transformed into a simplified generate-α computation. This transformation allows us to reuse efficient parallel algorithms for generate-α in generate-test-α computations. In addition, we give powerful optimizations for a class of generate-α computations, so that we can give uniform optimizations for a wide class of generate-test-α computations.
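    For orientation, here is a minimal Python sketch of the naive generate-test-α pattern the abstract describes (the names and the choice of α step are illustrative, not the paper's notation); the paper's contribution is transforming away the candidate enumeration that makes this version costly.

        # Naive generate-test-alpha: enumerate candidate substructures (generate),
        # keep those passing a predicate (test), then do extra computation (alpha).
        # Cost grows with the number of candidates, which the paper's semiring
        # transformation avoids.
        def generate_test_alpha(xs, test):
            # generate: all contiguous segments of xs
            candidates = [xs[i:j] for i in range(len(xs))
                          for j in range(i + 1, len(xs) + 1)]
            passed = [c for c in candidates if test(c)]  # test
            return max(sum(c) for c in passed)           # alpha: best total

        # Example: maximum-sum segment of even length
        print(generate_test_alpha([3, -1, 4, -1, 5], lambda c: len(c) % 2 == 0))  # 7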
  • Xiongxin Zhao, Zhixiang Chen, Xiao Peng, Dajiang Zhou, Satoshi Goto
    2012 Volume 7 Issue 2 Pages 584-592
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Currently, most LDPC decoders are implemented with the so-called layered algorithm because of its implementation efficiency and relatively high decoding performance. However, not all structured LDPC codes can be implemented with the layered algorithm directly, because of message-updating conflicts within layers in the a-posteriori information memory. In this paper we focus on resolving this kind of conflict for DVB-T2 LDPC decoders. Unlike previous resolutions, we implement the layered algorithm directly, without modifying the parity-check matrices (PCM) or the decoding algorithm. A DVB-T2 LDPC decoder architecture is also proposed, with two new techniques that guarantee conflict-free layered decoding. The PCM Rearrange technique reduces the number of conflicts and eliminates all data-dependency problems between layers to ensure high pipeline efficiency. The Layer Division technique deals with all remaining conflicts through a well-designed decoding schedule. Experimental results show that, compared to state-of-the-art works, we achieve a slight error-correcting performance gain for DVB-T2 LDPC codes.
  • Masayoshi Yoshimura, Yusuke Akamine, Yusuke Matsunaga
    2012 Volume 7 Issue 2 Pages 593-600
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    In advanced integrated-circuit technologies, soft-error tolerance is low, and soft errors ultimately lead to failures in VLSIs. We propose a method for the exact estimation of error propagation probabilities in sequential circuits whose FFs latch failure values. Failure due to soft errors in sequential circuits is defined using a modified product machine, which monitors whether failure values appear at any primary output. The behavior of the modified product machine is analyzed with a Markov model. The probabilities that failure values latched into the flip-flops (FFs) appear at any primary output are calculated from the state transition probabilities of the modified product machine. The time required for solving simultaneous linear equations accounts for a large portion of the execution time, so we also propose two acceleration techniques, which reduce the number of variables in the simultaneous linear equations, to enable the application of our estimation method to larger circuits. We apply the proposed method to ISCAS'89 and MCNC benchmark circuits and estimate error propagation probabilities for sequential circuits. Experimental results show that total execution times for the proposed method with the two acceleration techniques are up to 10 times shorter than those of a naive implementation.
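    To illustrate the kind of computation the Markov analysis entails (a hedged sketch; the paper's actual system is formulated over the modified product machine), the propagation probabilities x satisfy x = Qx + b, where Q holds transition probabilities among transient states and b the per-step probabilities of a failure value reaching a primary output, so x is obtained by solving (I - Q)x = b:

        # Hedged sketch: solving for error propagation probabilities (toy values).
        import numpy as np

        Q = np.array([[0.2, 0.3],
                      [0.1, 0.4]])   # transient-state transition probabilities
        b = np.array([0.3, 0.2])     # per-state one-step output-reach probability
        # The remaining mass (0.2 and 0.3) is the chance the failure is masked.
        x = np.linalg.solve(np.eye(2) - Q, b)
        print(x)  # per-state probability that the failure value reaches an output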
  • Hikaru Horie, Masato Asahara, Hiroshi Yamada, Kenji Kono
    2012 Volume 7 Issue 2 Pages 601-613
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    This paper presents MashCache, a scalable client-side architecture for taming the harsh effects of flash crowds. Like previous systems, MashCache extensively uses client resources to cache and share frequently accessed contents. In designing MashCache, we carefully investigated quantitative analyses of flash crowds in the wild and found that flash crowds have three features that can be used to simplify the overall design. First, flash crowds start up very slowly, so MashCache has enough time to prepare for them. Second, flash crowds last for a short period; since the accessed content is rarely updated within this period, MashCache can employ a simple mechanism for cache consistency. Finally, clients issue the same requests to the target web sites; this simplifies MashCache's cache management because cache explosion can be avoided. By exploiting these features, MashCache advances the state of the art of P2P-based caching systems for flash crowds: it is a pure P2P system that combines 1) aggressive caching, 2) query-origin key, 3) two-phase delta consistency, and 4) carefully designed cache metadata. MashCache has another advantage: since it works completely on the client side, it can tolerate flash crowds even if the external services the mashup depends on are not equipped with flash-crowd-resistant mechanisms. In experiments with up to 2,500 emulated clients, a prototype of MashCache reduced the number of requests to the original web servers by 98.2% with moderate overheads.
  • Takahiro Hirofuchi, Hidemoto Nakada, Satoshi Itoh, Satoshi Sekiguchi
    2012 Volume 7 Issue 2 Pages 614-626
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Dynamic consolidation of virtual machines (VMs) through live migration is a promising technology for IaaS datacenters. VMs are dynamically packed onto fewer server nodes, thereby eliminating excessive power consumption. Existing studies on VM consolidation, however, are based on precopy live migration, which requires dozens of seconds to switch the execution hosts of VMs; it is difficult to optimize VM locations quickly on sudden load changes, resulting in serious violations of VM performance criteria. In this paper, we propose an advanced VM consolidation system exploiting postcopy live migration, which greatly alleviates performance degradation. VM locations are reactively optimized in response to ever-changing resource usage, and sudden overloading of server nodes is promptly resolved by quickly switching the execution hosts of VMs. We have developed a prototype of our consolidation system and evaluated its feasibility through experiments. We confirmed that our consolidation system achieves a higher degree of performance assurance than precopy migration. Our micro-benchmark program, designed for the metric of performance assurance, showed that performance degradation was only 12% or less, even for memory-intensive workloads, which is less than half the level of precopy live migration. The SPECweb benchmark showed that performance degradation was approximately 10%, greatly alleviated from the 21% observed with precopy live migration.
  • Daniel Sangorrin, Shinya Honda, Hiroaki Takada
    2012 Volume 7 Issue 2 Pages 627-638
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Virtualization solutions aimed at consolidating a real-time operating system (RTOS) and a general-purpose operating system (GPOS) onto the same platform are gaining momentum as high-end embedded systems increase their computing power. Among them, the most widespread approach to scheduling both operating systems is to execute the GPOS only when the RTOS becomes idle. Although this approach can guarantee the real-time performance of RTOS tasks and interrupt handlers, the responsiveness of GPOS time-sensitive activities suffers when the RTOS contains compute-bound activities executing at low priority. In this paper, we modify a reliable hardware-assisted dual-OS virtualization technique to implement an integrated scheduling architecture in which the execution priority levels of GPOS and RTOS activities can be mixed with high granularity. The evaluation results show that the proposed approach is suitable for enhancing the responsiveness of GPOS time-sensitive activities without compromising the reliability and real-time performance of the RTOS.
  • Kazuya Yamakita, Hiroshi Yamada, Kenji Kono
    2012 Volume 7 Issue 2 Pages 639-650
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Although operating systems (OSes) are crucial to achieving high availability of computer systems, modern OSes are far from bug-free. Rebooting the OS is simple, powerful, and sometimes the only remedy for kernel failures. Once we accept reboot-based recovery as a fact of life, we should try to ensure that the downtime caused by reboots is as short as possible. This paper presents “phase-based” reboots that shorten the downtime caused by reboot-based recovery. The key idea is to divide a boot sequence into phases; a phase-based reboot reuses a system state from the previous boot if the next boot reproduces the same state. A prototype of the phase-based reboot was implemented on Xen 3.4.1 running para-virtualized Linux 2.6.18. Experiments with the prototype show that it successfully recovered from transient kernel failures inserted by a fault injector, and its downtime was 34.3% to 93.6% shorter than that of normal reboot-based recovery.
  • Tomoharu Ugawa, Hideya Iwasaki, Taiichi Yuasa
    2012 Volume 7 Issue 2 Pages 651-658
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Mark-sweep garbage collection (GC) is usually implemented using a mark stack for the depth-first search that marks all objects reachable from the root set. However, the required size of the mark stack depends on the application, and its upper bound is proportional to the size of the heap; it is not acceptable in most systems to reserve memory for such a large mark stack. To avoid unacceptable memory overhead, some systems limit the size of the mark stack. If the mark stack overflows, the system scans the entire heap to find objects that could not be pushed due to the overflow and traverses their children. Since this scanning takes a long time, the technique is inefficient for applications that are likely to cause overflows. In this research, we propose a technique that records the rough locations of objects that failed to be pushed, so that they can be found without scanning the entire heap. We use a technique similar to the card table of mostly-concurrent GC to record these rough locations.
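    A minimal sketch of the recovery idea, assuming a card table keyed by heap address (names hypothetical; the paper's design differs in detail): on overflow the collector records only the card of the dropped object, and recovery rescans just the dirty cards rather than the whole heap.

        # Hedged sketch: mark-stack overflow handling with a card table.
        CARD_SIZE = 512  # heap bytes covered by one card

        class Marker:
            def __init__(self, heap_size, stack_limit):
                self.stack = []
                self.stack_limit = stack_limit
                self.dirty = [False] * (heap_size // CARD_SIZE + 1)

            def push(self, obj_addr):
                if len(self.stack) < self.stack_limit:
                    self.stack.append(obj_addr)
                else:
                    # Overflow: remember only the rough location (its card).
                    self.dirty[obj_addr // CARD_SIZE] = True

            def recover(self, objects_in_card):
                # Rescan dirty cards only, not the entire heap.
                for card, is_dirty in enumerate(self.dirty):
                    if is_dirty:
                        self.dirty[card] = False
                        for obj in objects_in_card(card):
                            self.push(obj)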
  • Munehiro Takimoto
    2012 Volume 7 Issue 2 Pages 659-666
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Partial dead code elimination (PDE) is a powerful code optimization technique that extends dead code elimination with code motion. PDE eliminates assignments that are dead on some execution paths and alive on others; hence, it can not only eliminate partially dead assignments but also move loop-invariant assignments out of loops. These effects are achieved by interleaving dead code elimination and code sinking, so it is important to capture the second-order effects between them, which can be exposed by repeated application. However, repetition is costly. This paper proposes a technique that applies PDE to each assignment on demand. Our technique checks the safety of each code motion so that no execution path becomes longer. Because checking occurs on a demand-driven basis, the checking range can be restricted. In addition, because a demand-driven analysis can check whether an assignment should be inserted at the blocking point of a code motion, PDE analysis can be localized to a restricted region. Furthermore, using the demand-driven property, our technique can be applied to each statement in reverse postorder of the reverse control flow graph, allowing it to capture many second-order effects. We have implemented our technique as a code optimization phase and compared it with previous studies in terms of the optimization and execution costs of the target code. Our technique is as efficient as a single application of PDE and as effective as multiple applications of PDE.
  • Akihisa Yamada, Keiichirou Kusakari, Toshiki Sakabe, Masahiko Sakai, N ...
    2012 Volume 7 Issue 2 Pages 667-675
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Dynamically typed languages such as Scheme are widely adopted because of their rich expressiveness. However, dynamic typing has the drawback that runtime errors cannot be detected at compile time. In this paper, we propose a type system that enables static detection of runtime errors. The key idea of our approach is to introduce a special type, called the error type, for expressions that cause runtime errors. The proposed type system brings out the benefit of the error type with set-theoretic union, intersection and complement types, recursive types, parametric polymorphism, and subtyping. While existing type systems usually ensure that evaluation never causes runtime errors for typed expressions, our system ensures that evaluation always causes runtime errors for expressions typed with the error type. Likewise, our system also ensures that evaluation never causes errors for expressions typed with any type that does not contain the error type. Under the usual definition of subtyping, it is difficult to syntactically prove the soundness of our type system; we therefore redefine subtyping by introducing the notion of intransitive subtyping and syntactically prove soundness under the new definition.
  • Katsuaki Ikegami, Kenjiro Taura
    2012 Volume 7 Issue 2 Pages 676-684
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The message passing model is a popular programming model in which users explicitly write both “send” and “receive” commands and place shared data in each process's local memory. Consequently, it is difficult to write programs with complicated algorithms using message passing. This problem can be solved with a partitioned global address space (PGAS) model, which provides a virtual global address space in which users can easily write programs with complex data sharing. The PGAS model hides network communication behind implicit global memory accesses. This improves programmability, but it can add network communication overhead compared to a message passing model. The overhead can be reduced if programs read or write global memory in bulk, but writing such programs is complicated. This paper presents a programming language and its runtime that achieve both programmability and performance through automatic communication aggregation: the programmer writes global memory accesses in the normal memory-access style, and the compiler and runtime aggregate the communication. In particular, since the most time-consuming network accesses occur in loops, this paper shows how the compiler detects global memory accesses in loops and aggregates them, and how this idea is implemented.
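    The gain from aggregation can be seen with a toy mock of a PGAS-style array (the API here is hypothetical, not the paper's language): element-wise remote reads inside a loop cost one message each, while the aggregated form moves the same data in a single bulk transfer.

        # Illustration only: counting messages for naive vs. aggregated access.
        class GlobalArray:
            """Mock remote array that counts network messages."""
            def __init__(self, data):
                self.data, self.messages = data, 0

            def get(self, i):            # element-wise remote read
                self.messages += 1
                return self.data[i]

            def get_bulk(self, lo, hi):  # aggregated remote read
                self.messages += 1
                return self.data[lo:hi]

        g = GlobalArray(list(range(1000)))
        naive = sum(g.get(i) for i in range(1000))  # 1,000 messages
        fast = sum(g.get_bulk(0, 1000))             # 1 more message
        print(g.messages)                           # 1001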
  • Takayuki Koai, Makoto Tatsuta
    2012 Volume 7 Issue 2 Pages 685-693
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The Substitution Theorem is a new theorem in untyped lambda calculus, proved in 2006. It states that, for a given lambda term and given free variables in it, the term becomes weakly normalizing when we substitute arbitrary weakly normalizing terms for these free variables if it becomes weakly normalizing when we substitute a single arbitrary weakly normalizing term for these free variables. This paper formalizes and verifies this theorem using the higher-order theorem prover HOL. A control path, the key notion in the proof, explicitly uses the names of bound variables in lambda terms and is defined only for lambda terms without bound-variable renaming. Lambda terms without bound-variable renaming are formalized using the HOL package based on contextual alpha-equivalence. The verification comprises 10,119 lines of HOL code and 326 lemmas.
  • Yusuke Wada, Shigeru Kusakabe
    2012 Volume 7 Issue 2 Pages 694-700
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Formal methods are mathematically based techniques for specifying, developing, and verifying a component or system, used to increase confidence in the reliability and robustness of the target. They can be applied at different levels with different techniques; one approach is to use model-oriented formal languages such as the VDM languages in writing specifications. During model development, we can test executable specifications in VDM-SL and VDM++. In a lightweight formal approach, we test formal specifications to increase our confidence, as we do when implementing software in conventional programming languages; millions of tests may be conducted in developing highly reliable mission-critical software this way. In this paper, we introduce our approach to supporting a large volume of testing for executable formal specifications using Hadoop, an implementation of the MapReduce programming model. We automatically distribute the interpretation of specifications in VDM languages by using Hadoop. We also apply a property-based, data-driven testing tool, QuickCheck, over MapReduce, so that specifications can be checked with thousands of tests that would be infeasible to write by hand, often uncovering subtle corner cases that would not be found otherwise. We observed the effect on coverage and evaluated the scalability of our approach when testing executable specifications against large amounts of data.
  • Hironao Takahashi, Khalid Mahmood Malik, Kinji Mori
    2012 Volume 7 Issue 2 Pages 701-708
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Web services and cloud computing paradigms have opened up many new vistas. Data-intensive cloud applications usually require huge amounts of data to be input and output from secondary storage systems. Outstanding progress in network communications has enabled high-speed networks, so the communication latency bottleneck in cloud and other web applications has shifted to the node/storage level. Existing cloud solutions, however, focus mainly on the efficient utilization of computing resources through virtualization, and the storage bottleneck has not received much attention. Moreover, virtualization-based implementations give equal priority to all hosted applications, so real-time applications in a cloud environment cannot meet their requirements. To meet the demand for overall low latency in cloud and other web services, and particularly to reduce the I/O bottleneck at the storage level, the novel idea of autonomous L3 cache technology is proposed. Autonomous L3 cache technology uses local memory as a dedicated block-device cache for a specific application, thus prioritizing it over the other hosted ones. Evaluation shows a performance improvement of 5-8 times in terms of timeliness in the given setup.
  • Abu Elenin Sherihan, Masato Kitakami
    2012 Volume 7 Issue 2 Pages 709-720
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    A Grid monitoring system is differentiated from a general monitoring system in that it must be scalable across wide-area networks, include a large number of heterogeneous resources, and be integrated with other Grid middleware in terms of naming and security. Grid monitoring is the act of collecting information concerning the characteristics and status of resources of interest. The Grid Monitoring Architecture (GMA) specification sets out the requirements and constraints of any implementation. It is based on a simple consumer/producer architecture with an integrated system registry, and it logically separates the transmission of monitoring data from data discovery. Many systems implement GMA, but all have drawbacks such as difficult installation, a single point of failure, or loss of message control. We therefore designed a simple model after analyzing the requirements of Grid monitoring and information services, and we propose a Grid monitoring system based on GMA. The proposed system consists of producers, a registry, consumers, and a failover registry. The registry matches consumers with one or more producers, so it is the main monitoring tool; the failover registry recovers from any failure of the main registry. The proposed system is built on Java Servlets and the SQL query language, which makes it flexible and scalable. We address problems of previous Grid monitoring systems, such as the lack of data-flow control and the single point of failure in R-GMA and the difficulty of installing MDS4. First, we solve the single point of failure by adding a failover registry that can recover from any failure of the registry node. Second, we design the system components to be easy to install and maintain: the proposed system combines few subsystems, and its update frequency is low. Third, load balancing is added to the system to cope with message overload. We evaluate the performance of the system by measuring response time, utilization, and throughput; all results are better with load balancing than without it. Finally, we compare the proposed system with three other monitoring systems, and we also compare four types of load-balancing algorithms.
  • Mohammed Sahli, Tetsuo Shibuya
    2012 Volume 7 Issue 2 Pages 721-727
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Exact string matching is the problem of finding all occurrences of a pattern P in a text T. The problem is well known, and many sophisticated algorithms have been proposed. Fast exact string matching algorithms have been described since the 80s (e.g., the Boyer-Moore algorithm and its simplified version, the Boyer-Moore-Horspool algorithm) and have been regarded as the standard benchmarks in the practical exact string search literature. In this paper, we propose two algorithms, MSBM (Max-Shift BM) and MSH (Max-Shift BMH), both based on a combination of the bad-character rule of the right-most character used in the Boyer-Moore-Horspool algorithm, the extended bad-character rule, and the good-suffix rule used in the Gusfield algorithm, a modification of the Boyer-Moore algorithm. Only a small amount of extra space and preprocessing time is needed compared with the BM and BMH algorithms. Nonetheless, empirical results on different data (DNA, protein, and Web text) with different pattern lengths show that both MSBM and MSH are very fast in practice; the MSBM algorithm usually won against the other algorithms.
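    For reference, a compact Python version of the classical Boyer-Moore-Horspool baseline that MSBM and MSH build on (this is the textbook algorithm, not the proposed ones):

        # Boyer-Moore-Horspool: shift by the bad-character rule applied to the
        # right-most text character under the pattern.
        def bmh_search(text, pattern):
            m, n = len(pattern), len(text)
            if m == 0 or m > n:
                return []
            # Distance from each character's last occurrence (except the final
            # position) to the end of the pattern; absent characters shift by m.
            shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
            hits, s = [], 0
            while s <= n - m:
                if text[s:s + m] == pattern:
                    hits.append(s)
                s += shift.get(text[s + m - 1], m)
            return hits

        print(bmh_search("GATTACAGATTACA", "TACA"))  # [3, 10]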
  • Satoshi Fujita, Yang Yang
    2012 Volume 7 Issue 2 Pages 728-736
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we consider the problem of recognizing the shape of dynamic event regions in wireless sensor networks (WSNs). A key idea of our proposed algorithm is to use the notion of a distance field, defined by the hop count from the boundary of event regions. By constructing such a field, we can easily identify several critical points in each event region (e.g., local maxima and saddle points) that can effectively characterize the shape and the movement of event regions. The communication cost required for the shape recognition of dynamic event regions decreases significantly compared with a naive centralized scheme, by selectively allowing those critical points to send a certification message to the boundary of the event region and a notification message to the data aggregation points. The performance of the proposed scheme is evaluated by simulations. The simulation results indicate that: 1) the number of message transmissions during shape recognition decreases significantly compared with a naive centralized scheme; 2) the accuracy of shape recognition depends on the density of the underlying WSN, while it is robust against the lack of sensors in a particular region of the field; and 3) the proposed event tracking scheme correctly recognizes the movement of an event region with a small number of message transmissions compared to a centralized scheme.
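    A hedged sketch of the distance-field construction on a grid stand-in for the sensor field (in a real WSN the hop counts would be propagated by local message exchange): breadth-first search from the region boundary assigns every event node its hop count, and interior local maxima of the field serve as critical points.

        # Hedged sketch: hop-count distance field of an event region via BFS.
        from collections import deque

        def distance_field(event, rows, cols):
            dist, q = {}, deque()
            for r in range(rows):
                for c in range(cols):
                    if event[r][c] and any(
                            not (0 <= rr < rows and 0 <= cc < cols and event[rr][cc])
                            for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))):
                        dist[(r, c)] = 1  # boundary nodes of the event region
                        q.append((r, c))
            while q:
                r, c = q.popleft()
                for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                    if (0 <= rr < rows and 0 <= cc < cols
                            and event[rr][cc] and (rr, cc) not in dist):
                        dist[(rr, cc)] = dist[(r, c)] + 1
                        q.append((rr, cc))
            return dist  # interior local maxima of dist are candidate critical points

        region = [[0, 1, 1, 1, 0],
                  [1, 1, 1, 1, 1],
                  [0, 1, 1, 1, 0]]
        print(distance_field(region, 3, 5)[(1, 2)])  # 2: two hops from the boundary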
  • Chuzo Iwamoto, Junichi Kishi, Kenichi Morita
    2012 Volume 7 Issue 2 Pages 737-739
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    We study the problem of determining the minimum number of face guards which cover the surface of a polyhedral terrain. We show that ⌊(2n-5)/7⌋ face guards are sometimes necessary to guard the surface of an n-vertex triangulated polyhedral terrain.
  • Ting Ting Qin, Qi Cao, Qi Ying Wei, Satoshi Fujita
    2012 Volume 7 Issue 2 Pages 740-748
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose a new method to realize quick updates of information about shared contents in Peer-to-Peer (P2P) networks. The proposed method combines a hierarchical P2P architecture with a tag-based file management scheme. The hierarchical architecture consists of three layers: the top layer is a collection of central servers, the middle layer a set of sub-servers, and the bottom layer a number of user peers. Indexes of the files held by each user peer are stored at the sub-servers in the middle layer, and the correlation between file indexes and sub-servers is maintained by the central servers using tags. We implemented a prototype of the proposed method in Java and evaluated its performance through simulations using PeerSim 1.0.4. The results of our experiments indicate that the proposed method is a good candidate for “real-time search engines” in P2P systems; e.g., it completes an upload of 10,000 file indexes to the relevant sub-servers in a few minutes and achieves query forwarding to relevant peers within 100 ms.
  • Mitsuhiro Hattori, Nori Matsuda, Takashi Ito, Yoichi Shibata, Katsuyuk ...
    2012 Volume 7 Issue 2 Pages 749-760
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Biometric authentication has been attracting much attention because it is more user-friendly than other authentication methods such as password-based and token-based authentication. However, it intrinsically involves problems of privacy and revocability. To address these issues, new techniques called cancelable biometrics have been proposed and their properties analyzed extensively. Nevertheless, only a few schemes considered provable security, and the provably secure schemes known to date sacrifice user-friendliness because users must carry tokens in order to securely access their secret keys. In this paper, we propose two cancelable biometric protocols, each of which is provably secure and requires no secret-key access by users. As underlying components, we use the Boneh-Goh-Nissim cryptosystem proposed at TCC 2005 and the Okamoto-Takashima cryptosystem proposed at Pairing 2008 to evaluate 2-DNF (disjunctive normal form) predicates on encrypted feature vectors. We define a security model in the semi-honest setting and give a formal proof that our protocols are secure in that model. The revocation process of our protocols can be seen as a new way of utilizing the veiled property of the underlying cryptosystems, which may be of independent interest.
Media (processing) and Interaction
  • Keisuke Tomono, Hajime Katsuyama, Akira Tomono
    2012 Volume 7 Issue 2 Pages 761-769
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    We propose a new method in which scents are emitted through the display screen in the direction of the viewer to enhance the realism of visual images. A thin LED display panel perforated with tiny pores was made for this experiment, and an air control system with a blower was placed behind the screen. We experimentally proved that the direction of airflow could be controlled and that scents could properly travel through the pores to the front side of the screen, toward the viewer. The effectiveness of the scents and their psychological effects were evaluated by the subjects’ biological responses and answers to a questionnaire. Analysis of the biological responses detected more significant changes in skin conductance when the advertisements were presented with visuals and scents than with visuals alone, giving us objective grounds for evaluating the psychological effects of visuals with scents.
  • Ryo Furukawa, Ryusuke Sagawa, Hiroshi Kawasaki, Kazuhiro Sakashita, Ya ...
    2012 Volume 7 Issue 2 Pages 770-782
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    In the present paper, we propose a one-shot scanning system consisting of multiple projectors and cameras for dense entire-shape acquisition of a moving object. One potential application of the proposed system is capturing a moving object at a high frame rate. Since the patterns used for one-shot scanning are usually complicated, and the patterns interfere with each other when projected onto the same object, it is difficult to use multiple sets of patterns for entire-shape acquisition. In addition, the overlapping areas of each scan have gaps, and errors accumulate, so merged shapes are usually noisy and inconsistent. To address this problem, we propose a one-shot shape reconstruction method in which each projector projects a static pattern of parallel lines of one or two colors. Since each projector projects only parallel lines with a small number of colors, the patterns are easily decomposed and detected even if several are projected onto the same object. We also propose a multi-view reconstruction algorithm for the projector-camera system. In the experiment, we built a system consisting of six projectors and six cameras, and dense entire shapes of objects were successfully reconstructed.
  • Masatoshi Sekine, Kurato Maeno
    2012 Volume 7 Issue 2 Pages 783-792
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Recently, it has become more important to monitor the daily activities of the elderly and of children. In this paper, we propose a system for practical activity recognition using the Doppler effect with 24-GHz microwaves. It extracts features from the signals, selects the optimal features, and then classifies activities using a pattern matching technique. Human activities can be sensed simply by placing Doppler sensors on walls or tables, without any body-attached sensors. Performance evaluation shows that our system achieves over ninety percent accuracy on average in classifying eight actions.
  • Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, Tatsuya ...
    2012 Volume 7 Issue 2 Pages 793-804
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The phrase table, a scored list of bilingual phrases, lies at the center of phrase-based machine translation systems. We present a method to directly learn this phrase table from a parallel corpus of sentences that are not aligned at the word level. The key contribution of this work is that while previous methods have generally only modeled phrases at one level of granularity, in the proposed method phrases of many granularities are included directly in the model. This allows for the direct learning of a phrase table that achieves competitive accuracy without the complicated multi-step process of word alignment and phrase extraction that is used in previous research. The model is achieved through the use of non-parametric Bayesian methods and inversion transduction grammars (ITGs), a variety of synchronous context-free grammars (SCFGs). Experiments on several language pairs demonstrate that the proposed model matches the accuracy of the more traditional two-step word alignment/phrase extraction approach while reducing its phrase table to a fraction of its original size.
Computer Networks and Broadcasting
  • Yasuyuki Okumura, Katsuyuki Fujii
    2012 Volume 7 Issue 2 Pages 805-811
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    We propose a scheme for estimating the level of beat noise that adaptively cancels the noise from the received signals, addressing a critical problem: minimizing the bit error rate of OCDMA systems. The scheme is implemented as an adaptive filter based on a Volterra series expansion: the received signal is used to estimate its key components, namely beat noise, the transmitted signal, and additive Gaussian noise. The Volterra series can describe the simultaneous occurrence of optical signals from different transmitters, which causes the beat noise. We show the configuration, operating principles, and performance of the adaptive filter. We also show that the proposed combination of maximum likelihood detection and beat noise estimation reduces the bit error rate, raising system performance toward the level offered by an equivalent system with no beat noise.
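    For context, a discrete second-order truncated Volterra series of the kind such adaptive filters expand on (the standard textbook form; the paper's exact kernels and orders may differ):

        y(n) = h_0 + \sum_{k=0}^{M-1} h_1(k)\, x(n-k)
                   + \sum_{k_1=0}^{M-1} \sum_{k_2=k_1}^{M-1} h_2(k_1, k_2)\, x(n-k_1)\, x(n-k_2)

    Here x is the received signal, h_1 the linear kernel, and h_2 the quadratic kernel; the quadratic term is what lets the filter model the multiplicative mixing of optical signals from different transmitters that produces beat noise.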
  • Achmad Basuki, Achmad Husni Thamrin, Hitoshi Asaeda, Jun Murai
    2012 Volume 7 Issue 2 Pages 812-822
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The tussle in IP multicast, where different enablers have interests that are adverse to each other, has led to a halt in inter-provider multicast deployment and created a situation in which enabling inter-domain multicast routing is considered a deterrent for network providers. This paper presents ODMT (On-demand Inter-domain Multicast Tunneling), an on-demand inter-provider multicast tunneling scheme that is autonomous in operation and manageable through definable policy control. In the architectural design, we propose a new approach to enabling inter-provider multicast by decoupling the control plane and the forwarding plane. Focusing on the control plane without changing the forwarding plane, our solution turns the traditional open multicast service model into a more manageable service model for inter-domain multicast operation, thereby easing Internet-wide multicast deployment.
  • Ervianto Abdullah, Satoshi Fujita
    2012 Volume 7 Issue 2 Pages 823-830
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The objective of Peer-to-Peer content delivery networks is to deliver copyrighted contents to paying clients in an efficient and secure manner. To protect such contents from being distributed to unauthorized peers, Lou and Hwang proposed a proactive content poisoning scheme to restrain illegal downloads by unauthorized peers, together with a scheme to identify colluders who illegally leak the contents to such peers. In this paper, we propose three schemes that extend Lou and Hwang's colluder detection scheme in two directions: the first introduces intensive probing to check suspected peers, and the second adopts a reputation system to select reliable (non-colluder) peers as decoys. The performance of the resulting schemes is evaluated by simulation. The simulation results indicate that the proposed schemes detect all colluders about 30% earlier on average than the original scheme while keeping the accuracy of colluder detection at a medium collusion rate.
  • Yong Jin, Nariyoshi Yamai, Kiyohiko Okayama, Motonori Nakamura
    2012 Volume 7 Issue 2 Pages 831-840
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    With the explosive expansion of the Internet, fundamental and popular Internet services such as WWW and e-mail are becoming more and more important and are indispensable for human social activities. As one technique for operating these systems reliably and efficiently, multihomed networks have attracted much attention. However, conventional route selection mechanisms on multihomed networks have problems in terms of the properness of route selection and dynamic traffic balancing, which are two key criteria in applying multihomed networks. In this paper, we propose an improved dynamic route selection mechanism based on multipath DNS (Domain Name System) round-trip time to address these problems. Evaluation results on a WWW system and an e-mail system indicate that the proposal is effective for proper route selection based on network status as well as for dynamic traffic balancing on multihomed networks, and we also confirmed that it resolves the problems that occur with conventional mechanisms.
  • Yang Chen, Ryo Kurachi, Gang Zeng, Hiroaki Takada
    2012 Volume 7 Issue 2 Pages 841-852
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The Controller Area Network (CAN) is widely employed in automotive control system networks, and in the last few years the amount of data in these networks has been increasing rapidly. To improve CAN bandwidth efficiency, scheduling and analysis are considered particularly important. As an effective method, it is known that assigning offsets to CAN messages can reduce their worst-case response time (WCRT). Meanwhile, many commercial CAN controllers are equipped with a priority queue or a first-in-first-out (FIFO) queue for transmitting messages to the CAN bus. However, previous research on the WCRT analysis of CAN messages either assumed a priority queue or did not consider offsets. For this reason, in this paper we propose a WCRT analysis method for CAN messages with assigned offsets in FIFO queues. We first present a critical instant theorem, and then propose two algorithms for WCRT calculation based on it. Experimental results on generated message sets and a real message set validate the effectiveness of the proposed algorithms.
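    For context, the classical fixed-priority CAN response-time recurrence that offset- and FIFO-aware analyses refine (the form from the CAN schedulability literature, e.g., Davis et al.; the paper modifies the interference computation for FIFO queues and offsets):

        w_m^{(i+1)} = B_m + \sum_{k \in hp(m)} \left\lceil \frac{w_m^{(i)} + J_k + \tau_{bit}}{T_k} \right\rceil C_k,
        \qquad R_m = J_m + w_m + C_m

    where C_k is the frame transmission time, T_k the period, J_k the queuing jitter, B_m the blocking time due to lower-priority frames, and \tau_{bit} one bit time; the iteration starts from w_m^{(0)} = B_m and stops at a fixed point.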
  • Tao Xu, Masahiro Watanabe, Masaki Bandai, Takashi Watanabe
    2012 Volume 7 Issue 2 Pages 853-860
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose and implement a cross-layer protocol for ad hoc networks using directional antennas. In the proposed protocol, called the RSSI-based MAC and routing protocol using directional antennas (RMRP), RSSI is used to compute the direction of the receiver and to control the backoff time. Moreover, the backoff time is weighted according to the number of hops from the source node. In addition, simple routing functions are introduced. We implement RMRP on a testbed with an electronically steerable passive array radiator (ESPAR) antenna and IEEE 802.15.4. Experimental results confirm a throughput improvement and show the effectiveness of the proposed RMRP; in particular, RMRP achieves about 2.1 times higher throughput than a conventional random backoff protocol in a multi-hop communication scenario.
  • Tetsuya Arita, Fumio Teraoka
    2012 Volume 7 Issue 2 Pages 861-871
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Network mobility has attracted much attention as a way of providing vehicles such as trains with Internet connectivity. The NEMO Basic Support Protocol (NEMO-BS) supports network mobility. However, our experiment on a train in service showed that with NEMO-BS the handover latency becomes very large if signaling messages are lost due to instability of the wireless link during handover. There are several proposals, such as N-PMIPv6 and N-NEMO, that support network mobility based on network-based localized mobility management, but they have problems such as large tunneling overhead and transmission of handover control messages over the wireless link. This paper proposes PNEMO, a network-based localized mobility management protocol for mobile networks. In PNEMO, mobility management is handled in the wired network, so that signaling messages are not transmitted over the wireless link when handover occurs. This makes handover stable even if the wireless link is unstable during handover. PNEMO uses a single tunnel even if the mobile network is nested. PNEMO is implemented in Linux. The measured performance shows that handover latency is almost constant even if the wireless link is unstable when handover occurs, and that the overhead of PNEMO is negligible in comparison with NEMO-BS.
  • Kotaro Ishitani, Hiroshi Yamamoto, Maki Yamamoto, Katsuyuki Yamazaki
    2012 Volume 7 Issue 2 Pages 872-876
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    On Awashima Island in Niigata prefecture, Japan, the tourist association conducts ecotourism specifically aimed at children, for which providing children with valid educational materials is important. We have therefore proposed an ecotourism support system that can provide video content as educational material using mobile phones and One-seg broadcasting. In an experimental evaluation of the proposed system, the system's content scheduler increased the number of accesses per hour to the ecotour page by offering appropriate content to tourists. The conversion rate of the ecotour page was 30.6%, which is quite high, so One-seg broadcasting is useful for advertising ecotourism.
Information Systems and Applications
  • Kojiro Yano
    2012 Volume 7 Issue 2 Pages 877-881
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Dysregulation of epigenetic mechanisms has been implicated in the pathogenesis of Alzheimer's disease (AD). It has been shown that the epigenetic status of promoter regions can alter gene expression levels, but its influence on correlated gene expression, and the dependence of that influence on the disease, are unclear. Using publicly available microarray and DNA methylation data, this article infers how correlated gene expression in non-demented (ND) and AD brains may be influenced by genomic promoter methylation. Pearson correlation coefficients of expression levels between each of 123 known hypomethylated genes and all other genes in the microarray dataset were calculated, and the mean absolute coefficient was taken as the overall strength of expression correlation for each hypomethylated gene. The distribution of the mean absolute coefficients showed that the hypomethylated genes can be divided into two groups, with mean coefficients above or below 0.15; this division was more evident in the AD brain than in the ND brain. In contrast, hypermethylated genes formed a single dominant group, the majority with mean coefficients below 0.15. These results suggest that the lower the DNA methylation, the higher the correlation of a gene's expression with other genes in the microarray data. The strength of expression correlation was also calculated between known AD risk genes and all other genes; AD risk genes were more likely to have mean absolute correlation coefficients above 0.15 in the AD brain when the evidence for their association with AD was strong, suggesting a link between DNA methylation and AD. In conclusion, DNA methylation status is intimately associated with correlated gene expression, particularly in the AD brain.
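    A minimal sketch of the abstract's summary statistic, assuming a genes-by-samples expression matrix (illustrative, not the paper's pipeline): for one gene, average the absolute Pearson correlations of its expression profile against every other gene.

        # Mean absolute Pearson correlation of one gene against all others.
        import numpy as np

        def mean_abs_correlation(expr, gene_idx):
            corr = np.corrcoef(expr)  # gene-by-gene Pearson coefficients
            r = np.abs(np.delete(corr[gene_idx], gene_idx))
            return r.mean()           # compare against the 0.15 threshold

        expr = np.random.rand(50, 20)  # toy data: 50 genes, 20 samples
        print(mean_abs_correlation(expr, 0))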
  • Koji Ara, Tomoaki Akitomi, Nobuo Sato, Kunio Takahashi, Hideyuki Maeda ...
    2012 Volume 7 Issue 2 Pages 882-894
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    A sensor-based project management process, which uses continuously sensed data on face-to-face communication, was developed for integration into current project management processes. To establish a practical process, a sensing system was applied in two software-development projects involving 123 and 65 employees, respectively, to analyze the relation between work performance and behavioral patterns and to investigate the use of sensor data. A factor defined as “communication richness,” which refers to the amount of communication, was found to correlate with employee performance (job evaluation) in both projects, while other factors, such as “workload,” appeared in only one of the projects. Developers' quality of development (low bug occurrence) was also investigated in one of the projects, and “communication richness” was again found to be a factor in high development quality. Based on this analysis, we propose a four-step sensor-based project management process consisting of analysis, monitoring, inspection, and action, and we evaluated its effectiveness. Through monitoring, it was estimated that some “unplanned” events, such as specification changes and problem solving during a project, could be systematically identified. The cohesion of a network was systematically increased using a communication recommendation scheme, called WorkX, which involves micro-rotation of discussion members based on network topology.
  • Shelly Sachdeva, Subhash Bhalla
    2012 Volume 7 Issue 2 Pages 895-907
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Healthcare professionals have critical needs for general-purpose query capabilities and increasingly require the use of information technology. These query needs cannot be met by form-based user interfaces (or by aids such as query builders). Further, archetype-based Electronic Health Record (EHR) databases are more complex than traditional database systems. The present study examines a new way to support a general-purpose, user-level query language interface for querying EHR data. It presents users with their own view of clinical concepts, without requiring any intricate knowledge of objects or storage structures, and it enables clinicians and researchers to pose general-purpose queries over archetype-based Electronic Health Record systems.
  • Tetsuo Iijima
    2012 Volume 7 Issue 2 Pages 908-911
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    A simple formula for a decision-making process is introduced and applied to airplane accidents: a near miss between a Douglas DC-10-40 and a Boeing 747-400D in 2001, and a collision between a Tupolev 154M and a Boeing 757-200 cargo jet in 2002. The decision-making process is shown as a plot of ln ln (1-y)^{-1} versus ln t, with phase-change ratio y and time t; the process thus follows the diffusion law when the plot is linear. Flight data focused on altitude, from cruising to descending, are applied to the model, and a clear phase change is demonstrated. The timing of the phase change marks the “decision making” point and is interpreted as the time at which pilots start to perform maneuvers. This model may be applicable to a large number of cases involving human factors in decision making.
  • Satoru Hirako, Masafumi Shionyu
    2012 Volume 7 Issue 2 Pages 912-920
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    The functional sites of multidomain proteins are often found at the interfaces of two or more domains; the spatial arrangement of the domains is therefore essential to understanding the functional mechanisms of multidomain proteins. However, experimental determination of the whole structure of a multidomain protein is often difficult owing to flexibility in the inter-domain arrangement. We have developed a score function, named DINE, to detect probable docking poses among those generated in a rigid-body docking simulation. The score function takes into account the binding energy, information about the domain interfaces of homologous proteins, and the end-to-end distance spanned by the domain linker. We examined the performance of DINE on 55 non-redundant known structures of two-domain proteins: near-native docking poses were scored within the top 10 in 65.5% of the test cases, and DINE ranked the near-native poses higher than an existing domain assembly method that also uses binding energy and linker-distance restraints. These results demonstrate that the domain-interface restraints of DINE are quite effective in selecting near-native domain assemblies.
  • Tomoshige Ohno, Shigeto Seno, Yoichi Takenaka, Hideo Matsuda
    2012 Volume 7 Issue 2 Pages 921-927
    Published: 2012
    Released on J-STAGE: June 15, 2012
    JOURNAL FREE ACCESS
    Alternative splicing plays an important role in eukaryotic gene expression by producing diverse proteins from a single gene, and predicting how genes are transcribed is of great biological interest. To this end, massively parallel whole-transcriptome sequencing, often referred to as RNA-Seq, is becoming widely used and is revolutionizing the cataloging of isoforms using a vast number of short mRNA fragments called reads. Conventional RNA-Seq analysis methods typically align reads onto a reference genome (mapping) to capture which isoforms each gene yields and how much of every isoform is expressed in an RNA-Seq dataset. However, a considerable number of reads cannot be mapped uniquely. These so-called multireads, which map onto multiple locations because of short read lengths and analogous sequences, inflate the uncertainty as to how genes are transcribed, causing inaccurate gene expression estimates and incorrect isoform prediction. To cope with this problem, we propose a method for isoform prediction by iterative mapping. The positions from which multireads originate can be estimated from expression-level information, whereas quantifying isoform-level expression requires accurate mapping; these procedures are mutually dependent, and therefore remapping reads is essential. By iterating this cycle, our method estimates gene expression levels more precisely and hence improves predictions of alternative splicing. Our method simultaneously estimates isoform-level expression by computing how many reads originate from each candidate isoform using an EM algorithm within each gene. To validate the effectiveness of the proposed method, we compared its performance with conventional methods on an RNA-Seq dataset derived from a human brain. The proposed method had a precision of 66.7% and outperformed conventional methods in terms of the isoform detection rate.
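    A hedged sketch of the core E/M steps for apportioning multireads among candidate isoforms (a toy model; the paper additionally iterates remapping and works within each gene):

        # Toy EM for isoform abundances: each read lists the isoforms it is
        # compatible with; the E-step splits each read fractionally in
        # proportion to current abundances, the M-step renormalizes.
        def em_isoform_abundance(reads, n_isoforms, iters=50):
            theta = [1.0 / n_isoforms] * n_isoforms
            for _ in range(iters):
                counts = [0.0] * n_isoforms
                for compatible in reads:              # E-step
                    z = sum(theta[i] for i in compatible)
                    for i in compatible:
                        counts[i] += theta[i] / z
                total = sum(counts)
                theta = [c / total for c in counts]   # M-step
            return theta

        # One read unique to isoform 0, one unique to isoform 1,
        # and one multiread compatible with both.
        print(em_isoform_abundance([[0], [1], [0, 1]], 2))  # -> [0.5, 0.5]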