Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 9, Issue 1
Displaying 1-13 of 13 articles from this issue
Hardware and Devices
  • Jingcheng Zhuang, Robert Bogdan Staszewski
    2014 Volume 9 Issue 1 Pages 1-14
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    This paper presents an all-digital phase-locked loop (ADPLL) architecture that significantly reduces power consumption by simplifying its phase-locking and phase-detection mechanisms. The predictive nature of the ADPLL, which estimates when the next reference-clock edge will occur, is exploited to reduce the timing range, and thus the complexity, of the fractional part of the phase detection as implemented by a time-to-digital converter (TDC), and to ease the clock-retiming circuit. In addition, the integer part, which counts the DCO clock edges, can be disabled to save power once the loop has achieved lock. The architecture can be widely used in fractional-N frequency multiplication and frequency/phase modulation. The presented principles and techniques have been validated through extensive behavioral simulations as well as fabricated IC chips.
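The edge-prediction idea can be illustrated numerically. The following is a rough sketch, not the authors' implementation, with all parameters (reference period, jitter, TDC resolution) invented: when the loop predicts the next reference edge, the TDC only has to measure the small prediction residual rather than span a full clock period.

```python
# Sketch: predicting the next reference edge shrinks the range a TDC must cover.
import random

random.seed(0)

T_REF = 1000.0   # reference period (arbitrary time units, invented)
TDC_RES = 1.0    # TDC resolution (invented)
JITTER = 3.0     # edge-timing uncertainty (invented)

predicted = 0.0
residuals = []
for k in range(100):
    actual = k * T_REF + random.gauss(0.0, JITTER)  # actual edge arrival
    residuals.append(abs(actual - predicted))        # what the TDC must measure
    predicted = actual + T_REF                       # predict the next edge

# Without prediction the TDC range is a full period; with prediction it only
# needs to span a few standard deviations of jitter.
full_range_bins = T_REF / TDC_RES
predicted_range_bins = max(residuals) / TDC_RES
```

Here the predicted TDC range is over an order of magnitude smaller than the full-period range, which is the complexity reduction the abstract refers to.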
  • Tsung-Yi Ho
    2014 Volume 9 Issue 1 Pages 15-25
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Microfluidic biochips are replacing conventional biochemical analyzers and can integrate all the basic functions for biochemical analysis on-chip. “Digital” microfluidic biochips (DMFBs) manipulate liquids not as a continuous flow but as discrete droplets on a two-dimensional array of electrodes. Basic microfluidic operations, such as mixing and dilution, are performed on the array by routing the corresponding droplets over a series of electrodes. The challenges facing biochips are similar to those faced by microelectronics some decades ago. To meet the challenges of increasing design complexity, computer-aided design (CAD) tools are being developed for DMFBs. This paper provides an overview of DMFBs and describes emerging CAD tools for the automated synthesis and optimization of DMFB designs, from fluidic-level synthesis and chip-level design to testing. Design automation is expected to alleviate the burden of manually optimizing bioassays, time-consuming chip design, and costly testing and maintenance procedures. With the assistance of CAD tools, users can concentrate on developing and abstracting nanoscale bioassays while leaving chip optimization and implementation details to the tools.
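The droplet-routing step mentioned above can be sketched as a shortest-path search on the electrode grid. This is an illustrative toy (grid size, endpoints, and obstacles are invented, and real DMFB routers must also handle timing and droplet-interference constraints): BFS finds a shortest sequence of adjacent electrodes from source to target while avoiding cells occupied by other droplets.

```python
# Toy droplet router: BFS over a 2D electrode array.
from collections import deque

def route_droplet(rows, cols, src, dst, blocked):
    """Return a shortest electrode path from src to dst, or None."""
    queue = deque([src])
    parent = {src: None}
    while queue:
        cell = queue.popleft()
        if cell == dst:  # reconstruct the path by walking parents back to src
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

# Route around a partial wall of occupied electrodes in row 2.
path = route_droplet(5, 5, (0, 0), (4, 4), blocked={(2, 1), (2, 2), (2, 3)})
```

Because the wall leaves both ends of row 2 open, the droplet still reaches the target in the minimum 9 electrode steps.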
  • Yuko Hara-Azumi, Toshinobu Matsuba, Hiroyuki Tomiyama, Shinya Honda, H ...
    2014 Volume 9 Issue 1 Pages 26-34
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Due to the increasing diversity and complexity of embedded systems, both high-level synthesis (HLS) and FPGAs have become prevalent as means of enhancing design productivity. Although a number of FPGA-oriented optimizations, particularly for resource binding, have been studied in HLS, the technology is still immature because most of these works overlook important facts about resource sharing. In this paper, we quantitatively evaluate the effects of several resource-sharing approaches in HLS for FPGA-based designs, using practically large benchmarks on various FPGA devices. Through this comprehensive evaluation, we examine the effects on clock frequency, execution time, area, and multiplexer distribution, and present several findings that are essential for further advancing practical HLS technology.
Computing
  • Tetsuro Horikawa, Jin Nakazawa, Kazunori Takashio, Hideyuki Tokuda
    2014 Volume 9 Issue 1 Pages 35-47
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    The spread of GPU-accelerated applications on PCs can seriously degrade the user experience, for example by dropping frames during video playback, because arbitrary processor selection lets applications compete for the same GPU. In this paper, we propose a processor-assignment system that assigns processors to real applications according to condition-based rules, without modifying the applications. To demonstrate the feasibility of our concept, we implemented a prototype of the centralized processor-assignment mechanism, called Torta. Our experiment using eight practical applications has shown that Torta achieves binary-compatible processor switching with an average performance penalty of only 0.2%. In a particular case where a video playback application runs alongside three other GPU-intensive applications, our method enables users to enjoy video playback at 60 frames per second (FPS), whereas the FPS drops to 14 without the mechanism. This paper presents the design and implementation of Torta on Windows 7 and concludes that our mechanism increases the efficiency of computational resource usage on PCs and thus improves the overall user experience.
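The condition-based rules can be sketched abstractly. This is a minimal, invented illustration of the rule idea only (application attributes, rule predicates, and processor names are all hypothetical, and the real Torta operates on unmodified Windows binaries): each rule is a predicate paired with a processor, checked in priority order, so a foreground video player wins the GPU over batch GPU jobs.

```python
# Sketch of condition-based processor assignment (all names invented).
def assign_processors(apps, rules):
    """apps: list of attribute dicts; rules: (predicate, processor) pairs
    checked in priority order. Unmatched applications fall back to the CPU."""
    assignment = {}
    for app in apps:
        for predicate, processor in rules:
            if predicate(app):
                assignment[app["name"]] = processor
                break
        else:
            assignment[app["name"]] = "CPU"
    return assignment

rules = [
    (lambda a: a["kind"] == "video" and a["foreground"], "GPU"),
    (lambda a: a["kind"] == "compute", "CPU"),  # demote batch GPU jobs
]
apps = [
    {"name": "player", "kind": "video", "foreground": True},
    {"name": "miner", "kind": "compute", "foreground": False},
]
assignment = assign_processors(apps, rules)
```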
  • Lei Ma, Cyrille Artho, Hiroyuki Sato
    2014 Volume 9 Issue 1 Pages 48-60
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Despite the importance of distributed applications today, their verification and analysis remain challenging: they involve large combinatorial state spaces, interactive network communication between peers, and concurrency. Although dynamic analysis tools exist for analyzing the runtime behavior of a single-process application, they provide no way to analyze a distributed application as a whole, where multiple processes run simultaneously. Centralization is a general solution that transforms a multi-process application into a single-process one that existing tools can analyze directly. In this paper, we improve the accuracy of centralization and extend it into a general framework for analyzing distributed applications with multiple versions. First, we formalize the version-conflict problem and present a simple solution; we then propose an optimized solution for resolving class version conflicts during centralization. Our techniques share common code whenever possible while keeping the version space of each component application separate. We also improve and discuss centralization issues such as startup semantics and static-field transformation. We implemented our centralization tool and applied it to several network benchmarks. Experiments in which existing tools are run on the centralized applications demonstrate the usefulness of our automatic centralization tool, showing that centralization enables these tools to analyze distributed applications with multiple versions.
  • Kazuyuki Hara, Kentaro Katahira
    2014 Volume 9 Issue 1 Pages 61-66
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    In on-line gradient descent learning, the local property of the derivative term of the output function can cause slow convergence. Improving the derivative term, for instance by using the natural gradient, has been proposed to speed up convergence. Besides such sophisticated methods, we propose an algorithm that replaces the derivative term with a constant and show that this greatly increases convergence speed when the learning step size is less than 2.7, which is near the optimal learning step size. The proposed algorithm is inspired by linear perceptron learning and avoids the locality of the derivative term. We derive closed deterministic differential equations using a statistical-mechanics method and show the validity of the theoretical results by comparing them with computer simulations. In real problems, the optimal learning step size is not given in advance, so the learning step size must be kept small; the proposed method is useful in this case.
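The core idea can be sketched in a simplified setting (a single tanh unit learning a teacher of the same form; network size, step counts, and initialization are all invented, and this is not the paper's exact model or its statistical-mechanics analysis): the standard on-line update scales the error by the derivative g'(u), while the proposed variant replaces that derivative with a constant, as in linear perceptron learning.

```python
# Sketch: on-line learning with the derivative term replaced by a constant.
import math
import random

random.seed(1)
N = 50
g = math.tanh
teacher = [1.0 / math.sqrt(N)] * N

def run(eta, steps, constant_derivative):
    w = [0.1 / math.sqrt(N)] * N
    for _ in range(steps):
        x = [random.gauss(0.0, 1.0) for _ in range(N)]
        u = sum(wi * xi for wi, xi in zip(w, x))
        t = g(sum(vi * xi for vi, xi in zip(teacher, x)))
        # Proposed rule: use a constant instead of tanh'(u) = 1 - tanh(u)^2.
        slope = 1.0 if constant_derivative else 1.0 - g(u) ** 2
        err = t - g(u)
        for i in range(N):
            w[i] += (eta / N) * err * slope * x[i]
    # Proxy for learning progress: squared distance to the teacher weights.
    return sum((wi - vi) ** 2 for wi, vi in zip(w, teacher))

err_standard = run(eta=1.0, steps=3000, constant_derivative=False)
err_constant = run(eta=1.0, steps=3000, constant_derivative=True)
```

Both rules reduce the initial student-teacher distance; the paper's analysis concerns how quickly they do so as a function of the learning step size.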
  • Ryota Miyata, Toru Aonishi, Jun Tsuzurugi, Koji Kurata
    2014 Volume 9 Issue 1 Pages 67-72
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Many associative memory models with synaptic decay, such as the forgetting model and the zero-order decay model, have been proposed and studied. Previous studies showed the relation between the storage capacity C and the synaptic decay coefficient α in each synaptic decay model; however, with few exceptions, they did not compare the retrieval performance of different synaptic decay models. We formulate the associative memory model with β-th-order synaptic decay as an extension of the zero-order decay model. The parameter β denotes the synaptic decay order, i.e., the degree of the synaptic decay term, which enables us to compare the retrieval performance of different synaptic decay models. Using numerical simulations, we investigate the relation between the synaptic decay coefficient α and the storage capacity C of the network while varying the synaptic decay order β. The results show that the properties of the synaptic decay model are constant for a large decay order β. Moreover, we search for the minimum β that avoids overloading and the optimal β that maximizes the network retrieval performance. The minimum integer value of β that avoids overloading is -1. The optimal integer value of β is 1, i.e., the degree of the forgetting model, and the suboptimal integer β is 0, i.e., that of the zero-order synaptic decay model.
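A simplified numerical sketch of the setting (not the authors' exact formulation; network size, decay coefficient, and pattern count are invented): Hebbian storage where, after each new pattern, every synapse decays by a term proportional to |J|^β with the sign of J, so β = 1 corresponds to exponential forgetting and β = 0 to a constant-magnitude decay.

```python
# Toy Hopfield-style storage with a beta-th-order synaptic decay term.
import random

random.seed(2)
N = 100

def store_with_decay(patterns, alpha, beta):
    J = [[0.0] * N for _ in range(N)]
    for xi in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    sign = 1.0 if J[i][j] >= 0 else -1.0
                    decay = alpha * (abs(J[i][j]) ** beta) * sign
                    J[i][j] += xi[i] * xi[j] / N - decay
    return J

def overlap(J, xi):
    # One synchronous update from the stored pattern; overlap 1.0 = perfect recall.
    s = [1 if sum(J[i][j] * xi[j] for j in range(N)) >= 0 else -1
         for i in range(N)]
    return sum(si * x for si, x in zip(s, xi)) / N

patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(5)]
J = store_with_decay(patterns, alpha=0.01, beta=1)  # beta=1: forgetting model
recent = overlap(J, patterns[-1])  # most recently stored pattern
```

At this low loading (5 patterns in a 100-neuron network) the most recent pattern is recalled almost perfectly; the paper studies how capacity varies with α as β changes.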
  • Yu Liu, Kento Emoto, Kiminori Matsuzaki, Zhenjiang Hu
    2014 Volume 9 Issue 1 Pages 73-82
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    The MapReduce programming model has attracted much enthusiasm in both industry and academia, largely because it simplifies the implementation of many data-parallel applications. Despite the simplicity of the programming model, many applications are hard to implement in MapReduce because of their inherent computational dependencies. In this paper, we propose a new approach that uses the accumulate programming pattern over MapReduce to handle a large class of problems that cannot simply be divided into independent sub-computations. With this accumulate pattern, many problems that have computational dependencies can be easily expressed, and the resulting programs are transformed into MapReduce programs executed on large clusters. Users without much knowledge of MapReduce can write programs in a sequential manner and still obtain efficient and scalable MapReduce programs. We describe the programming interface of our accumulate framework, explain how a user-specified accumulate computation is transformed into an efficient MapReduce program, and illustrate the usefulness and efficiency of the framework through experiments and evaluations.
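The flavor of the idea can be shown with an in-memory sketch (not the authors' framework, which targets real MapReduce clusters): a computation with a left-to-right dependency, here prefix sums, is split into independent per-chunk "map" tasks, the chunk totals are combined in a "reduce" step, and a second map pass finishes each chunk with its offset.

```python
# Sketch: an accumulative computation (prefix sums) in MapReduce style.
from itertools import accumulate

def mapreduce_prefix_sums(xs, chunks=4):
    size = (len(xs) + chunks - 1) // chunks
    parts = [xs[i:i + size] for i in range(0, len(xs), size)]
    # Map phase: each chunk independently computes its local prefix sums.
    local = [list(accumulate(p)) for p in parts]
    # Reduce phase: combine chunk totals into per-chunk starting offsets.
    offsets = [0] + list(accumulate(p[-1] for p in local))[:-1]
    # Second map phase: shift each chunk by its offset.
    return [v + off for p, off in zip(local, offsets) for v in p]

result = mapreduce_prefix_sums(list(range(1, 17)))
```

Despite the sequential dependency in the definition of prefix sums, every phase here is data-parallel, which is the property the accumulate pattern exploits.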
Media (processing) and Interaction
  • Naoya Inoue, Kentaro Inui
    2014 Volume 9 Issue 1 Pages 83-110
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Abduction is desirable for many natural language processing (NLP) tasks. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life NLP problems, as of yet, no existing approach to abduction has achieved the efficiency necessary to be a practical solution for large-scale reasoning on real-life problems. In this paper, we propose an efficient solution for large-scale abduction. The contributions of our study are as follows: (i) we propose an efficient method of cost-based abduction in first-order predicate logic that avoids computationally expensive grounding procedures; (ii) we formulate the best-explanation search problem as an integer linear programming optimization problem, making our approach extensible; (iii) we show how cutting plane inference, which is an iterative optimization strategy developed in operations research, can be applied to make abduction in first-order logic tractable; and (iv) the abductive inference engine presented in this paper is made publicly available.
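The cost-based abduction setting can be illustrated with a toy example (rules, assumables, and costs are invented, and brute-force enumeration stands in for the paper's ILP formulation and cutting-plane inference): find the cheapest set of assumptions that, together with the rules, entails the observations.

```python
# Toy cost-based abduction by brute force (the paper instead solves an ILP).
from itertools import chain, combinations

assumables = {"rained": 8.0, "sprinkler_on": 5.0, "hosed": 9.0}
rules = {  # head <- any one of the bodies
    "grass_wet": [{"rained"}, {"sprinkler_on"}, {"hosed"}],
    "street_wet": [{"rained"}],
}
observations = {"grass_wet", "street_wet"}

def entails(assumed):
    known = set(assumed)
    changed = True
    while changed:  # forward chaining to a fixed point
        changed = False
        for head, bodies in rules.items():
            if head not in known and any(b <= known for b in bodies):
                known.add(head)
                changed = True
    return observations <= known

def best_explanation():
    best, best_cost = None, float("inf")
    names = list(assumables)
    subsets = chain.from_iterable(
        combinations(names, k) for k in range(len(names) + 1))
    for subset in subsets:
        cost = sum(assumables[a] for a in subset)
        if cost < best_cost and entails(subset):
            best, best_cost = set(subset), cost
    return best, best_cost

explanation, cost = best_explanation()
```

Assuming only "rained" explains both observations at cost 8.0, beating the cheaper "sprinkler_on", which cannot explain the wet street; the paper's contribution is making this search tractable at scale in first-order logic.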
  • Norihide Kitaoka, Yuji Kinoshita, Sunao Hara, Chiyomi Miyajima, Kazuya ...
    2014 Volume 9 Issue 1 Pages 111-120
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    We regarded a dialog strategy for information retrieval as a graph-search problem and proposed several novel dialog strategies that can recover from misrecognition through a spoken dialog that traverses the graph. To recover from misrecognition without seeking confirmation, our system keeps multiple understanding hypotheses at each turn and searches for a globally optimal hypothesis in the graph, whose nodes express understanding states across user utterances over a whole dialog. In the search, we used a new criterion based on efficiency of information retrieval and consistency with the understanding hypotheses, which is also used to select an appropriate system response. We showed that our system produces more efficient and natural dialogs than previous ones.
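The global-search idea can be sketched schematically (hypotheses, scores, and the consistency bonus are all invented, and this exhaustive toy search stands in for the paper's criterion and graph formulation): instead of committing to the 1-best recognition result at each turn, keep several scored hypotheses per turn and pick the path through them that maximizes the total of local scores plus cross-turn consistency.

```python
# Sketch: global search over per-turn understanding hypotheses.
# hypotheses[t] = list of (interpretation, local recognition score) for turn t.
hypotheses = [
    [("genre=italian", 0.9), ("genre=indian", 0.65)],
    [("area=north", 0.5), ("area=downtown", 0.8)],
    [("price=cheap", 0.7), ("price=mid", 0.4)],
]
# Bonus for pairs of interpretations that are mutually consistent (invented).
consistency = {("genre=indian", "area=downtown"): 0.3}

def best_path(hypotheses):
    best = {(): 0.0}  # partial path -> accumulated score
    for turn in hypotheses:
        nxt = {}
        for path, score in best.items():
            for interp, local in turn:
                bonus = sum(consistency.get((p, interp), 0.0) for p in path)
                cand = score + local + bonus
                new_path = path + (interp,)
                if cand > nxt.get(new_path, float("-inf")):
                    nxt[new_path] = cand
        best = nxt
    return max(best.items(), key=lambda kv: kv[1])

path, score = best_path(hypotheses)
```

Note that the globally best path picks the locally second-best "genre=indian" because its consistency with "area=downtown" outweighs the per-turn score gap, which is exactly the kind of recovery from misrecognition described above.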
  • Takashi Isozaki
    2014 Volume 9 Issue 1 Pages 121-131
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    Methods of statistical causal discovery that use conditional independence (CI) tests are attractive because of their time efficiency and their applicability to latent-variable systems. However, they often suffer worse inference results than other approaches because of statistical errors in the CI tests. We consider part of these errors to stem from statistically weak violations of a commonly used assumption, the causal faithfulness condition. In this study, we propose a causal discovery algorithm that reduces the number of unnecessary CI tests and thus provides accurate and fast inference without loss of theoretical correctness. We also introduce unreliable directions, which reduce orientation errors caused by the locality of CI tests in the algorithm. Further, we provide simulations that demonstrate the performance of the proposed algorithm on discrete probability systems and continuous linear structural equation models.
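The CI-test-based approach can be sketched with the skeleton phase of PC-style constraint-based discovery, the family this work builds on (this is a generic textbook sketch, not the authors' algorithm, and a hand-made independence oracle for a known chain X → Y → Z stands in for statistical CI tests on data):

```python
# Sketch: skeleton phase of constraint-based causal discovery with a CI oracle.
from itertools import combinations

variables = ["X", "Y", "Z"]

def ci_oracle(a, b, cond):
    # Ground truth is the chain X -> Y -> Z:
    # X and Z are independent given Y, and no other independence holds.
    return {a, b} == {"X", "Z"} and "Y" in cond

def skeleton(variables, indep):
    edges = {frozenset(p) for p in combinations(variables, 2)}
    for size in range(len(variables) - 1):  # conditioning sets of growing size
        for edge in sorted(edges, key=sorted):
            a, b = sorted(edge)
            others = [v for v in variables if v not in edge]
            for cond in combinations(others, size):
                if indep(a, b, set(cond)):
                    edges = edges - {edge}  # independence found: drop the edge
                    break
    return edges

result = skeleton(variables, ci_oracle)
```

With a perfect oracle the spurious X–Z edge is removed and only the true adjacencies remain; the paper's concern is that real CI tests make statistical errors here, so performing fewer, better-chosen tests improves accuracy.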
  • Kazuhisa Miwa, Hitoshi Terai, Nana Kanzaki, Ryuichi Nakaike
    2014 Volume 9 Issue 1 Pages 132-140
    Published: 2014
    Released on J-STAGE: March 15, 2014
    JOURNAL FREE ACCESS
    We present an intelligent tutoring system that teaches natural deduction to undergraduate students. An expert problem solver in the system provides basic instructional help, such as suggesting the rule to use in the next step of solving a problem and indicating the inference drawn by applying that rule; the system thus uses a complete problem solver as an expert instructor. Students learning with our tutoring system can vary the degree of help they receive (from low to high and vice versa). Empirical evaluation showed that the system enhanced participants' problem-solving performance during the learning phase, and these performance gains were carried over to the post-test phase. Analysis of participants' interactions with the system revealed between-participants adaptation: participants with lower scores learned using higher levels of assistance than those with higher scores. The analysis also revealed within-participants adaptation: students adaptively changed the level of support according to their learning progress and the difficulty of the problem.
Information Systems and Applications