Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 8, Issue 1
Displaying 1-21 of 21 articles from this issue
Computing
  • Daniel Sangorrín, Shinya Honda, Hiroaki Takada
    2013 Volume 8 Issue 1 Pages 1-17
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Dual-OS communications allow a real-time operating system (RTOS) and a general-purpose operating system (GPOS)—sharing the same processor through virtualization—to collaborate in complex distributed applications. However, they also introduce new threats to the reliability (e.g., memory and time isolation) of the RTOS that need to be considered. Traditional dual-OS communication architectures follow essentially the same conservative approach, which consists of extending the virtualization layer with new communication primitives. Although this approach may address the aforementioned reliability threats, it imposes considerable communication overhead through unnecessary data copies and context switches.
    In this paper, we propose a new dual-OS communications approach able to accomplish efficient communications without compromising the reliability of the RTOS. We implemented our architecture on a physical platform using a highly reliable dual-OS system (SafeG) which leverages ARM TrustZone hardware to guarantee the reliability of the RTOS. We observed from the evaluation results that our approach is effective at minimizing communication overhead while satisfying the strict reliability requirements of the RTOS.
    Download PDF (1214K)
  • Tetsuo Imai, Atsushi Tanaka
    2013 Volume 8 Issue 1 Pages 18-24
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Recent studies have revealed that some social and technological network formations can be represented as network formation games played by selfish agents. In general, the topologies formed by selfish agents are worse than or equal to those formed by a centralized designer in terms of total social welfare. Measures such as the price of anarchy are known for evaluating the inefficiency of solutions obtained by selfish agents relative to the socially optimal solution. In this paper, we introduce the expected price of anarchy, proposed as a valid measure for evaluating the inefficiency of dynamic network formation games whose solution space is divided into basins of multimodal sizes. Moreover, through computer simulations we show that it captures the average-case inefficiency of dynamic network formation games, which is missed by the two previous measures.
    Download PDF (375K)
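For readers unfamiliar with the measures named in the abstract above: the classical price of anarchy is the ratio of the worst equilibrium's social cost to the optimal social cost, and an expected variant can weight each equilibrium by the relative size of its basin of attraction. A minimal sketch of that idea (an illustrative formulation, not the authors' model):

```python
def price_of_anarchy(equilibrium_costs, optimal_cost):
    """Classical PoA: worst equilibrium cost over the social optimum."""
    return max(equilibrium_costs) / optimal_cost

def expected_price_of_anarchy(equilibrium_costs, basin_sizes, optimal_cost):
    """Weight each equilibrium by the relative size of its basin of
    attraction, approximating the average-case inefficiency of the
    dynamics (hypothetical formulation for illustration only)."""
    total = sum(basin_sizes)
    expected_cost = sum(c * s / total
                        for c, s in zip(equilibrium_costs, basin_sizes))
    return expected_cost / optimal_cost

# Two equilibria: cost 10 reached 90% of the time, cost 30 reached 10%.
costs, basins, opt = [10.0, 30.0], [9, 1], 10.0
print(price_of_anarchy(costs, opt))                    # 3.0
print(expected_price_of_anarchy(costs, basins, opt))   # 1.2
```

The expected variant (1.2) reflects that the bad equilibrium is rarely reached by the dynamics, which the worst-case ratio (3.0) ignores.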
  • Naoki Fukuta
    2013 Volume 8 Issue 1 Pages 25-31
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    A multi-unit combinatorial auction is a combinatorial auction in which some items are indistinguishable from one another. Although the mechanism can be applied to dynamic electricity auctions and various other purposes, it is difficult to apply to large-scale auction problems because of its computational intractability. In this paper, I present an idea for, and an analysis of, an approximate allocation and pricing algorithm capable of handling multi-unit auctions. The analysis shows that the algorithm effectively produces the approximate allocations needed for pricing. Furthermore, the algorithm can be seen as an approximation of the VCG (Vickrey-Clarke-Groves) mechanism that satisfies the budget-balance condition and bidders' individual rationality without unrealistic assumptions about bidders' behavior. I show that the proposed allocation algorithm produced good allocations for problems that could not easily be solved by ordinary LP solvers under hard time constraints.
    Download PDF (164K)
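The winner-determination problem behind the abstract above is NP-hard in general, which is why approximate allocation matters. A generic greedy sketch of approximate allocation for identical units (an illustration of the problem class, not the paper's algorithm; the all-or-nothing bid format is an assumption):

```python
def greedy_allocation(bids, supply):
    """Greedy approximate winner determination for a multi-unit auction.

    bids:   list of (bidder, units_wanted, total_price) all-or-nothing bids
    supply: number of identical units available

    Rank bids by price per unit and accept greedily while supply remains.
    """
    winners, remaining = [], supply
    for bidder, units, price in sorted(bids, key=lambda b: b[2] / b[1],
                                       reverse=True):
        if units <= remaining:
            winners.append(bidder)
            remaining -= units
    return winners

bids = [("a", 3, 30), ("b", 2, 24), ("c", 2, 10)]
print(greedy_allocation(bids, 4))  # ['b', 'c']
```

With 4 units, bidder "b" (12/unit) is accepted first; "a" no longer fits, so "c" takes the remainder.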
  • Akira Ura, Daisaku Yokoyama, Takashi Chikayama
    2013 Volume 8 Issue 1 Pages 32-40
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    It is difficult to fully utilize the parallelism of large-scale computing environments in alpha-beta search. Naive parallel execution of subtrees results in much less pruning than is possible in sequential execution, which may even degrade total performance. To overcome this difficulty, we propose a two-level task scheduling policy in which all tasks are classified into two priority levels based on the necessity of their results. Low priority level tasks are executed only after all currently executable high priority level tasks have started. When new high priority level tasks are generated, the execution of low priority level tasks is suspended so that the high priority level tasks can be executed. We suggest tasks be classified into the two levels based on the Young Brothers Wait Concept, which is widely used in parallel alpha-beta search. The experimental results revealed that the scheduling policy suppresses the performance degradation caused by executing tasks whose results eventually prove unnecessary. We found the new policy improved performance when task granularity was sufficiently large.
    Download PDF (513K)
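The two-level policy described above can be reduced to a simple priority discipline: runnable high-priority tasks always start before any low-priority task. A toy sketch of that ordering (illustrative only; the paper's scheduler, task suspension, and YBW classification are far richer):

```python
import heapq

# Following the Young Brothers Wait idea, the eldest brother at each
# node would be classified HIGH; younger brothers wait as LOW.
HIGH, LOW = 0, 1

def run(tasks):
    """Toy two-level scheduler: pop every currently runnable
    high-priority task before any low-priority one.

    tasks: list of (priority, name); arrival order breaks ties.
    """
    queue = [(prio, i, name) for i, (prio, name) in enumerate(tasks)]
    heapq.heapify(queue)
    order = []
    while queue:
        _, _, name = heapq.heappop(queue)
        order.append(name)
    return order

tasks = [(LOW, "b2"), (HIGH, "b1"), (LOW, "b3"), (HIGH, "a1")]
print(run(tasks))  # ['b1', 'a1', 'b2', 'b3']
```

Both eldest brothers start before either younger brother, mirroring the "high before low" start rule in the abstract.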
  • Kazuya Haraguchi
    2013 Volume 8 Issue 1 Pages 41-47
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    In this paper, we study how many inequality signs should be included in the design of a Futoshiki puzzle. A problem instance of Futoshiki is given as an n × n grid of cells such that some cells are empty, the other cells are filled with integers in [n] = {1, 2,...,n}, and some pairs of adjacent cells have inequality signs. A solver is then asked to fill all the empty cells with integers in [n] so that the n² integers in the grid form an n × n Latin square and satisfy all the inequalities. We assert that the number of inequality signs in a Futoshiki instance should be intermediate. To support this assertion, we compare Futoshiki instances that have different numbers of inequality signs. The criterion is the degree to which the inequality condition is used to solve the instance: if this degree were small, the instance would be no better than an instance of a simple Latin square completion puzzle like Sudoku, with unnecessary inequality signs. Since we are considering the Futoshiki puzzle, it is natural to be interested in instances with large degrees. Our experiments show that Futoshiki instances with an intermediate number of inequality signs tend to achieve the largest evaluation values, rather than those with few or many inequality signs.
    Download PDF (210K)
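The two constraints defined in the abstract above (Latin square plus inequalities) are easy to state in code. A minimal verifier for a filled grid (a hypothetical helper for illustration, unrelated to the paper's evaluation measure):

```python
def is_futoshiki_solution(grid, inequalities):
    """Check a completely filled Futoshiki grid.

    grid:         n x n list of lists with integers in 1..n
    inequalities: list of ((r1, c1), (r2, c2)) pairs meaning
                  grid[r1][c1] < grid[r2][c2]
    """
    n = len(grid)
    target = set(range(1, n + 1))
    # Latin square: every row and every column is a permutation of 1..n.
    rows_ok = all(set(row) == target for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == target
                  for c in range(n))
    ineqs_ok = all(grid[r1][c1] < grid[r2][c2]
                   for (r1, c1), (r2, c2) in inequalities)
    return rows_ok and cols_ok and ineqs_ok

grid = [[1, 2, 3],
        [2, 3, 1],
        [3, 1, 2]]
print(is_futoshiki_solution(grid, [((0, 0), (0, 1))]))  # True: 1 < 2 holds
```

Dropping all inequalities reduces the check to plain Latin square completion, which is exactly the degenerate case the abstract argues against.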
  • Tetsuya Sakai
    2013 Volume 8 Issue 1 Pages 48-58
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Given an ambiguous or underspecified web search query, search result diversification aims at accommodating different user intents within a single “entry-point” result page. However, some intents are informational, for which many relevant pages may help, while others are navigational, for which only one web page is required. We propose new evaluation metrics for search result diversification that consider this distinction, as well as the concordance test for quantitatively comparing the intuitiveness of a given pair of metrics. Our main experimental findings are: (a) in terms of discriminative power, which reflects statistical reliability, the proposed metrics, DIN#-nDCG and P+Q#, are comparable to intent recall and D#-nDCG, and possibly superior to α-nDCG; (b) in terms of the concordance test, which quantifies the agreement of a diversity metric with a gold-standard metric representing a basic desirable property, DIN#-nDCG is superior to other diversity metrics in its ability to reward both diversity and relevance at the same time. Moreover, both D#-nDCG and DIN#-nDCG significantly outperform α-nDCG in their ability to reward diversity, to reward relevance, and to reward both at the same time. In addition, we demonstrate that the randomised Tukey's Honestly Significant Differences test, which takes the entire set of available runs into account, is substantially more conservative than the paired bootstrap test, which considers only one run pair at a time; we therefore recommend the former for significance testing when a set of runs is available for evaluation.
    Download PDF (1010K)
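All of the metrics named in the abstract above build on normalized discounted cumulative gain. Standard nDCG with the usual log2 discount can be computed as follows (a textbook formulation, not the paper's diversity-aware variants):

```python
import math

def dcg(gains):
    """Discounted cumulative gain with the standard log2 discount.

    gains: relevance gains in ranked order (rank 1 first).
    """
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(gains, ideal_gains):
    """Normalize by the DCG of the ideal (sorted-descending) ranking."""
    return dcg(gains) / dcg(sorted(ideal_gains, reverse=True))

# A ranking that places the gain-3 document second instead of first.
print(round(ndcg([1, 3, 0], [3, 1, 0]), 3))  # 0.797
```

A perfect ranking scores exactly 1.0; the demotion of the most relevant document costs about 20% here.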
Media (processing) and Interaction
  • Hiromitsu Nishizaki, Tomoyosi Akiba, Kiyoaki Aikawa, Tatsuya Kawahara, ...
    2013 Volume 8 Issue 1 Pages 59-80
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    This paper describes the design and evaluation framework of the spoken term detection (STD) sub-task of the NTCIR-9 IR for Spoken Documents (SpokenDoc) task. STD is one of the information access technologies for spoken documents. The goal of the STD sub-task is to rapidly detect the presence of a given query term, consisting of a single word or a short spoken word sequence, in the spoken documents included in the Corpus of Spontaneous Japanese. To carry out the sub-task, we considered its design and evaluation methods and arranged the task schedule. Seven teams participated in the STD sub-task and submitted 18 STD results. This paper explains the details of the sub-task, the data used, how transcriptions were produced by speech recognition for data distribution, the evaluation measures, the participants' techniques, and the participants' evaluation results.
    Download PDF (519K)
  • Hung-Hsuan Huang, Toyoaki Nishida
    2013 Volume 8 Issue 1 Pages 81-96
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    In developing an entertainment application such as a game, evaluating how players actually interact with the system is essential for further improvement. This paper proposes and evaluates a quiz game agent that is attentive to the dynamics of multiple concurrent participants (players). The attentiveness of this agent is achieved by an utterance policy that determines the nature of each utterance and whether, when, and to whom to utter. Two heuristics are introduced to drive the policy: the interaction atmosphere (AT) of the participants and the participant who tends to lead the conversation (CLP) at a specific time point. They are estimated from the activeness of the participants' face movements and from acoustic information during their discussion of the answer. To prevent the inherent drawback of a 2D agent, namely that multiple concurrent users find it difficult to distinguish the focus of its attention, a physical pointer is also introduced. The system is then evaluated from three aspects: the participants' own subjective measurements by questionnaire, the participants' implicit attitudes by an external measuring test, and a third-person view by video data analysis. The joint results of the experiments indicated that the methods for estimating AT and CLP worked: participants paid more attention to the agent and participated in the game more actively when the indication of the pointer was easier to comprehend.
    Download PDF (811K)
  • Seokhwan Kim, Shin Takahashi, Jiro Tanaka
    2013 Volume 8 Issue 1 Pages 97-108
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    We prototyped two selection techniques, Point-Tap and Tap-Tap, and conducted experiments to assess their characteristics, in particular how familiarity with a space affects their usability. Both techniques were developed to enhance the capability of the general “pointing gesture” and “map with live video” techniques. The goal of both is to acquire a target object in a smart space, and they share the concept of “see-and-select,” which lets users select an object while seeing it with their own eyes. Consequently, users must rely on the spatial locations of objects when using the techniques. According to spatial cognition science, humans recognize object locations in two ways, egocentrically and allocentrically, and some work has pointed out that users rely more on allocentric representations once they have become familiar with a space. Indeed, in our experiments, users who were familiar with the space could use the “map with live video” technique more effectively. The two main contributions of this paper are the new techniques themselves and the identification of a major factor in applying them, namely the users' expected familiarity with a space.
    Download PDF (1704K)
  • Rafael Henrique Castanheira de Souza, Masatoshi Okutomi, Akihiko Torii
    2012 Volume 8 Issue 1 Pages 109-117
    Published: 2012
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    This paper presents a real-time incremental mosaicing method that generates a large seamless 2D image by stitching video key-frames as soon as they are detected. There are four main contributions: (1) we propose a fast key-frame selection procedure based solely on the distribution of distances between matched feature descriptors; it automatically selects the key-frames used to expand the mosaic while achieving real-time performance; (2) we register key-frame images using a non-rigid deformation model based on a triangular mesh in order to stitch images smoothly when scene transformations cannot be expressed by a homography; (3) we add a new constraint to the non-rigid deformation model that penalizes over-deformation, creating mosaics with a natural appearance; (4) we propose a fast image stitching algorithm for real-time mosaic rendering, modeled as an instance of the minimum graph cut problem applied to mesh triangles instead of image pixels. The performance of the proposed method is validated by experiments under uncontrolled conditions and by comparison with a state-of-the-art method.
    Download PDF (3292K)
  • Yoshihiko Suhara, Jun Suzuki, Ryoji Kataoka
    2013 Volume 8 Issue 1 Pages 118-129
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Learning to rank is a supervised learning problem whose goal is to construct a ranking model. In recent years, online learning-to-rank algorithms have attracted attention as large-scale datasets have become available. We propose a selective pairwise approach to online learning to rank that offers both fast learning and high performance. The basic strategy of our method is to select the most effective document pair for minimizing the objective function from a query in the training data, and then to update the current weight vector using only the selected pair instead of all document pairs in the query. The main characteristics of our method are adaptive margin rescaling based on an approximated NDCG to reflect the IR evaluation measure, a max-loss update procedure, and ramp loss to reduce over-fitting. We implement our proposal, PARank-NDCG, in the framework of the Passive-Aggressive algorithm. Experiments on the MSLR-WEB datasets, which contain 10,000 and 30,000 queries, show that PARank-NDCG outperforms conventional algorithms in NDCG, including online learning-to-rank algorithms such as Stochastic Pairwise Descent and Committee Perceptron and batch algorithms such as RankingSVM. In addition, our method takes only 7 seconds to learn a model on the MSLR-WEB10K dataset; PARank-NDCG trains approximately 63 times faster than RankingSVM on average.
    Download PDF (462K)
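The pairwise Passive-Aggressive idea underlying the abstract above is compact: if the relevant document of a pair is not scored above the other by a margin, shift the weight vector by the smallest amount that fixes it. A bare-bones sketch of that single update (illustrative of the PA pairwise framework only, not PARank-NDCG's margin rescaling or pair selection):

```python
def pa_pairwise_update(w, x_pos, x_neg, margin=1.0):
    """One Passive-Aggressive update on a single document pair.

    w, x_pos, x_neg: equal-length lists of floats; x_pos is the more
    relevant document's feature vector. If x_pos is not scored above
    x_neg by at least `margin`, move w minimally so that it is.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diff = [p - n for p, n in zip(x_pos, x_neg)]
    loss = max(0.0, margin - dot(w, diff))
    if loss > 0.0:
        tau = loss / dot(diff, diff)  # minimal-change step size
        w = [wi + tau * di for wi, di in zip(w, diff)]
    return w

w = pa_pairwise_update([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
print(w)  # [0.5, -0.5]: x_pos now outscores x_neg by exactly the margin
```

The "selective" aspect in the paper then amounts to choosing which single pair per query receives this update.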
Computer Networks and Broadcasting
  • Makoto Sugihara, Akihito Iwanaga
    2013 Volume 8 Issue 1 Pages 130-136
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Cutting fabrication costs is essential, especially in mass production. This paper presents a design methodology that reduces the operating frequency of a communication bus under hard real-time constraints, thereby cutting the cost of the communication mechanism of an in-vehicle embedded system. Reducing the operating frequency allows a slower and cheaper wire harness to be chosen for the in-vehicle network. We formalize a bus bandwidth minimization problem that optimizes the payload size of a frame under hard real-time constraints, assuming that every signal is uniquely mapped to its own time slot of the time division multiple access (TDMA) scheme. Our experimental results show that our methodology obtained an optimal frame payload size and an optimal bus operating frequency for several hypothetical automotive benchmarks. Our method achieved one-fifth of the typical bandwidth of a FlexRay bus (10 Mbps) for the SAE benchmark signal set.
    Download PDF (312K)
  • Quang Tran Minh, Muhammad Ariff Baharudin, Eiji Kamioka
    2013 Volume 8 Issue 1 Pages 137-150
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    This paper proposes a mobile-phone-based context-aware traffic state estimation (MC-TES) framework in which the essential issues of low and uncertain penetration rates are thoroughly resolved. A novel intelligent context-aware velocity-density inference circuit (ICIC) and a practical artificial neural network (ANN) based prediction approach are proposed. The ICIC model not only improves traffic state estimation effectiveness but also minimizes the critical penetration rate required for mobile-phone-based traffic state estimation (M-TES). The ANN-based prediction approach complements the ICIC when the penetration rate is unacceptably low or unknown. In addition, the difficulty of selecting the “right” estimation model, ICIC or ANN, under an uncertain penetration rate is resolved. The experimental evaluations confirm the effectiveness, feasibility, and robustness of the proposed approaches. This research thus contributes to accelerating the realization of mobile-phone-based intelligent transportation systems (M-ITSs), and of M-TES systems in particular.
    Download PDF (956K)
  • Huiting Cheng, Yasushi Yamao
    2013 Volume 8 Issue 1 Pages 151-159
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    The reliability of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) vehicle-to-vehicle (V2V) communication in real road environments suffers from fading, shadowing, and the hidden terminal problem, especially in non-line-of-sight (NLOS) areas such as intersections. To improve communication reliability, a CSMA/CA-based vehicle-roadside-vehicle broadcast relay network was previously proposed and its effectiveness shown through simulations. However, the potential of such a network has not been well analyzed or optimized. In this paper, a theoretical model is proposed to analyze the performance of the broadcast relay network in detail. To fit real vehicular environments, the model assumes a typical crossroad and takes into account fading, shadowing, the hidden terminal problem, and the capture effect. The influence of system parameters, including node positions, carrier sense threshold, and RF frequency band, on the reliability of the network is studied using the model. The accuracy of the proposed analytical model is confirmed by simulations. The model and the obtained results are useful for selecting appropriate system parameters in the design of vehicular broadcast networks.
    Download PDF (1135K)
  • Kazuhisa Matsuzono, Hitoshi Asaeda, Osamu Nakamura, Jun Murai
    2013 Volume 8 Issue 1 Pages 160-172
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Motivated by the deployment of wide-area high-speed networks, we propose GENEVA, a streaming control algorithm using generalized multiplicative-increase/additive-decrease (GMIAD). Current typical congestion controllers, such as TCP-friendly rate control, prevent network congestion by reacting sensitively to packet loss; this significantly degrades streaming quality through low achieved throughput (i.e., lower than the maximum throughput a streaming flow requires for maximum audio/video quality) and data packet losses. GENEVA avoids this problem by allowing a streaming flow to maintain moderate network congestion while trying to recover data packets lost while competing flows probe for available bandwidth. Using the GMIAD mechanism, the FEC window size (the degree of FEC redundancy per unit time) is adjusted to suppress bursty packet loss while effectively utilizing network resources that competing flows cannot consume because they reduce their transmission rates in response to packet loss. We describe the GENEVA algorithm and evaluate its effectiveness using the NS-2 simulator. The results show that GENEVA enables high-performance streaming flows to retain higher streaming quality under stable conditions while minimizing the adverse impact on competing TCP performance.
    Download PDF (685K)
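The GMIAD control named in the abstract above inverts TCP's familiar AIMD: the controlled quantity grows multiplicatively while conditions are good and backs off additively on loss. A one-step sketch of that pattern (a generic illustration with made-up constants, not GENEVA's actual controller):

```python
def miad_step(window, loss, increase=1.05, decrease=0.5, floor=0.1):
    """One multiplicative-increase/additive-decrease step.

    window: current FEC window (redundancy per unit time), loss: whether
    packet loss was observed this interval. Constants are illustrative.
    """
    if loss:
        return max(floor, window - decrease)  # additive decrease, floored
    return window * increase                  # multiplicative increase

window = 1.0
for loss in [False, False, True, False]:
    window = miad_step(window, loss)
print(round(window, 4))  # 0.6326
```

Compared with additive increase, the multiplicative ramp reclaims bandwidth left idle by loss-shy competing flows more quickly, which is the behavior the abstract attributes to GMIAD.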
  • Hiroki Oda, Hiroyuki Hisamatsu, Hiroshi Noborio
    2013 Volume 8 Issue 1 Pages 173-181
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    In high-speed, long-distance networks, TCP NewReno, the most popular version of the Transmission Control Protocol (TCP), cannot achieve sufficient throughput owing to the inherent nature of TCP's congestion control mechanism. Compound TCP was proposed to overcome this limitation and achieves considerably higher throughput than TCP NewReno in such networks. Its congestion control mechanism combines loss-based and delay-based congestion control. In wireless LANs, however, the media access control causes unfairness in throughput among TCP connections. Because Compound TCP includes the same type of congestion control as TCP NewReno, the same problem is expected to occur among Compound TCP connections. In this study, we evaluate the performance of Compound TCP in wireless LANs and demonstrate that throughput among Compound TCP connections becomes unfair. We then propose Compound TCP+, which implements finer congestion control by detecting states of slight congestion. Using simulation, we show that in wireless LANs, Compound TCP+ connections achieve fairness and share bandwidth equally. We also demonstrate through simulation that Compound TCP+ achieves high throughput in a high-speed wired network.
    Download PDF (402K)
  • Odira Elisha Abade, Katsuhiko Kaji, Nobuo Kawaguchi
    2013 Volume 8 Issue 1 Pages 182-195
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Explicit multiunicast (XCAST) has been proposed as a multicasting scheme with complementary scaling properties that can solve the scalability problems of conventional IP multicast. XCAST is suitable for videoconferencing, online games, and IPTV. This paper deals with QoS provisioning in XCAST networks using Differentiated Services (DiffServ). We show that integrating DiffServ into XCAST is a non-trivial problem owing to inherent architectural differences between the two. We then propose a scheme called QS-XCAST that uses dynamic DSCPs to adapt to the heterogeneity of receivers in an XCAST network. We also provide an algorithm for harmonizing the receiver-driven and sender-driven QoS approaches of XCAST and DiffServ, thereby determining the correct DSCP-PHB for all links in an XCAST network. Through OMNeT++ simulations, we evaluate QS-XCAST using four metrics: throughput, average per-hop delay, link utilization, and forwarding fairness to other traffic in the network. Our solution eliminates the DSCP confusion and collusion attack problems to which naive XCAST QoS provisioning is vulnerable. It also offers more efficient bandwidth utilization, better forwarding fairness, and less traffic load than existing XCAST.
    Download PDF (979K)
Information Systems and Applications
  • Shin'ichi Shiraishi
    2013 Volume 8 Issue 1 Pages 196-207
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    This paper presents two different model-based approaches that use multiple architecture description languages (ADLs) for automotive system development. One approach is based on AADL (Architecture Analysis & Design Language), and the other is a collaborative approach using multiple languages: SysML (Systems Modeling Language) and MARTE (Modeling and Analysis of Real-Time and Embedded systems). In this paper, the detailed modeling steps for both approaches are explained through a real-world automotive development example: a cruise control system. Moreover, discussion of the modeling steps offers a qualitative comparison of the two approaches, and then clarifies the characteristics of the different types of ADLs.
    Download PDF (1293K)
  • Midori Sugaya, Hiroki Takamura, Yoichi Ishiwata, Satoshi Kagami, Kimio ...
    2013 Volume 8 Issue 1 Pages 208-221
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    Humanoid robot systems are composed of an assortment of hardware and software components, with complex embedded systems and real-time properties. These features make it difficult to isolate or identify a fault quickly, even though such systems are expected to recover quickly to avoid behaviors that may harm users. This paper presents a new method for detecting errors in real-time applications online through kernel log monitoring and analysis. Our contributions are a kernel log analysis method based on a state transition model of scheduled tasks, applied to kernel logs to detect anomalous behavior of real-time tasks. To reduce the overhead of analyzing huge volumes of data, we propose a system that places the kernel log analysis engine on a separate core from the one running the kernel log monitoring process. On this system, we provide a framework for writing analyzers that detect errors incrementally. These components work together to solve the problems highlighted by root cause analysis in robotic systems. We applied the proposed system to actual robotics systems and successfully detected several errors and faults, including a serious priority inversion that had gone undetected over 10 years of operation in the actual operating system.
    Download PDF (752K)
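The core analysis idea in the abstract above, replaying a kernel log against a state transition model of tasks and flagging transitions the model forbids, can be sketched in a few lines (a simplified illustration with a made-up transition set, not the paper's model):

```python
# Hypothetical allowed scheduler transitions for this illustration.
ALLOWED = {
    ("ready", "running"), ("running", "ready"),
    ("running", "waiting"), ("waiting", "ready"),
}

def find_anomalies(log):
    """Flag log events whose state transition the model forbids.

    log: list of (task, state) events in kernel-log order.
    Returns a list of (event_index, task, prev_state, new_state).
    """
    last, anomalies = {}, []
    for i, (task, state) in enumerate(log):
        prev = last.get(task)
        if prev is not None and (prev, state) not in ALLOWED:
            anomalies.append((i, task, prev, state))
        last[task] = state
    return anomalies

log = [("t1", "ready"), ("t1", "running"),
       ("t1", "waiting"), ("t1", "running")]  # waiting -> running is illegal
print(find_anomalies(log))  # [(3, 't1', 'waiting', 'running')]
```

Running such a checker incrementally on a separate core is what keeps the monitoring overhead off the real-time path, per the system design described in the abstract.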
  • Patcharee Basu, Achmad Basuki, Achmad Husni Thamrin, Keiko Okawa, Jun ...
    2013 Volume 8 Issue 1 Pages 222-229
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    This paper studies a methodology for developing IT education from the traditional face-to-face model into fully capable distance learning by combining a remote computer laboratory with a distance e-learning environment. Three approaches are presented, with different implementations of laboratory technologies and learning models. The design challenge is to address the limited resources at region-wide learning sites, cost-effectiveness, and scalability. Computer virtualization and the StarBED computing testbed achieve a larger laboratory at lower equipment cost and administration workload. The live learning environment ensures the quality of real-time communications during lecture and lab sessions by employing IPv6 multicast on a satellite UDLR network. The self-paced learning environment is proposed to enable a flexible schedule, resource reusability, and scalability. The lab supervision system is the key component for enhancing teaching effectiveness in self-paced hands-on practice by systematically automating the lab supervision skills of lecturers. These approaches have been evaluated as feasible, cost-effective, and scalable through real implementations in similar Asia-wide workshops. Trade-offs and analyses from technology and pedagogy perspectives are presented to compare their characteristics.
    Download PDF (1838K)
  • Yuichiro Otsuka, Junshan Hu, Tomoo Inoue
    2013 Volume 8 Issue 1 Pages 230-238
    Published: 2013
    Released on J-STAGE: March 15, 2013
    JOURNAL FREE ACCESS
    A tabletop dish recommendation system for multiple users dining together, called Group FDT (Future Dining Table), is presented. The system continually assesses the dining status of users and recommends dishes in a timely manner. The recommendation timing and the displayed positions of recommended dishes are based on research on real dining, the extant literature, and experimental results. The system is thus expected to help address staff shortages in the food service industry, such as in Japanese pubs, which often receive additional dish orders.
    Download PDF (442K)