-
Takayuki Ito
Article type: Special Issue on Theory and Application of Agent Research
2013 Volume 21 Issue 1 Pages 1
Published: 2013
Released on J-STAGE: January 15, 2013
-
Tetsuo Imai, Atsushi Tanaka
Article type: Special Issue on Theory and Application of Agent Research
Subject area: Knowledge Community
2013 Volume 21 Issue 1 Pages 2-8
Published: 2013
Released on J-STAGE: January 15, 2013
Recent studies have revealed that some social and technological network formations can be represented as network formation games played by multiple selfish agents. In general, the topologies formed by selfish agents are worse than or equal to those formed by a centralized designer in terms of total social welfare. Measures such as the price of anarchy are known for evaluating the inefficiency of solutions obtained by selfish agents relative to the socially optimal solution. In this paper, we introduce the expected price of anarchy, which is proposed as a valid measure for evaluating the inefficiency of a dynamic network formation game whose solution space is divided into basins of multimodal sizes. Moreover, through computer simulations we show that it can represent the average-case inefficiency of dynamic network formation games, which is missed by two previous measures.
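As a rough illustration of the distinction the paper draws (not the authors' implementation; all costs and basin sizes below are hypothetical), the classic price of anarchy compares the worst equilibrium to the social optimum, while an expected variant weights each equilibrium by the size of its basin of attraction:

```python
# Hypothetical sketch: price of anarchy vs. a basin-weighted expected variant.
# Each equilibrium has a social cost and a basin size (fraction of initial
# conditions from which the dynamics converge to it); values are made up.
equilibria = [
    {"cost": 10.0, "basin": 0.7},   # a mildly inefficient but likely outcome
    {"cost": 25.0, "basin": 0.2},
    {"cost": 40.0, "basin": 0.1},   # the worst case, rarely reached
]
optimum = 10.0  # cost of the socially optimal topology

# Classic price of anarchy: worst equilibrium cost over the optimum.
poa = max(e["cost"] for e in equilibria) / optimum

# Expected price of anarchy: basin-weighted average equilibrium cost over
# the optimum, capturing average-case rather than worst-case inefficiency.
epoa = sum(e["cost"] * e["basin"] for e in equilibria) / optimum

print(f"PoA = {poa:.2f}, expected PoA = {epoa:.2f}")
```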
-
Naoki Fukuta
Article type: Special Issue on Theory and Application of Agent Research
Subject area: Knowledge Community
2013 Volume 21 Issue 1 Pages 9-15
Published: 2013
Released on J-STAGE: January 15, 2013
A multi-unit combinatorial auction is a combinatorial auction in which some items can be seen as indistinguishable. Although the mechanism can be applied to dynamic electricity auctions and various other purposes, it is difficult to apply to large-scale auction problems due to its computational intractability. In this paper, I present an idea and an analysis of an approximate allocation and pricing algorithm that is capable of handling multi-unit auctions. The analysis shows that the algorithm effectively produces the approximate allocations that are necessary for pricing. Furthermore, the algorithm can be seen as an approximation of the VCG (Vickrey-Clarke-Groves) mechanism that satisfies the budget-balance condition and bidders' individual rationality without unrealistic assumptions about bidders' behaviors. I show that the proposed allocation algorithm successfully produced good allocations for problems that could not easily be solved by ordinary LP solvers under hard time constraints.
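The paper's own allocation algorithm is not reproduced here; as a minimal sketch of the flavor of approximate winner determination (the bid format and names are our assumptions), a common baseline greedily awards bundles in order of value per requested unit while the remaining multi-unit supply permits:

```python
# Hypothetical greedy winner determination for a multi-unit combinatorial
# auction: bids are (value, {item: units}) pairs; supply maps each item to
# the number of indistinguishable units available. A simple baseline only,
# not the paper's algorithm.
def greedy_allocate(bids, supply):
    remaining = dict(supply)
    winners = []
    # Consider bids by value per requested unit, highest first.
    for value, bundle in sorted(bids, key=lambda b: b[0] / sum(b[1].values()),
                                reverse=True):
        if all(remaining.get(item, 0) >= units for item, units in bundle.items()):
            for item, units in bundle.items():
                remaining[item] -= units
            winners.append((value, bundle))
    return winners

bids = [(12.0, {"a": 2}), (8.0, {"a": 1, "b": 1}), (5.0, {"b": 2})]
print(greedy_allocate(bids, {"a": 2, "b": 2}))
```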
-
Tetsuro Tanaka
Article type: Special Issue on Game Programming
2013 Volume 21 Issue 1 Pages 16
Published: 2013
Released on J-STAGE: January 15, 2013
-
Akira Ura, Daisaku Yokoyama, Takashi Chikayama
Article type: Special Issue on Game Programming
Subject area: Parallel and Distributed Algorithms
2013 Volume 21 Issue 1 Pages 17-25
Published: 2013
Released on J-STAGE: January 15, 2013
It is difficult to fully utilize the parallelism of large-scale computing environments in alpha-beta search. Naive parallel execution of subtrees results in much less pruning than would have been possible in sequential execution, and may even degrade total performance. To overcome this difficulty, we propose a two-level task scheduling policy in which all tasks are classified into two priority levels based on the necessity of their results. Low-priority tasks are executed only after all currently executable high-priority tasks have started. When new high-priority tasks are generated, the execution of low-priority tasks is suspended so that the high-priority tasks can run. We suggest classifying tasks into the two levels based on the Young Brothers Wait Concept, which is widely used in parallel alpha-beta search. The experimental results revealed that the scheduling policy suppresses the degradation in performance caused by executing tasks whose results are eventually found to be unnecessary. We found the new policy improved performance when task granularity was sufficiently large.
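As a schematic sketch of such a two-level policy (assumed details, not the authors' system), the Young Brothers Wait Concept searches the first child of a node before spawning its siblings, so the eldest brother's subtree can be classified as a high-priority task and its younger brothers as low-priority tasks that start only when no high-priority task is waiting:

```python
import heapq

HIGH, LOW = 0, 1  # smaller number = higher priority

# Hypothetical two-level scheduler sketch: tasks are (priority, seq, name)
# tuples in a heap. Following the Young Brothers Wait Concept, the first
# (eldest) child of a node is classified HIGH; its younger brothers are LOW.
queue = []
seq = 0

def submit(parent, children):
    global seq
    for i, child in enumerate(children):
        level = HIGH if i == 0 else LOW
        heapq.heappush(queue, (level, seq, f"{parent}/{child}"))
        seq += 1

submit("root", ["c0", "c1", "c2"])
submit("root/c0", ["d0", "d1"])

# Dispatch loop: heap order guarantees every runnable HIGH task starts
# before any LOW task is picked up.
while queue:
    level, _, task = heapq.heappop(queue)
    print("run", "HIGH" if level == HIGH else "LOW ", task)
```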
-
Kazuya Haraguchi
Article type: Special Issue on Game Programming
Subject area: Edutainment
2013 Volume 21 Issue 1 Pages 26-32
Published: 2013
Released on J-STAGE: January 15, 2013
In this paper, we study how many inequality signs we should include in the design of the Futoshiki puzzle. A problem instance of Futoshiki is given as an n × n grid of cells such that some cells are empty, other cells are filled with integers in [n] = {1, 2, ..., n}, and some pairs of adjacent cells have inequality signs. A solver is then asked to fill all the empty cells with integers in [n] so that the n² integers in the grid form an n × n Latin square and satisfy all the inequalities. In the design of a Futoshiki instance, we assert that the number of inequality signs should be intermediate. To support this assertion, we compare Futoshiki instances that have different numbers of inequality signs. The criterion is the degree to which the inequality condition is used to solve the instance. If this degree were small, the instance would be no better than an instance of a simple Latin square completion puzzle such as Sudoku, burdened with unnecessary inequality signs. Since we are considering the Futoshiki puzzle, it is natural to take an interest in instances with large degrees. Our experiments show that Futoshiki instances with an intermediate number of inequality signs tend to achieve the largest evaluation values, rather than instances with few or many inequality signs.
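To make the constraints concrete, the following is our own small checker (not from the paper) that verifies a filled grid forms a Latin square and satisfies the given inequality signs:

```python
# Illustrative Futoshiki solution checker: grid is an n x n list of lists
# over {1..n}; inequalities is a list of ((r1, c1), (r2, c2)) pairs meaning
# grid[r1][c1] < grid[r2][c2] for adjacent cells.
def is_valid_futoshiki(grid, inequalities):
    n = len(grid)
    target = set(range(1, n + 1))
    # Latin square: every row and every column is a permutation of 1..n.
    rows_ok = all(set(row) == target for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == target for c in range(n))
    # Inequality signs: each listed pair must hold in the filled grid.
    ineqs_ok = all(grid[r1][c1] < grid[r2][c2]
                   for (r1, c1), (r2, c2) in inequalities)
    return rows_ok and cols_ok and ineqs_ok

grid = [[1, 2, 3],
        [2, 3, 1],
        [3, 1, 2]]
print(is_valid_futoshiki(grid, [((0, 0), (0, 1))]))  # 1 < 2 -> True
```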
-
Yukikazu Nakamoto
Article type: Special Issue on Embedded Systems Engineering
2013 Volume 21 Issue 1 Pages 33
Published: 2013
Released on J-STAGE: January 15, 2013
-
Shin'ichi Shiraishi
Article type: Special Issue on Embedded Systems Engineering
Subject area: Design Methodologies
2013 Volume 21 Issue 1 Pages 34-45
Published: 2013
Released on J-STAGE: January 15, 2013
This paper presents two different model-based approaches that use multiple architecture description languages (ADLs) for automotive system development. One approach is based on AADL (Architecture Analysis & Design Language), and the other is a collaborative approach using multiple languages: SysML (Systems Modeling Language) and MARTE (Modeling and Analysis of Real-Time and Embedded systems). In this paper, the detailed modeling steps for both approaches are explained through a real-world automotive development example: a cruise control system. Moreover, discussion of the modeling steps offers a qualitative comparison of the two approaches, and then clarifies the characteristics of the different types of ADLs.
-
Makoto Sugihara, Akihito Iwanaga
Article type: Special Issue on Embedded Systems Engineering
Subject area: Design Methodologies
2013 Volume 21 Issue 1 Pages 46-52
Published: 2013
Released on J-STAGE: January 15, 2013
It is essential to cut down fabrication costs, especially in mass production. This paper presents a design methodology that reduces the operating frequency of a communication bus under hard real-time constraints, thereby cutting the cost of the communication mechanism of an in-vehicle embedded system. Reducing the operating frequency allows choosing a slower and cheaper wire harness for the in-vehicle network. We formalize a bus bandwidth minimization problem that optimizes the payload size of a frame under hard real-time constraints, assuming that every signal is uniquely mapped to its own time slot of a time division multiple access (TDMA) scheme. Our experimental results show that the methodology obtains an optimal frame payload size and an optimal bus operating frequency for several hypothetical automotive benchmarks. Our method achieved one-fifth of the typical bandwidth of a FlexRay bus (10 Mbps) for the SAE benchmark signal set.
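As a toy sketch of the optimization's flavor (the feasibility model and every number below are assumptions, not the paper's formulation), one can scan candidate payload sizes and report the lowest bus rate whose TDMA frame still fits within the tightest signal deadline:

```python
# Toy sketch (assumed model): each signal owns its own TDMA slot(s) per
# frame, so the frame must repeat at least as often as the tightest
# deadline. We scan payload sizes and keep the lowest feasible bus rate.
signals = [64, 32, 96, 16]          # signal sizes in bits (hypothetical)
deadlines_ms = [10, 20, 10, 50]     # per-signal deadlines (hypothetical)
OVERHEAD_BITS_PER_SLOT = 40         # header/CRC per slot (hypothetical)

frame_period_ms = min(deadlines_ms)  # frame must fit the tightest deadline

best = None
for payload_bits in (32, 64, 96, 128):
    # Number of slots needed so every signal fits (ceiling division).
    slots = sum(-(-size // payload_bits) for size in signals)
    frame_bits = slots * (payload_bits + OVERHEAD_BITS_PER_SLOT)
    rate_bps = frame_bits / (frame_period_ms / 1000.0)  # minimum bus bit rate
    if best is None or rate_bps < best[0]:
        best = (rate_bps, payload_bits)

print(f"payload {best[1]} bits -> minimum bus rate {best[0] / 1e6:.3f} Mbps")
```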
-
Midori Sugaya, Hiroki Takamura, Yoichi Ishiwata, Satoshi Kagami, Kimio ...
Article type: Special Issue on Embedded Systems Engineering
Subject area: Verification, Testing, and Debugging
2013 Volume 21 Issue 1 Pages 53-66
Published: 2013
Released on J-STAGE: January 15, 2013
Humanoid robot systems are composed of an assortment of hardware and software components; they are complex embedded systems with real-time properties. These features make it difficult to isolate or identify a fault quickly, even though such systems are expected to recover quickly to avoid behaviors that may harm users. This paper presents a new method for detecting errors in real-time applications online through kernel log monitoring and analysis. Our contributions are a method for kernel log analysis based on a state transition model of task scheduling, and its application to kernel logs to detect anomalous behavior of real-time tasks. To reduce the overhead of analyzing huge volumes of data, we propose a system that places the kernel log analysis engine on a separate core from the one running the kernel log monitoring process. On top of this system, we provide a framework for writing analyzers that detect errors incrementally. These components work together to solve the problems highlighted by root cause analysis in robotic systems. We applied the proposed system to actual robotic systems and successfully detected several errors and faults, including a serious priority inversion that had gone undetected over 10 years of operation of the actual operating system.
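As a minimal illustration of the underlying idea (the states and log format are assumptions, not the actual kernel trace), an analyzer can hold a table of legal scheduler state transitions and flag any logged transition that falls outside it:

```python
# Minimal sketch of state-transition-based log analysis (states and log
# format are hypothetical). Any transition absent from the model is
# reported as anomalous behavior of the corresponding task.
LEGAL = {
    ("ready", "running"), ("running", "ready"),
    ("running", "blocked"), ("blocked", "ready"),
}

def check_log(events):
    """events: iterable of (task, new_state) records in log order."""
    last = {}
    anomalies = []
    for task, state in events:
        prev = last.get(task, "ready")  # assume tasks start ready
        if (prev, state) not in LEGAL:
            anomalies.append((task, prev, state))
        last[task] = state
    return anomalies

# blocked -> running skips the ready state, so it is flagged here.
log = [("t1", "running"), ("t1", "blocked"), ("t1", "running")]
print(check_log(log))  # [('t1', 'blocked', 'running')]
```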
-
Patcharee Basu, Achmad Basuki, Achmad Husni Thamrin, Keiko Okawa, Jun ...
Article type: Special Issue on Education and Computers
Subject area: Learning Support
2013 Volume 21 Issue 1 Pages 67-74
Published: 2013
Released on J-STAGE: January 15, 2013
This paper studies a methodology for developing IT education from the traditional face-to-face model into fully capable distance learning by combining a remote computer laboratory with a distance e-learning environment. Three approaches are presented, with different implementations of laboratory technologies and learning models. The design challenge is to address the limited resources at region-wide learning sites, cost-effectiveness, and scalability. Computer virtualization and the StarBED computing testbed achieve a larger laboratory at lower equipment cost and administration workload. The live learning environment ensures the quality of real-time communication during lecture and lab sessions by employing IPv6 multicast on a satellite UDLR network. The self-paced learning environment is proposed to enable a flexible schedule, resource reusability, and scalability. The lab supervision system is the key component for enhancing teaching effectiveness in self-paced hands-on practice by systematically automating the lab supervision skills of lecturers. These approaches have been evaluated as feasible, cost-effective, and scalable through real implementations in similar Asia-wide workshops. Trade-offs and analyses from technology and pedagogy perspectives are presented to compare their characteristics.
-
Keiichi Yasumoto
Article type: Special Issue on Mobile Communication and Intelligent Transport Systems for Creating a New Trend in Information and Communication Technology (ICT) Society
2013 Volume 21 Issue 1 Pages 75
Published: 2013
Released on J-STAGE: January 15, 2013
-
Quang Tran Minh, Muhammad Ariff Baharudin, Eiji Kamioka
Article type: Special Issue on Mobile Communication and Intelligent Transport Systems for Creating a New Trend in Information and Communication Technology (ICT) Society
Subject area: ITS
2013 Volume 21 Issue 1 Pages 76-89
Published: 2013
Released on J-STAGE: January 15, 2013
This paper proposes a mobile-phone-based, context-aware traffic state estimation (MC-TES) framework whereby the essential issues of low and uncertain penetration rates are thoroughly resolved. A novel intelligent context-aware velocity-density inference circuit (ICIC) and a practical artificial neural network (ANN) based prediction approach are proposed. The ICIC model not only improves the effectiveness of traffic state estimation but also minimizes the critical penetration rate required in mobile-phone-based traffic state estimation (M-TES). The ANN-based prediction approach complements the ICIC in cases of an unacceptably low or unknown penetration rate. In addition, the difficulty of selecting the “right” traffic state estimation model, between the ICIC and the ANN, under an uncertain penetration rate is resolved. The experimental evaluations confirm the effectiveness, feasibility, and robustness of the proposed approaches. As a result, this research contributes to accelerating the realization of mobile-phone-based intelligent transportation systems (M-ITSs) and of M-TES systems in particular.
-
Huiting Cheng, Yasushi Yamao
Article type: Special Issue on Mobile Communication and Intelligent Transport Systems for Creating a New Trend in Information and Communication Technology (ICT) Society
Subject area: ITS
2013 Volume 21 Issue 1 Pages 90-98
Published: 2013
Released on J-STAGE: January 15, 2013
The reliability of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) vehicle-to-vehicle (V2V) communication in real road environments suffers from fading, shadowing, and the hidden terminal problem, especially in non-line-of-sight (NLOS) areas such as intersections. To improve communication reliability, a CSMA/CA-based vehicle-roadside-vehicle broadcast relay network was previously proposed, and its effectiveness has been shown through simulations. However, the potential of such a network has not been well analyzed or optimized. In this paper, a theoretical model is proposed to analyze the performance of the broadcast relay network in detail. To fit real vehicular environments, the model assumes a typical crossroad and takes into account fading, shadowing, the hidden terminal problem, and the capture effect. Based on the model, we study how system parameters, including node positions, the carrier sense threshold, and the RF frequency band, influence the reliability of the network. The accuracy of the proposed analytical model is confirmed by simulations. The model and the obtained results are useful for selecting appropriate system parameters in the design of vehicular broadcast networks.
-
Atsuo Hazeyama, Koji Tsukada
Article type: Special Issue on Collaboration Technologies and Network Services that Enrich and Secure Our Society
2013 Volume 21 Issue 1 Pages 99
Published: 2013
Released on J-STAGE: January 15, 2013
-
Yuichiro Otsuka, Junshan Hu, Tomoo Inoue
Article type: Special Issue on Collaboration Technologies and Network Services that Enrich and Secure Our Society
Subject area: Multi-modal Interfaces
2013 Volume 21 Issue 1 Pages 100-108
Published: 2013
Released on J-STAGE: January 15, 2013
A tabletop dish recommendation system for multiple users dining together, called Group FDT (Future Dining Table), is presented. The system continually assesses the users' dining status and recommends dishes in a timely manner. The recommendation timing and the display position of recommended dishes are based on research on real dining, the extant literature, and experimental results. The system is thus expected to help address staff shortages in the food service industry, for example in Japanese pubs, which often receive additional dish orders.
-
Kazuhisa Matsuzono, Hitoshi Asaeda, Osamu Nakamura, Jun Murai
Article type: Regular Papers
Subject area: Network Protocols
2013 Volume 21 Issue 1 Pages 109-121
Published: 2013
Released on J-STAGE: January 15, 2013
Motivated by the deployment of wide-area high-speed networks, we propose GENEVA, a streaming control algorithm using generalized multiplicative-increase/additive-decrease (GMIAD). Typical current congestion controllers, such as TCP-friendly rate control, prevent network congestion by reacting sensitively to packet loss; this causes significant degradation of streaming quality through low achieved throughput (i.e., throughput lower than the maximum that a streaming flow requires at maximum audio/video quality) and data packet losses. GENEVA avoids this problem by allowing a streaming flow to maintain moderate network congestion while recovering the data packets lost while competing flows probe for available bandwidth. Using the GMIAD mechanism, the FEC window size (the degree of FEC redundancy per unit time) is adjusted to suppress bursty packet loss while effectively utilizing network resources that competing flows cannot consume because they reduce their transmission rates in response to packet loss. We describe the GENEVA algorithm and evaluate its effectiveness using the ns-2 simulator. The results show that GENEVA enables high-performance streaming flows to retain higher streaming quality under stable conditions while minimizing the adverse impact on competing TCP performance.
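A rough sketch of a multiplicative-increase/additive-decrease update of this kind (a generic MIAD shape with made-up constants; GENEVA's actual rules are given in the paper) might grow the FEC window multiplicatively while observed loss exceeds a target and decay it additively otherwise:

```python
# Hypothetical MIAD controller for an FEC window (constants and trigger are
# made up). More observed loss -> multiplicatively more redundancy; calm
# periods -> additive decay back toward the minimum window.
MIN_WIN, MAX_WIN = 1, 64
GROW, SHRINK = 1.5, 1  # multiplicative factor, additive step

def update_fec_window(window, loss_rate, target_loss=0.01):
    if loss_rate > target_loss:
        window = min(MAX_WIN, window * GROW)    # multiplicative increase
    else:
        window = max(MIN_WIN, window - SHRINK)  # additive decrease
    return window

w = 4.0
for loss in (0.05, 0.05, 0.005, 0.005):
    w = update_fec_window(w, loss)
    print(f"loss={loss:.3f} -> FEC window {w:.1f}")
```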
-
Hiroki Oda, Hiroyuki Hisamatsu, Hiroshi Noborio
Article type: Regular Papers
Subject area: Network Protocols
2013 Volume 21 Issue 1 Pages 122-130
Published: 2013
Released on J-STAGE: January 15, 2013
In high-speed, long-distance networks, TCP NewReno, the most popular version of the Transmission Control Protocol (TCP), cannot achieve sufficient throughput owing to the inherent nature of TCP's congestion control mechanism. Compound TCP was proposed to overcome this limitation, and it achieves considerably higher throughput than TCP NewReno in high-speed, long-distance networks. Its congestion control mechanism combines loss-based and delay-based congestion control. In wireless LANs, however, the media access control causes unfairness in throughput among TCP connections. Because Compound TCP retains the same type of loss-based control as TCP NewReno, the same problem is expected to occur among Compound TCP connections. In this study, we evaluate the performance of Compound TCP in wireless LANs and demonstrate that throughput among Compound TCP connections does become unfair. We then propose Compound TCP+, which implements finer-grained congestion control by detecting a state of slight congestion. Using simulation, we show that Compound TCP+ connections in a wireless LAN achieve fairness and share the bandwidth equally. We also demonstrate through simulation that Compound TCP+ achieves high throughput in a high-speed wired network.
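To illustrate what detecting slight congestion from delay can look like (a generic rule in the spirit of Compound TCP's delay component, with assumed constants; not the exact Compound TCP+ algorithm), the sender can estimate the number of queued packets from the gap between expected and actual throughput:

```python
# Generic delay-based congestion estimate (constants are assumptions).
# `diff` estimates packets queued in the network; a small threshold lets
# the sender react to slight congestion before packet loss occurs.
GAMMA = 30  # queue-length threshold in packets (hypothetical)

def update_delay_window(cwnd, dwnd, base_rtt, rtt):
    expected = cwnd / base_rtt             # throughput with empty queues
    actual = cwnd / rtt                    # measured throughput
    diff = (expected - actual) * base_rtt  # estimated queued packets
    if diff < GAMMA:
        dwnd += 1                          # path looks uncongested: grow
    else:
        dwnd = max(0, dwnd - diff)         # slight congestion: back off
    return dwnd

print(update_delay_window(cwnd=100, dwnd=20, base_rtt=0.05, rtt=0.051))
```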
-
Odira Elisha Abade, Katsuhiko Kaji, Nobuo Kawaguchi
Article type: Regular Papers
Subject area: Network Quality and Control
2013 Volume 21 Issue 1 Pages 131-144
Published: 2013
Released on J-STAGE: January 15, 2013
Explicit multi-unicast (XCAST) has been proposed as a multicasting scheme with complementary scaling properties that can solve the scalability problems of conventional IP multicast. XCAST is suitable for videoconferencing, online games, and IPTV. This paper deals with QoS provisioning in XCAST networks using Differentiated Services (DiffServ). We show that integrating DiffServ with XCAST is a non-trivial problem due to inherent architectural differences between the two. We then propose a scheme called QS-XCAST that uses dynamic DSCPs to adapt to the heterogeneity of receivers in an XCAST network. We also provide an algorithm for harmonizing the receiver-driven and sender-driven QoS approaches of XCAST and DiffServ, thereby determining the correct DSCP-PHB for all links in an XCAST network. Simulating with OMNeT++, we evaluate QS-XCAST on four metrics: throughput, average per-hop delay, link utilization, and forwarding fairness to other traffic in the network. Our solution eliminates the DSCP confusion and collusion attack problems to which naive XCAST QoS provisioning is vulnerable. It also offers more efficient bandwidth utilization, better forwarding fairness, and lower traffic load than existing XCAST.
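As an illustrative fragment (the field names and merge rule are our assumptions, not QS-XCAST's actual algorithm), harmonizing receiver-driven requests with sender-side marking can be pictured as each branching router marking a packet with the strongest class that any receiver downstream of that link still needs:

```python
# Illustrative sketch only: pick the DSCP for each outgoing link as the
# strongest class requested by any receiver reached through that link, so
# no downstream receiver is under-served.
CLASS_RANK = {"BE": 0, "AF11": 1, "AF21": 2, "EF": 3}  # hypothetical ordering

def dscp_for_link(receivers_via_link, requested_dscp):
    """receivers_via_link: receiver ids; requested_dscp: id -> DSCP name."""
    return max((requested_dscp[r] for r in receivers_via_link),
               key=CLASS_RANK.__getitem__)

requested = {"r1": "BE", "r2": "EF", "r3": "AF21"}
print(dscp_for_link({"r1", "r3"}, requested))  # AF21: r2 is not on this branch
print(dscp_for_link({"r1", "r2"}, requested))  # EF
```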
-
Tetsuya Sakai
Article type: Regular Papers
Subject area: Web Intelligence
2013 Volume 21 Issue 1 Pages 145-155
Published: 2013
Released on J-STAGE: January 15, 2013
Given an ambiguous or underspecified web search query, search result diversification aims at accommodating different user intents within a single “entry-point” result page. However, some intents are informational, for which many relevant pages may help, while others are navigational, for which only one web page is required. We propose new evaluation metrics for search result diversification that consider this distinction, as well as the concordance test for quantitatively comparing the intuitiveness of a given pair of metrics. Our main experimental findings are: (a) in terms of discriminative power, which reflects statistical reliability, the proposed metrics, DIN#-nDCG and P+Q#, are comparable to intent recall and D#-nDCG, and possibly superior to α-nDCG; (b) in terms of the concordance test, which quantifies the agreement of a diversity metric with a gold-standard metric representing a basic desirable property, DIN#-nDCG is superior to other diversity metrics in its ability to reward both diversity and relevance at the same time. Moreover, both D#-nDCG and DIN#-nDCG significantly outperform α-nDCG in their ability to reward diversity, to reward relevance, and to reward both at the same time. In addition, we demonstrate that the randomised Tukey's Honestly Significant Differences test, which takes the entire set of available runs into account, is substantially more conservative than the paired bootstrap test, which considers only one run pair at a time; we therefore recommend the former approach for significance testing when a set of runs is available for evaluation.
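A minimal sketch of the concordance test as described (our reading; all scores below are made up): over run pairs where two candidate metrics disagree on which run is better, count how often each metric agrees with a gold-standard metric's preference:

```python
# Sketch of a concordance test: a, b, gold map run names to scores. For
# each run pair on which metrics A and B disagree, credit whichever one
# matches the gold-standard metric's preference.
from itertools import combinations

def concordance(a, b, gold):
    a_wins = b_wins = 0
    for x, y in combinations(sorted(a), 2):
        pref_a = a[x] - a[y]
        pref_b = b[x] - b[y]
        pref_g = gold[x] - gold[y]
        if pref_a * pref_b < 0:      # A and B disagree on this pair
            if pref_a * pref_g > 0:
                a_wins += 1          # A matches the gold standard
            elif pref_b * pref_g > 0:
                b_wins += 1
    return a_wins, b_wins

a = {"run1": 0.3, "run2": 0.5, "run3": 0.4}
b = {"run1": 0.6, "run2": 0.2, "run3": 0.4}
gold = {"run1": 0.1, "run2": 0.9, "run3": 0.5}
print(concordance(a, b, gold))  # A agrees with gold on all disputed pairs
```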