The fusion of automobile technology and information technology is now progressing at a rapid pace. Information technology is being applied to automobiles in two ways. One is its role in enhancing driving performance, to the point that some cars have begun to resemble “computers that move.” The other key area is Intelligent Transport Systems (ITS), which provide innovative services relating to transport and traffic management and can enable car users to be better informed and to use the technology more effectively. A probe vehicle system, which utilizes the sensor data of individual automobiles, has become a new trend in ITS service deployment for enhancing car telematics; such a system is expected to provide a platform for large amounts of automobile-based data. This paper describes various methods of fusing Internet and automobile technologies. It also explores and examines in depth the interfaces and forms of privacy protection that can be integrated into probe vehicle systems and can serve as a basis for various services.
The Live E! project, a wide-area sensor network research consortium, was started in 2005. In this paper, we describe the evolution of sensor networking through the activities of the Live E! project, and we also discuss future prospects for sensor networking.
Accessibility of information communication technology (ICT) devices and of the information obtained through them is essential for daily life. It has huge social significance, and ICT can be a door to society, especially for people with disabilities. Accordingly, many countries have enacted regulations to mandate or advance universal access to information and ICT devices. Standards organizations such as the World Wide Web Consortium (W3C) have developed technology standards and guidelines for accessibility that enable ICT providers to evaluate their technologies. This article introduces the history of ICT accessibility and describes technologies that help developers create and validate accessible content. The state of the art in accessibility technologies, including “crowd accessibility,” is also described.
Preferences regarding idol poses vary, and there is demand for selecting idol images according to pose. We present an agent that classifies idols in still images by pose. In this work, we focus on still images of idols wearing swimsuits. For each image, we create feature vectors, such as line segments indicating the size, location, and orientation of ten body parts (head, torso, upper/lower arms, and upper/lower legs), based on Eichner's Stickman Pose Estimation. Moreover, to improve classification accuracy, we propose a Human Pose Guide Ontology (HPGO) to guide and constrain the feature vectors. Finally, we evaluate our approach and show the effectiveness of HPGO.
Many surprising recipes exist on user-generated recipe sites. The simplest way to find a surprising recipe is to use a search function; however, recipe titles do not always contain the keyword “surprise,” so surprising recipes cannot be found easily. In this paper, we propose a system that extracts surprising ingredients as a first step toward extracting surprising recipes from a recipe site. Our system uses RF-IIF (Recipe Frequency-Inverse Ingredient Frequency), a measure based on TF-IDF. RF-IIF scores how surprising an ingredient is for a given dish based on the generality and the rarity of the ingredient. We extract the ingredients whose RF-IIF ranks are in the top 20 as the surprising ingredients for the dish. Through questionnaires evaluating the extracted ingredients, we verified the effectiveness of the proposed method for extracting surprising ingredients.
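By analogy with TF-IDF, an RF-IIF-style score might be computed as sketched below. The exact formula in the paper is not given in the abstract, so the function names, smoothing, and log base here are hypothetical; the sketch only illustrates the general/rare trade-off the abstract describes.

```python
import math

def rf_iif(ingredient, dish_recipes, all_recipes):
    """Toy RF-IIF-style score (hypothetical; by analogy with TF-IDF).

    dish_recipes: list of recipes (ingredient lists) for one dish category
    all_recipes:  list of all recipes on the site
    """
    # RF: fraction of this dish's recipes that use the ingredient (generality)
    rf = sum(ingredient in r for r in dish_recipes) / len(dish_recipes)
    # IIF: log-inverse frequency of the ingredient across the site (rarity)
    df = sum(ingredient in r for r in all_recipes)
    iif = math.log(len(all_recipes) / (1 + df))
    return rf * iif

def top_surprising(dish_recipes, all_recipes, k=20):
    """Rank a dish's ingredients by the score and keep the top k."""
    ingredients = {i for r in dish_recipes for i in r}
    ranked = sorted(ingredients,
                    key=lambda i: rf_iif(i, dish_recipes, all_recipes),
                    reverse=True)
    return ranked[:k]
```

An ingredient used often for one dish but rarely across the whole site scores high, matching the intuition of a “surprising” pairing.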
In anthropological and archaeological problems, it is difficult to infer the processes of change between the specific points in time at which ruins were excavated. In this paper, we propose a novel Agent-Based Simulation (ABS) model to extract various possible (“to-be”) processes. As an example, we use ABS to discuss whether native Jomon people or Chinese-Korean immigrants played the major role in agricultural culture in the Yayoi period. The model demonstrates that, even in a situation where many Jomon people introduced the agricultural culture in the early Yayoi period, it is possible that carriers of the genetic traits of Chinese-Korean immigrants formed the largest group three hundred years later. This result suggests the plausibility that those who played the major role in the agricultural culture of the early Yayoi period included many Jomon people, and that people of mixed Jomon and Chinese-Korean descent later became the main carriers. Compared with other applications of ABS in archaeological domains, which only explain the simulation results and the archaeological facts based on the input data and models, our results use ABS as a novel tool to examine various possible processes that could lead to new hypotheses.
Agent programming models such as distributed multiagent models and mobile agent models have been proposed. We have been developing an agent programming model for business applications since 2000 and have applied it to auction systems, a financial trading system, and other applications. Through those applications, we found that the programming model is efficient for developing applications, and especially efficient for scale-out applications. In this paper, we discuss this efficiency with reference to the applications developed using the model.
Financial markets have suffered large-scale crashes, like the 1929 US stock market crash, and the global economy has been influenced significantly as a result. In this study, we focus on the relationship between financial crashes and short-selling, the latter being an important trading method in modern financial markets. To this end, we developed an agent-based model incorporating short-selling, based on a previous study by Friedman and Abraham. Portfolio managers are modeled explicitly, and their investment strategies, namely their leverage levels, are adjusted according to the payoff gradient. As in the previous study, comparison of the numerical results with the analytical solution is used to validate the model. We analyze the influence of short-selling on the statistics of crashes through a series of simulations, in which both macro- and micro-mechanisms are carefully investigated. We find that the volatility of markets with short-selling becomes about twice as large as that of markets without short-selling. Meanwhile, the crash frequency increases, which means that short-selling could make financial markets more unstable. We also find that crashes are caused by agents' accidental losses, which drive other agents to lower their leverage. On the other hand, traders in a market with short-selling become more active right after a crash than those in a market without short-selling.
We developed a simulation-optimization integration system for the purpose of improving the efficiency of the automated transportation system in a large-scale job-shop factory. In such a factory, many semi-finished goods are delivered to and from shops by automatically controlled vehicles. These vehicles are required to move efficiently, since vehicle traffic jams may lower the production efficiency of the shops. To implement the simulation-optimization integration system, we used NETLOGO, a multi-agent simulator, and αPSO, an optimization algorithm, which are appropriate for simulating and optimizing a stochastic complex system such as an automated transportation network. We conducted feasibility studies and found two cases where transportation efficiency improves: optimizing the distance-velocity relation between vehicles increased vehicle flow by 22.1%, and optimizing the green period of the signal at the transportation crossing reduced vehicle travel time by 12.8%.
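The αPSO variant used above is not specified in the abstract, but the general particle swarm optimization scheme it builds on can be sketched as follows. All parameter values (inertia `w`, acceleration coefficients `c1`/`c2`, swarm size, search range) are generic textbook defaults, not the paper's settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal generic PSO sketch (the paper uses αPSO, a variant)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would be a stochastic simulation output (e.g., vehicle flow from the NETLOGO model) rather than a closed-form function.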
This paper reports simulation results of a smart access vehicle (SAV) service for a middle-sized city, Hakodate, Japan, to evaluate the utility of SAV compared with a traditional bus system (i.e., route buses). We also propose the simulation environment SAVSQUID (SAVs Simulator for Qualitative Utility Investigation and Design). One difficulty in introducing such “smart” transportation systems is the hardness of predicting how well the system will work and be used by citizens and/or tourists. Our previous simulation work showed that a smart system could perform better than fixed-route bus systems in large-scale operation in a virtual city under ideal conditions. To confirm this result in a real city, Hakodate, we investigate the effects and features of SAV based on an actual road map, costs, and realistic conditions (e.g., acceleration, deceleration, boarding and alighting time, and more). The simulation with real data shows several interesting properties of the realistic smart system. SAVSQUID consists of three parts: a physical traffic simulator, a vehicle demand/routing controller, and a controller for exhibiting the simulation. SAVSQUID can reveal how service conditions affect performance, and the SAV simulations run on it show several differences from the virtual-city results. This suggests the importance of including realistic conditions, including physical conditions, in such simulations.
In this paper, we develop a new class of iterative mechanisms called VCG-equivalent in expectation mechanisms. To guarantee that sincere strategies are an ex post equilibrium, an iterative mechanism inevitably asks irrelevant queries, which a participant has no incentive to answer sincerely. Such irrelevant queries cause unnecessary leakage of private information and a distinct incentive issue. A VCG-equivalent in expectation mechanism achieves the same allocation as VCG, but its transfers equal those of VCG only in expectation. We show that in a VCG-equivalent in expectation mechanism, sincere strategies constitute a sequential equilibrium. We also develop a general procedure for constructing a VCG-equivalent in expectation mechanism that asks no irrelevant queries. To demonstrate the practical applicability of this idea, we develop a VCG-equivalent in expectation mechanism that can be applied to the Japanese 4G spectrum auction.
A mobile agent is autonomous software that can migrate among different nodes, continuing the execution of its task before and after migration by transferring its program code. In a mobile agent system, cloned mobile agents share the same program code. When such agents migrate from different nodes to the same node, the same program code is transferred in duplicate and data traffic increases. This paper formulates this problem as a generalized assignment problem (GAP) and proposes an agent migration mechanism that prevents duplicate transfers of the same program code.
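For reference, the generalized assignment problem mentioned above has the standard form below. The concrete costs $c_{ij}$ (which in this setting would presumably reflect duplicate code-transfer traffic) and the capacities are assumptions, as the abstract does not specify them.

```latex
\begin{align*}
\min \quad & \sum_{i \in A} \sum_{j \in N} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j \in N} x_{ij} = 1 && \forall i \in A
  && \text{(each agent $i$ is assigned to exactly one node)} \\
& \sum_{i \in A} w_{ij}\, x_{ij} \le b_j && \forall j \in N
  && \text{(capacity $b_j$ of node $j$)} \\
& x_{ij} \in \{0, 1\} && \forall i \in A,\ j \in N
\end{align*}
```

Here $A$ is the set of agents, $N$ the set of nodes, and $w_{ij}$ the resource consumed when agent $i$ is placed on node $j$; GAP is NP-hard in general, so practical migration mechanisms typically rely on heuristics.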
In this study, we developed an applied model based on Axelrod's model for the dissemination of culture, aimed at understanding the effect of information recommendation on social media. Social media is a medium that enables individuals to transmit and receive information, and people influence one another through communication on it. Information recommendation on social media is a valuable function that assists individuals in choosing preferable information from a massive amount of information. However, information recommendation automates part of individuals' decision-making, and there is a concern that its excessive use encourages conformity to the majority view. To address this problem, we need to investigate how information recommendation influences individuals' opinions or interests and, at the same time, how its influence relates to the cultural diversity of the whole site. In our model, we expressed social media as the range of information feedback: a large feedback range represents mass media, and a small feedback range represents restricted communication among friends. As a result of the analysis, we found that aggregated information in the middle-range group was the most useful for bringing about cultural diversity.
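The base model extended above, Axelrod's model of cultural dissemination, can be sketched as follows. This is only the classic model on a grid; the paper's feedback-range extension is not shown, and the grid size, feature count, and step count here are illustrative choices.

```python
import random

def run_axelrod(size=5, n_features=3, n_traits=4, steps=20000, seed=0):
    """Minimal sketch of Axelrod's culture dissemination model on a
    size x size torus. Each site holds a vector of n_features cultural
    traits; similar neighbors interact and become more similar."""
    rng = random.Random(seed)
    grid = [[[rng.randrange(n_traits) for _ in range(n_features)]
             for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        nx, ny = (x + dx) % size, (y + dy) % size
        a, b = grid[x][y], grid[nx][ny]
        shared = sum(a[i] == b[i] for i in range(n_features))
        # interact with probability equal to cultural similarity;
        # on success, copy one differing trait from the neighbor
        if 0 < shared < n_features and rng.random() < shared / n_features:
            i = rng.choice([i for i in range(n_features) if a[i] != b[i]])
            a[i] = b[i]
    # number of distinct cultures remaining = cultural diversity
    return len({tuple(c) for row in grid for c in row})
```

The returned count of distinct cultures is the kind of diversity measure the analysis above tracks as the feedback range varies.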
This paper proposes the Multiagent-based Artificial Bee Colony (M-ABC) algorithm, which improves the ABC algorithm for dynamic environments using only local information (i.e., no global information), and investigates its effectiveness for cooperation among rescue agents in dynamic disaster environments, where victims must be found quickly and efficiently. Intensive simulations of victim rescue in the RoboCup Rescue Simulation System (RCRSS) reveal the following: (1) the M-ABC algorithm can rescue victims faster than the conventional full-search method, and in particular M-ABC distance (one of the proposed M-ABC algorithms) achieves the highest performance; (2) M-ABC distance maintains high performance even in dynamic environments where victims move elsewhere; and (3) M-ABC distance can completely rescue victims in dynamic environments, while the method of Ri-one, the 2012 champion of the RoboCup Rescue Simulation League (RCRSL), cannot.
This paper proposes mathematical forms of a service model based on service-dominant logic, in which the origin of services is explained as the exchange of work between persons for living. If a person is not good at a specific task, that person can exchange the task with another person who is good at it. When many tasks are exchanged among many persons, a service market is established, and service prices are determined by the balance of demand and supply. A person who wants to decrease their working cost by exchanging tasks with suitable partners can move to a market in which the task they are good at is highly priced. Such movement builds up small communities whose members exchange tasks for mutual benefit. This paper conducts service exchange simulations for a deep investigation of service community formation and shows that small communities emerge spontaneously, but their sizes are limited. These limited-size communities nevertheless decrease the persons' average workload to some extent.
We investigate the conditions under which cooperation is dominant in social media, using evolutionary public goods games, and try to clarify the mechanisms by which social media thrive. A situation in which cooperation is dominant corresponds to one in which posting articles and reacting with comments, etc., yields more benefit than doing nothing as a free-rider; in such a case, we can say the social media thrives. It is, however, hard to foresee whether currently popular social media will continue to thrive or whether new social media will thrive in the future. A number of studies have examined situations in which cooperation is dominant using the public goods game, but they assume that users are connected in a complete graph, which is far from actual network structures. We therefore conducted simulation experiments to identify cooperation-dominant situations in two network models, the WS and BA models, which are said to be close to real networks, and we show the differences in the mechanisms for keeping social media thriving in these networks. Our results indicate that in WS-model networks the results are quite similar to those of the complete graph, whereas the BA model exhibits quite different characteristics, and the thriving conditions are relatively easier to satisfy in BA-model networks than in the others.
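The networked public goods game underlying the study above can be sketched as a payoff computation. This is the generic textbook formulation (each player joins the game centered on itself and on each neighbor; cooperators pay a contribution, the pot is multiplied by an enhancement factor `r` and split among participants); the paper's actual parameters and update rule are not given in the abstract.

```python
def public_goods_payoffs(neighbors, coop, r=3.0, cost=1.0):
    """Payoffs in a networked public goods game (generic sketch).

    neighbors: dict mapping each player to a list of its neighbors
    coop:      dict mapping each player to True (cooperate) / False (defect)
    """
    payoff = {v: 0.0 for v in neighbors}
    for center in neighbors:
        # one game per player, involving the player and its neighbors
        group = [center] + list(neighbors[center])
        pot = r * cost * sum(coop[v] for v in group)
        share = pot / len(group)
        for v in group:
            # everyone gets an equal share; cooperators also paid in
            payoff[v] += share - (cost if coop[v] else 0.0)
    return payoff
```

In the simulations described above, this payoff step would be iterated with a strategy-update rule on WS- or BA-model graphs rather than on the small hand-built network used here for illustration.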
Recently, there has been no significant difference among Japanese general contractors' building techniques, so general contractors are beginning to focus on construction management skills. Because there are various employment types, many work tasks, and many relations between tasks, it is very difficult for general contractors to produce the best process plan. Moreover, once a general contractor has created a process plan it considers good, there is no way to evaluate it except by constructing a building in the real world according to that plan. In this research, we built an agent-based simulation model of how interior finishing work proceeds in apartment building construction under a process plan created by a general contractor. We present some simulation results and discuss future possibilities for our simulation model.
P2P streaming today uses constant-bitrate video, primarily because such video is easier to cut into substreams and deliver via multiple peers. However, this method suffers from low reliability of end-to-end throughput, which can cause playback freezes. This paper proposes a variable-bitrate method for P2P streaming that solves this problem. The proposed method outperforms traditional P2P streaming by a large margin and provides a highly resilient streaming platform.
Recently, many streaming services that use TCP (Transmission Control Protocol) have been developed. To keep QoS high in such services, it is important to estimate the available bandwidth of a TCP stream precisely in a short period of time. We propose a real-time available-bandwidth estimation method. Our method has two advantages over conventional bandwidth estimation methods: (1) it does not generate test traffic for the estimation, and (2) it considers network conditions such as jitter and latency. Therefore, our method realizes real-time bandwidth estimation with lower network load and higher precision. Application software can easily use the estimation results through the socket API (Application Programming Interface). We implemented the method in the TCP protocol stack and compared its precision with that of conventional bandwidth estimation tools. The experimental results show that our method outperforms those tools in precision.
A Web application is a typical open system that should adapt dynamically to allow desired levels of flexibility. This adaptivity may be achieved by replacing some of its components at runtime, and a new method is required to ensure that such replacement, or substitution, is safe. This paper proposes a formal framework for safe substitutability in self-adaptive Web applications and presents a substitutability checking method based on integrity. A case study with an adaptive Web application demonstrates the effectiveness of the proposed method.
In this research, we aim to quantify the difficulty of program comprehension during source code reading. We use Near Infra-Red Spectroscopy (NIRS) to measure brain activation. In an experiment with 10 subjects, 8 showed strong brain activation while reading strongly obfuscated programs that are extremely difficult to comprehend. We also normalized the data for each participant and aggregated them for statistical testing. A t-test showed a significant difference (p < 0.001) in mean cerebral blood flow between obfuscated and non-obfuscated programs.
In software configuration management, it is important to separate source code changes into meaningful units before committing them (in short, Task Level Commit). However, developers often commit unrelated code changes in a single transaction. To support Task Level Commit, an existing technique uses the editing history of the source code and enables developers to group the editing operations in the history manually. This paper proposes an automated technique for grouping editing operations in a history based on several criteria, including source files, classes, methods, comments, and edit times. We show how our technique reduces developers' grouping cost compared with the manual approach.
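One of the criteria mentioned above, edit time, lends itself to a simple grouping heuristic: split the history wherever the gap between consecutive operations is large. The sketch below is a hypothetical illustration of that single criterion, not the authors' combined algorithm, and the threshold is an arbitrary choice.

```python
def group_by_time_gap(ops, max_gap=300):
    """Group a list of (timestamp_seconds, description) editing operations.

    Consecutive operations separated by at most max_gap seconds fall
    into the same group; a larger gap starts a new group. This models
    only the 'edit times' criterion from the abstract.
    """
    groups = []
    for ts, desc in sorted(ops):
        if groups and ts - groups[-1][-1][0] <= max_gap:
            groups[-1].append((ts, desc))   # continue the current burst
        else:
            groups.append([(ts, desc)])     # long pause: new group
    return groups
```

A full technique would intersect such time-based groups with groupings by file, class, and method to approximate task-level units.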
A large multi-touch tabletop has remote areas that users cannot reach by hand, which forces users to walk around the tabletop. In this paper, we present a novel remote control technique, which we call HandyScope, that allows users to manipulate those remote areas. Moreover, users can transfer an object between a nearby area and a remote area using the widget. In addition, users perform pull-out, our own bimanual multi-touch gesture, both to invoke HandyScope and to determine an appropriate control-display ratio for pointing at remote areas. This gesture allows multiple users to manipulate remote areas simultaneously without conflicting with other touch gestures. To evaluate the performance of HandyScope, we compared it with direct touch operation. The results show that HandyScope is significantly faster in selection.
In equational logics, the validity of equations on data structures such as natural numbers or lists is formalized as inductive validity, and inductively valid equations are called inductive theorems. In general, it is undecidable whether an equation is an inductive theorem of an equational logic; thus, several decision procedures for inductive validity have been proposed for certain subclasses of conjectures. A decision procedure given by Falke and Kapur (2006) is based on rewriting induction. Toyama (2002) gives a sufficient criterion on conjectures under which the rewriting induction procedure becomes a decision procedure for inductive validity, by reducing the problem of inductive validity to the equivalence of two abstract reduction systems. However, the classes of conjectures with decidable inductive validity obtained by Falke and Kapur and by Toyama are incomparable. In this paper, we extend the known classes of conjectures with decidable inductive validity by combining these two approaches.
In order to improve the reliability of embedded systems, it is desirable to find problems in the early stages of development. When developing an embedded system, a prime concern is how the control software (controller) and the control target (plant) affect each other. However, because their characteristics differ greatly, it is hard to represent and analyze them in a single framework. In this paper, we propose a co-analysis method that combines the design information of both. To give flexibility to the controller design, it is useful to analyze the controller with a symbolic method that supports non-determinism and declarative constraints, while the plant, which follows physical laws, is analyzed numerically. That is, our co-analysis method investigates the behavior of the whole embedded system by combining a symbolic representation of the controller with a numerical analysis of the plant. As a concrete example, we applied the method to a two-wheeled inverted pendulum robot model whose controller is described in SysML and whose plant is described in Simulink, and we confirmed the method's effectiveness through an experiment checking the validity of the system constraints.
“Extract Method” is a refactoring pattern that extracts part of an existing method as a new method. Although Extract Method refactoring is an effective way to decompose long, non-cohesive methods, how developers choose methods for this refactoring is still unexamined, and investigating it is necessary to support the refactoring. In this study, we investigated the differences in size and cohesion between refactored and non-refactored methods in open source software. The results show significant differences in most cases.
In this research, we created smartphone cases with a dimple or a wedge-shaped object attached in order to improve eyes-free, single-handed touch accuracy. We expected that users could use the dimple or wedge-shaped object as a tactile marker for the smartphone screen. In a user study, we evaluated touch accuracy using a smartphone with each case.
Statistical model checking is approximate probabilistic model checking using statistical methods. It was originally developed for CSL, whose formulae may contain multiple probabilistic path quantifiers; however, the applicability of the method is limited, because multiple quantifiers require multiple testing. In this article, we propose PFLTL, an LTL-based probabilistic frequency temporal logic whose formulae have exactly one probabilistic path quantifier, standard temporal operators, and frequency operators. A PFLTL formula can directly and intuitively express a quantitative property of randomized behavior in terms of probability and frequency; probabilistic periodicity, for example, is such a property that cannot be appropriately represented in CSL or existing real-time logics. Using statistical methods effectively, we also develop a statistical PFLTL model checking method that can check models that are untreatable by CSL model checking methods.