Currently, multi-agent models are the main method of disaster evacuation simulation; as a consequence, enormous computational resources are required to execute a large-scale simulation. We propose a new approximate simulation method that computes multi-agent-based simulations more efficiently. In our method, we perform multi-agent simulations in many small areas and calculate the evacuee flow rate. By functionally approximating the evacuee flow rate, approximate models called “partial area models” are obtained. In a large-scale evacuation simulation, the partial area models corresponding to the parameters of each region are tiled on the map. The evacuation progress can then be obtained quickly by calculating the evacuee flow rate between the arranged areas. In this paper, we defined an evacuation agent model and executed a large-area evacuation simulation both with a simple multi-agent model and with the proposed method. We assumed 30,000 evacuees in a 2 km square area of the Amagasaki city coastal area in both simulations. Our method requires less than 1% of the run time of the simple multi-agent model. Despite this drastic reduction in calculation time, the proposed method reproduced the global evacuation dynamics obtained by the simple multi-agent model. For the planning and verification of evacuation plans for wide evacuation areas, a huge number of parameters for evacuees and the target area must be weighed. By improving the efficiency of evacuation simulation, our method makes it possible to elaborate more efficient evacuation plans.
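The core idea above, tiling a map with per-area models and advancing the evacuation by exchanging evacuees between areas, can be sketched as follows. The linear flow-rate function, the area size, and all names are illustrative assumptions, not the authors' fitted partial area models.

```python
# Sketch: evacuee counts per area evolve by flow along directed links.
# A real partial area model would be function-approximated from small-area
# multi-agent runs; here a toy congestion-dependent rate stands in for it.

def flow_rate(density, capacity=1.0, max_rate=50.0):
    """Toy flow-rate model: outflow drops as an area gets congested."""
    congestion = min(density, capacity) / capacity
    return max_rate * (1.0 - congestion)

def step(areas, links, area_size=250.0 ** 2):
    """Advance one time step, moving evacuees along links between areas.

    Assumes at most one outgoing link per area (enough for this sketch).
    """
    moved = {a: 0.0 for a in areas}
    for src, dst in links:
        density = areas[src] / area_size
        out = min(areas[src], flow_rate(density))
        moved[src] -= out
        moved[dst] += out
    return {a: areas[a] + moved[a] for a in areas}

# Tiny example: three areas in a chain; evacuees drain toward the exit.
areas = {"A": 1000.0, "B": 500.0, "exit": 0.0}
links = [("A", "B"), ("B", "exit")]
for _ in range(10):
    areas = step(areas, links)
```

Because each step only updates a count per tiled area rather than simulating individual agents, the cost is independent of the number of evacuees, which is where the reported speed-up comes from.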
Nowadays, ontologies are constructed as basic information technology in various domains such as medical information, engineering design, automated vehicles, and environmental issues. The quality of these ontologies, which serve as the base of domain knowledge, is a critical issue in determining the performance of application systems. Therefore, a number of guidelines for ontology construction have been proposed so far, and some ontology construction and refinement methods have been implemented in support systems. In this study, we propose a new method that focuses on sibling relations in the is-a (subclassOf) hierarchy in addition to super-sub relations, and we implement it in a new ontology-construction support system. In our previous study, we developed a comparison method that focused only on super-sub relations; the new refinement method ensures not merely local but comprehensive consistency of the is-a hierarchy. Evaluation was performed as follows. On five ontologies, five beginners and twelve experts in ontology construction used the new system to test the applicability of its suggestions. We evaluated 108 refinement points randomly extracted from the suggestions. As a result, 105 of the 108 points (97%) were judged to be points that should be refined by the beginners, and 88 of the 108 points (81%) were judged so by the experts. This test thus demonstrated the effectiveness and usefulness of our proposed method.
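The structural relation the method focuses on, sibling classes sharing a parent in the is-a hierarchy, can be extracted mechanically. A minimal sketch under the assumption that the hierarchy is given as (child, parent) pairs; how the authors' system compares siblings and scores refinement suggestions is not reproduced here.

```python
# Sketch: collect sibling sets from subclassOf pairs. Sets with two or
# more members are the candidates a sibling-comparison method would examine.
from collections import defaultdict

def sibling_sets(subclass_of):
    """subclass_of: iterable of (child, parent) pairs.
    Returns parent -> set of children, keeping only real sibling groups."""
    children = defaultdict(set)
    for child, parent in subclass_of:
        children[parent].add(child)
    return {p: cs for p, cs in children.items() if len(cs) >= 2}

hierarchy = [("dog", "animal"), ("cat", "animal"), ("animal", "thing")]
siblings = sibling_sets(hierarchy)
```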
In the theme park problem, it is important to find a coordination algorithm that effectively shortens the visiting time of the entire theme park while guaranteeing individual optimality for each visitor. A previous study proposed a coordination algorithm, called statement-based cost estimate (SCE), that allows each visitor to select a plan that minimizes his or her own visiting time while shortening the visiting time of the entire theme park. However, the improvement in visiting time observed in their experiments using SCE was not sufficient. We therefore considered it necessary to relax the premise constraint of “minimize individual visiting time” to further improve on SCE. In this paper, we propose a framework that further reduces visiting time by considering Pareto optimality. In the proposed framework, each visitor determines several Pareto optimal plans based on the evaluation values calculated using SCE and presents them to a coordination system. The coordination system then searches, among the Pareto optimal plan candidates, for the overall plan that minimizes the predicted total visiting time of the entire theme park. The proposed framework guarantees visitors’ “personal optimality” in the sense of Pareto optimality, and it has the potential to shorten the visiting time of the entire theme park. We conducted a simulation experiment using a coordination algorithm based on the proposed framework and demonstrated its effectiveness.
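The two-stage scheme, each visitor keeping only Pareto optimal plans and the coordinator picking one plan per visitor to minimize the predicted total, can be sketched as below. The plan attributes (`own_time`, an SCE-style congestion estimate `sce`) and the coordinator's objective are illustrative assumptions; the exhaustive product search stands in for whatever search the actual system uses.

```python
# Sketch: Pareto filtering per visitor, then exhaustive coordination.
from itertools import product

def pareto_front(plans):
    """Keep plans not dominated in (own_time, sce): a plan is dropped if
    some other plan is at least as good in both criteria."""
    return [p for p in plans
            if not any(q["own_time"] <= p["own_time"] and q["sce"] <= p["sce"]
                       and q != p for q in plans)]

def coordinate(visitors):
    """Pick one Pareto plan per visitor minimizing a predicted total time
    (here simply own_time + sce summed over visitors, as a stand-in)."""
    fronts = [pareto_front(v) for v in visitors]
    return min(product(*fronts),
               key=lambda combo: sum(p["own_time"] + p["sce"] for p in combo))

visitors = [
    [{"own_time": 30, "sce": 10}, {"own_time": 25, "sce": 20},
     {"own_time": 35, "sce": 15}],          # third plan is dominated
    [{"own_time": 40, "sce": 5}, {"own_time": 20, "sce": 30}],
]
best = coordinate(visitors)
```

Note that the first visitor's selected plan (30 minutes) is not that visitor's individual minimum (25 minutes), which is exactly the relaxation from "minimize individual visiting time" to Pareto optimality described above.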
We propose probabilistic models for predicting future classifiers given labeled data with timestamps collected until the current time. In some applications, the decision boundary changes over time. For example, in activity recognition using sensor data, the decision boundary can vary since user activity patterns dynamically change. Existing methods require additional labeled and/or unlabeled data to learn a time-evolving decision boundary. However, collecting these data can be expensive or impossible. By incorporating time-series models to capture the dynamics of a decision boundary, the proposed model can predict future classifiers without additional data. We developed two learning algorithms for the proposed model on the basis of variational Bayesian inference. The effectiveness of the proposed method is demonstrated with experiments using synthetic and real-world data sets.
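The central idea, treating the classifier's parameters as a time series and extrapolating them to obtain a future classifier without new data, can be illustrated in miniature. A simple per-dimension linear trend stands in here for the paper's time-series models and variational Bayesian inference; `W` would hold weight vectors fitted to labeled data at past timestamps.

```python
# Sketch: extrapolate a sequence of past classifier weight vectors to a
# future time, instead of collecting additional labeled/unlabeled data.
import numpy as np

def extrapolate_weights(W, t_future):
    """W: array of shape (T, d), row t = classifier weights at time t.
    Fits a linear trend per dimension and evaluates it at t_future."""
    t = np.arange(W.shape[0])
    slope, intercept = np.polyfit(t, W, deg=1)  # column-wise least squares
    return slope * t_future + intercept

# Weights drifting linearly over three observed time steps.
W = np.array([[0.0, 1.0],
              [1.0, 2.0],
              [2.0, 3.0]])
w_future = extrapolate_weights(W, t_future=4)
# A future linear classifier would then predict sign(x @ w_future).
```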
In the area of the Semantic Web, RDF datastores are required to quickly search metadata in large-scale RDF data, such as Wikidata and DBpedia in the Linked Open Data (LOD). This paper presents compressed index structures and URI dictionaries for RDF data in order to develop a fast in-memory RDF database system (called FROST). Instead of the complete set of six index types SPO, SOP, PSO, POS, OSP, and OPS over RDF triples, FROST employs only the two index types SPO and OPS, which enable us to compactly store RDF triples in memory. Using the compressed index structures, we develop a fast search method in the datastore system FROST that solves SPARQL queries and returns the query answers from RDF graphs. Our experiments show that (i) FROST outperforms the in-memory RDF frameworks Jena and RDF4J with respect to both fast query processing and saved memory, using the datasets and queries of the LUBM (a benchmarking framework for semantic repositories) and BMDB (RDF Store Benchmarks with DBpedia) benchmarks, and (ii) FROST outperforms the on-disk RDF store Virtuoso with respect to fast query processing, using the LUBM benchmark.
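The two-index layout can be sketched as follows: the same triples stored once in subject-predicate-object order and once in object-predicate-subject order, so triple patterns anchored at either end resolve without the remaining four indexes. Nested dictionaries stand in for FROST's compressed index structures and URI dictionary, and the example URIs are illustrative.

```python
# Sketch: an SPO index plus an OPS index over the same triple set.

class TwoIndexStore:
    def __init__(self):
        self.spo = {}  # subject -> predicate -> set of objects
        self.ops = {}  # object  -> predicate -> set of subjects

    def add(self, s, p, o):
        self.spo.setdefault(s, {}).setdefault(p, set()).add(o)
        self.ops.setdefault(o, {}).setdefault(p, set()).add(s)

    def objects(self, s, p):
        """Answer pattern (s, p, ?o) from the SPO index."""
        return self.spo.get(s, {}).get(p, set())

    def subjects(self, o, p):
        """Answer pattern (?s, p, o) from the OPS index."""
        return self.ops.get(o, {}).get(p, set())

store = TwoIndexStore()
store.add("dbpedia:Tokyo", "rdf:type", "dbo:City")
store.add("dbpedia:Osaka", "rdf:type", "dbo:City")
store.add("dbpedia:Tokyo", "dbo:country", "dbpedia:Japan")
```

Storing each triple twice instead of six times is the memory saving the abstract refers to; the trade-off is that patterns with only the predicate bound cannot use an index prefix directly.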
Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (GNNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset.
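The key step, computing an embedding for an unseen entity from the auxiliary triples that connect it to known entities, can be sketched as a single message-passing layer. Linear relation-specific transforms with mean pooling are an illustrative stand-in for the paper's GNN; all names and values below are assumptions.

```python
# Sketch: embed an OOKB entity by transforming and pooling its known
# neighbours' embeddings, so no retraining of the KBC model is needed.
import numpy as np

def ookb_embedding(aux_triples, entity_emb, rel_transform):
    """aux_triples: (known_head, relation) pairs linking known entities to
    the new entity. Returns the pooled (mean) transformed embedding."""
    messages = [rel_transform[r] @ entity_emb[h] for h, r in aux_triples]
    return np.mean(messages, axis=0)

# Known entities and a single relation with an identity transform.
entity_emb = {"Tokyo": np.array([1.0, 0.0]),
              "Japan": np.array([0.0, 1.0])}
rel_transform = {"located_in": np.eye(2)}
aux = [("Tokyo", "located_in"), ("Japan", "located_in")]
new_emb = ookb_embedding(aux, entity_emb, rel_transform)
```

The resulting vector can then be scored against candidate triples by the existing embedding-based KBC scoring function, which is how queries about the new entity are answered without retraining.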