This paper presents an approach to integrating general-purpose Knowledge Graphs (KGs) and domain-specific KGs. General-purpose KGs cover many topics but lack specific topics or details. KGs extracted from specific domains, on the other hand, usually represent only a small set of entities compared to a general-purpose KG, but describe the entities of their domain in more detail, so the two can complement each other when used together. Current matching approaches are evaluated on datasets with a one-to-one assumption or with relatively small numbers of instances. This research explored matching a KG extracted from specific communities against DBpedia as a general-purpose KG in a real-world case. We combined a traditional matching algorithm (PARIS) with a BERT model to filter the results and random walks to expand the candidate matches. First, we executed PARIS on the entire KG and selected the obtained matches above a threshold. Second, the algorithm embedded the abstracts of the entities using the BERT model, calculated the similarity between the vectors, and filtered the matches. In the last step, the algorithm used the filtered matches as seeds for random walks and created a sub-graph for each KG. The instances of the sub-graphs were then matched using string similarity between the labels and, when abstracts were available on both sides, similarity between their BERT embeddings. We tested the proposed approach between the entire DBpedia and our KG and improved the obtained matches. We found that the generated matches contained many entities with deficient information in DBpedia, so the matching process can be used to identify and complement those entities.
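A minimal sketch of the abstract-similarity filtering step described above, assuming a sentence-transformers BERT model; the abstract does not name the exact checkpoint or threshold, so "all-MiniLM-L6-v2" and 0.8 are illustrative placeholders:

```python
# Filter PARIS candidate matches by cosine similarity of entity abstracts.
# Model name and threshold are assumptions, not the paper's actual choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def filter_matches(candidate_pairs, abstracts_a, abstracts_b, threshold=0.8):
    """Keep candidate pairs whose abstracts are semantically similar.

    candidate_pairs: iterable of (entity_a, entity_b) URIs from PARIS.
    abstracts_a / abstracts_b: dicts mapping entity URI -> abstract text.
    """
    kept = []
    for a, b in candidate_pairs:
        # Per the paper, abstracts are only compared when available on both sides.
        if a not in abstracts_a or b not in abstracts_b:
            continue
        emb_a = model.encode(abstracts_a[a], convert_to_tensor=True)
        emb_b = model.encode(abstracts_b[b], convert_to_tensor=True)
        if util.cos_sim(emb_a, emb_b).item() >= threshold:
            kept.append((a, b))
    return kept
```

The surviving pairs would then serve as the seeds for the random-walk expansion described in the last step.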
Using the knowledge graphs described by event-centered models and published in the Knowledge Graph Reasoning Challenge for Social Issues, we compare the properties of walk-based embeddings, such as DeepGraph, with embeddings based on algebraic models, such as TransE and RotatE, on tasks such as link prediction and type prediction.
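As background (these scoring functions come from the original TransE and RotatE papers, not from this abstract), the algebraic models score a triple $(h, r, t)$ as follows:

```latex
% TransE: relations as translations in R^d
f_{\mathrm{TransE}}(h, r, t) = -\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert
% RotatE: relations as element-wise rotations in C^d, with |r_i| = 1
f_{\mathrm{RotatE}}(h, r, t) = -\lVert \mathbf{h} \circ \mathbf{r} - \mathbf{t} \rVert
```

Walk-based methods instead learn embeddings from entity co-occurrence along sampled graph walks, which is the contrast the comparison explores.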
Japan is an earthquake-prone country, with 1,000 to 2,000 felt earthquakes observed per year. Seismological research is also active, and the Japan Meteorological Agency, the National Research Institute for Earth Science and Disaster Prevention, and local governments have established seismic observation networks. In recent years, various studies have used machine learning techniques to detect and classify earthquakes and to predict their intensity from the large amount of observed seismic waveform data. To create training data for such studies, researchers must specify and collect the location of the hypocenter, the time of occurrence, and the target observation points. In this paper, we construct an earthquake ontology and assign URIs to earthquakes based on observed waveforms and hypocenters, in order to investigate the availability and distribution of earthquake catalogs that can be used as training data.
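An illustrative sketch of minting URIs for earthquake entities from their hypocenter and origin time, using rdflib; the abstract does not specify its URI scheme or ontology vocabulary, so the namespace, property names, and values below are all hypothetical:

```python
# Mint a deterministic URI per earthquake from origin time and hypocenter,
# then attach the catalog attributes as RDF triples. All names and values
# here are placeholders, not the paper's actual scheme.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EQ = Namespace("http://example.org/earthquake/")  # hypothetical namespace

def mint_earthquake_uri(origin_time, lat, lon, depth_km):
    """Build a URI that is reproducible from the hypocenter parameters."""
    return URIRef(EQ[f"{origin_time}_{lat}_{lon}_{depth_km}"])

g = Graph()
quake = mint_earthquake_uri("2021-01-01T00:00:00", 35.0, 140.0, 50)
g.add((quake, RDF.type, EQ.Earthquake))
g.add((quake, EQ.originTime, Literal("2021-01-01T00:00:00", datatype=XSD.dateTime)))
g.add((quake, EQ.latitude, Literal(35.0, datatype=XSD.decimal)))
g.add((quake, EQ.longitude, Literal(140.0, datatype=XSD.decimal)))
g.add((quake, EQ.depthKm, Literal(50, datatype=XSD.decimal)))
print(g.serialize(format="turtle"))
```

Deterministic URIs of this kind let records for the same event, arriving from different observation networks, resolve to a single entity.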