In the IT society, we need a good knowledge infrastructure to fill the gap between computer-generated information and human activities. In this paper, we discuss the Public Knowledge Graph, which serves as such a knowledge infrastructure for our society. First, we introduce the kinds of social tasks we expect to solve with this knowledge infrastructure, then present a list of requirements for a public knowledge graph for this purpose: wide coverage, variety, dynamics, integrity, consistency, fairness, reliability, and robustness. We then discuss how to proceed in building a public knowledge graph, i.e., exploration of knowledge sources, establishment of a solid infrastructure, research on inference, and applications.
Entity Linking (EL), the task of mapping entity names in natural language sentences to resources in large-scale knowledge graphs, is attracting attention as a fundamental technology for question answering, dialogue systems, and other applications. DBpedia Spotlight (DS) was proposed as an EL tool for DBpedia. Although DS supports multiple languages, a target-language model for OpenNLP, a natural language processing library, is required to perform EL specific to a particular language. DS's multilingual model can be used for Japanese EL, but its accuracy is lower than that achieved with OpenNLP's target-language models. As of January 2022, neither a Japanese model for OpenNLP nor a Japanese model for DS has been released. In this study, we aim to develop a Japanese model for DS by introducing the Japanese morphological analyzer Sudachi into DS. We showed the effectiveness of the Japanese model through a comparative evaluation against the multilingual model.
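The role a morphological analyzer plays in EL can be illustrated with a minimal sketch: once token boundaries are available, candidate surface forms are generated from token n-grams and matched against a surface-form dictionary. The tokenizer and dictionary below are stand-ins (real code would call SudachiPy's `Dictionary().create().tokenize()` and DS's surface-form store); the sentence and resource names are illustrative assumptions.

```python
def tokenize(text):
    # Stand-in for Sudachi: the sentence is pre-segmented here for illustration.
    segmented = {"東京タワーは港区にある": ["東京タワー", "は", "港区", "に", "ある"]}
    return segmented[text]

# Tiny mock of a surface-form dictionary mapping mentions to DBpedia resources.
SURFACE_FORMS = {
    "東京タワー": "dbpedia-ja:東京タワー",
    "港区": "dbpedia-ja:港区_(東京都)",
}

def spot(text, max_len=3):
    """Generate token n-grams and keep those found in the dictionary."""
    tokens = tokenize(text)
    spots = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            candidate = "".join(tokens[i:j])
            if candidate in SURFACE_FORMS:
                spots.append((candidate, SURFACE_FORMS[candidate]))
    return spots

print(spot("東京タワーは港区にある"))
```

Because Japanese text has no whitespace, the quality of these token boundaries directly affects which candidate mentions can be spotted, which is why a dedicated analyzer such as Sudachi matters.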
In recent years, research and development of dialogue systems have been carried out actively, and society is becoming one in which humans and dialogue systems coexist. For a chatbot to conduct varied dialogues that take user preferences into account, it must infer the knowledge the user possesses from the user's utterances and provide questions and topics based on that knowledge. In this research, we propose a chat dialogue system that extracts part of the knowledge the user is presumed to have from the user's utterances, retrieves it from DBpedia using SPARQL templates, and accumulates it as a user knowledge graph. By presenting topics based on this user knowledge graph, the system is expected to realize varied dialogues that reflect the user's tastes.
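The SPARQL-template step can be sketched as follows: an entity extracted from a user utterance is inserted into a query template that retrieves related resources from DBpedia, and the results would be accumulated in the user knowledge graph. The template shape, property, and entity are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical SPARQL template: fetch resources linked from the entity's page.
TEMPLATE = """\
SELECT ?related WHERE {{
  <http://ja.dbpedia.org/resource/{entity}> dbo:wikiPageWikiLink ?related .
}} LIMIT 10"""

def build_query(entity):
    # In the actual system, the query would be sent to the DBpedia SPARQL
    # endpoint and the bindings stored in the user knowledge graph.
    return TEMPLATE.format(entity=entity)

print(build_query("ラーメン"))
```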
Wikidata is one of the largest open knowledge graphs and is used as a hub for various domain-specific knowledge graphs. Since its underlying software, Wikibase, can be used as a foundation for constructing knowledge graphs, this paper reports on the construction of an original knowledge graph of Japanese corporation information using Wikibase and its integration with Wikidata.
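One way to realize such an integration is to give each local Wikibase item a statement holding the corresponding Wikidata ID. The sketch below builds the item JSON for this; the property ID `P1`, the label, and the QID are illustrative assumptions, and a real setup would post this payload to the local Wikibase API (`wbeditentity`).

```python
import json

def link_to_wikidata(local_label, wikidata_qid, link_property="P1"):
    """Build item JSON with a statement linking to a Wikidata entity.

    Assumes property P1 on the local Wikibase is a "Wikidata ID"
    external-identifier property (a hypothetical choice for this sketch).
    """
    return {
        "labels": {"ja": {"language": "ja", "value": local_label}},
        "claims": {
            link_property: [{
                "mainsnak": {
                    "snaktype": "value",
                    "property": link_property,
                    "datavalue": {"value": wikidata_qid, "type": "string"},
                },
                "type": "statement",
                "rank": "normal",
            }]
        },
    }

item = link_to_wikidata("サンプル株式会社", "Q12345")
print(json.dumps(item, ensure_ascii=False, indent=2))
```

Storing the Wikidata ID as an external identifier keeps the local graph independent while allowing federated SPARQL queries to join the two datasets.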
The automobile industry is in a VUCA world, often called a "once-in-a-century era of revolution." To respond to this VUCA world, development divisions need to concentrate human resources on the development of future core products and accelerate development. It is therefore necessary to maintain existing business and ensure product quality with fewer human resources than before. To address this problem, we considered using the accumulated knowledge of expert engineers to assist development. Part of this knowledge concerns failures that occurred during development, such as their causes and solutions. The failure ontology proposed in this paper helps extract knowledge about failures from accumulated documents. In this paper, we describe how to construct and use the failure ontology.
Knowledge of local foods contains important wisdom that can be applied to daily life even today. We have been developing applications, designing ontologies, and holding cooking events to study and record information about local food. In this paper, we consider an ontology that can be used to manage cooking events for local food. First, we describe the requirements for managing cooking events and design an ontology for this purpose in UML. In addition, we consider functions to automate the calculation of ingredient quantities for the number of event participants, to examine the substitution of ingredients with alternatives, and to support the creation of educational materials for dietary education on local food. The specifications of these functions will be finalized and implemented in future work.
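The first two planned functions can be sketched minimally: scale a recipe's quantities from its reference serving count to the number of participants, and look up alternatives in a substitution table. The recipe data, serving size, and substitution pairs below are illustrative placeholders, not entries from the actual ontology.

```python
# Illustrative recipe: quantities in grams, for a reference of 4 servings.
RECIPE = {"servings": 4, "ingredients": {"里芋": 400, "味噌": 60}}
# Illustrative substitution table (assumed, not from the ontology).
ALTERNATIVES = {"里芋": ["じゃがいも"]}

def scale_ingredients(recipe, participants):
    """Scale each ingredient quantity to the given number of participants."""
    factor = participants / recipe["servings"]
    return {name: qty * factor for name, qty in recipe["ingredients"].items()}

def substitutes(ingredient):
    """Return known alternative ingredients, or an empty list."""
    return ALTERNATIVES.get(ingredient, [])

print(scale_ingredients(RECIPE, 10))  # quantities for 10 participants
print(substitutes("里芋"))
```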
Terminology in the livestock field appears in various resources, and the lack of a standard causes many problems in data integration. In this study, we develop a common vocabulary for cattle-rearing hygiene management using an ontology. Furthermore, we develop and publish an information presentation system so that the common vocabulary can be used on livestock farms. Finally, we consider the usefulness and potential of the common vocabulary in the livestock field through evaluation by domain experts.
There is a need to explore information efficiently and obtain insights from unstructured enterprise data. Knowledge graphs, which enable the visualization of relationships and support inference, have recently been exploited in business to meet these needs. We also expect to use ontologies, which govern the meaning and perspective of knowledge graphs, in business, but creating ontologies manually is costly. In this paper, we propose two ontology construction methods, one for each type of business data. We demonstrate the effectiveness of the proposed methods through experiments in which each construction method is applied to enterprise data.
The behavior analysis technology "Actlyzer," developed by the authors, realizes complex behavior recognition by combining deep-learning-based recognition and rule-based recognition. In this paper, we outline the specification of the recognition rules scheduled to be adopted in the "Next Actlyzer" and give an example of their description. In addition, we explain the purchasing behavior analysis method realized by "Next Actlyzer" and present the findings obtained from the analysis results.
This paper proposes a novel spatio-temporal scene graph dataset. Spatio-temporal scene graph generation is an essential task in household activity recognition that aims to identify human-object interactions. Constructing a dataset with per-frame object regions and consistent relationship annotations requires extremely high labor costs. Existing datasets sparsely annotate frames sampled from videos, resulting in a lack of dense spatio-temporal correlations. Additionally, existing datasets contain inconsistent relationship annotations, leading to the problem of learning ambiguous temporal associations. Moreover, existing datasets mainly address relationships that can be inferred from a single frame, ignoring the significance of temporal associations. To resolve these issues, we created a simulated dataset with per-frame consistent annotations and introduced a range of relationships that require both spatial and temporal context.
In recent years, interpretability has become a problem: even experts cannot explain the reasoning process of machine learning models. A contest featuring interpretability, the "First Knowledge Graph Reasoning Challenge 2018," was held in Tokyo. A previous study presented a method based on triple embeddings for learning the senses of words. However, information about object co-occurrence, such as location and time, which should be learned at the same time, is lost. Therefore, we propose an inference method that learns the graph structure by means of a graph convolutional network (GCN) and explains important connections on the graph by means of layer-wise relevance propagation (LRP). The experimental results show that the proposed approach reveals the reasoning process using additional knowledge and the propagation of relevance by LRP.
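The core of LRP can be illustrated on a single linear layer: relevance at the layer's outputs is redistributed to its inputs in proportion to each input's contribution z_jk = a_j * w_jk (the epsilon rule). This is a generic sketch of the LRP idea, not the paper's GCN-specific implementation, and the activations and weights are made-up numbers.

```python
def lrp_linear(a, w, relevance, eps=1e-9):
    """Propagate relevance from the outputs back to the inputs of y = a @ w."""
    n_in, n_out = len(w), len(w[0])
    # Contribution of input j to output k.
    z = [[a[j] * w[j][k] for k in range(n_out)] for j in range(n_in)]
    # Total contribution received by each output k.
    z_sum = [sum(z[j][k] for j in range(n_in)) for k in range(n_out)]
    # Each input gets back a share of each output's relevance.
    return [
        sum(z[j][k] / (z_sum[k] + eps) * relevance[k] for k in range(n_out))
        for j in range(n_in)
    ]

a = [1.0, 2.0]                  # input activations (illustrative)
w = [[0.5, 0.0], [0.5, 1.0]]    # weights, shape (2 inputs, 2 outputs)
r = lrp_linear(a, w, relevance=[1.0, 1.0])
print(r)  # total relevance is (approximately) conserved across the layer
```

Applied layer by layer through the GCN, this redistribution yields per-node and per-edge relevance scores, which is what allows important connections on the knowledge graph to be highlighted.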