Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
36th (2022)
Session ID : 2E6-GS-3-05

Common Space Learning with Probability Distributions for Multi-Modal Knowledge Graph
*Kenta HAMA, Takashi MATSUBARA
Abstract

Knowledge graphs are knowledge representations that focus on relationships among objects, and they have been used for question answering systems and information retrieval. As datasets grow larger and incorporate multi-modal representations, it becomes increasingly important to complement missing or insufficient information in a single knowledge graph with other knowledge graphs that carry additional information such as images and attribute values. Entity alignment is the task of finding entities that refer to the same object across different knowledge graphs, and multi-modal entity alignment (MMEA) has been proposed for the entity alignment of multi-modal knowledge graphs. However, MMEA does not adequately account for the granularity of each piece of information, because it represents the information obtained from images, relations, and attribute values as a single point in a common space. In this study, we propose a new method that expresses the granularity of each piece of information as the spread of a probability distribution. The proposed method outperforms MMEA on the entity alignment task between two multi-modal knowledge graphs.
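The abstract does not specify the distributional form, the fusion of modalities, or the alignment score, so the following is only a minimal sketch of the general idea: each modality contributes a diagonal Gaussian embedding whose variance encodes granularity, the modalities are fused as a product of Gaussians (an assumption), and entities are aligned by a 2-Wasserstein distance between the fused distributions (also an assumption). All function names, dimensions, and data here are illustrative and not taken from the paper.

```python
# Illustrative sketch only, not the authors' implementation: entities of two
# multi-modal knowledge graphs are embedded as diagonal Gaussians whose spread
# (variance) encodes the granularity of each information source.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8          # embedding dimension (assumed)
NUM_ENTITIES = 5 # toy number of entities per knowledge graph (assumed)

def gaussian_embedding(num_entities, dim, rng):
    """Return a mean and a positive diagonal variance for each entity."""
    mu = rng.normal(size=(num_entities, dim))
    sigma2 = np.exp(rng.normal(scale=0.1, size=(num_entities, dim)))  # exp keeps variances positive
    return mu, sigma2

# Per-modality Gaussian embeddings for each knowledge graph
# (relational structure, images, attribute values).
modalities_kg1 = {m: gaussian_embedding(NUM_ENTITIES, DIM, rng)
                  for m in ("relation", "image", "attribute")}
modalities_kg2 = {m: gaussian_embedding(NUM_ENTITIES, DIM, rng)
                  for m in ("relation", "image", "attribute")}

def fuse(modalities):
    """Fuse modalities as a product of Gaussians (precision-weighted mean).

    Modalities with small variance (fine granularity) dominate the fused mean,
    while coarse, high-variance modalities contribute less.
    """
    precisions = [1.0 / s2 for _, s2 in modalities.values()]
    fused_sigma2 = 1.0 / sum(precisions)
    fused_mu = fused_sigma2 * sum(p * mu for p, (mu, _) in zip(precisions, modalities.values()))
    return fused_mu, fused_sigma2

def wasserstein2(mu1, s2_1, mu2, s2_2):
    """Squared 2-Wasserstein distance between diagonal Gaussians."""
    return np.sum((mu1 - mu2) ** 2, axis=-1) + np.sum((np.sqrt(s2_1) - np.sqrt(s2_2)) ** 2, axis=-1)

mu1, s2_1 = fuse(modalities_kg1)
mu2, s2_2 = fuse(modalities_kg2)

# Align each entity in KG1 with its nearest entity in KG2 in distribution space.
dist = wasserstein2(mu1[:, None, :], s2_1[:, None, :], mu2[None, :, :], s2_2[None, :, :])
print("predicted alignment:", dist.argmin(axis=1))
```

In a trained model the means and variances would be learned from the graphs rather than sampled randomly; the sketch only shows how a distribution-valued common space lets the spread of each modality influence the alignment score.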

© 2022 The Japanese Society for Artificial Intelligence