Host: The Japanese Society for Artificial Intelligence
Name: 71st SIG-SLUD
Number: 71
Location: [in Japanese]
Date: September 15, 2014
Pages: 02-
This paper proposes a multi-modal corpus consisting of audio-visual recordings, transcribed speech, and annotated body movements of both the science communicators and the visitors who converse throughout the exhibition floor of the National Museum of Emerging Science and Innovation (Miraikan). The corpus can be used in several areas, such as interaction analysis, natural language processing, and video processing. In this paper, the authors show 1) how spontaneous audio-visual data of conversations taking place on the exhibition floor are recorded, 2) how the utterances and body movements of both science communicators and visitors are transcribed and annotated, and 3) how the multi-modal corpus can serve research purposes in each of these fields. It is believed that researchers in numerous fields will gain improved opportunities for collaborative research by getting to know one another's work through this multi-modal corpus, which is expected to become publicly available in the spring of 2015.
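To illustrate how transcribed speech and body-movement annotations might be used together, the following Python sketch aligns the two annotation streams by time. The paper does not specify the corpus schema, so every field name here (speaker role, start/end times, movement label) is an assumption made for illustration only, not the actual format of the Miraikan corpus.

```python
# Illustrative sketch only: the corpus schema is not specified in this paper,
# so the record structure below is an assumption about how one annotated
# conversation segment might be represented.
from dataclasses import dataclass
from typing import List


@dataclass
class Utterance:
    speaker: str      # e.g. "science_communicator" or "visitor" (assumed labels)
    start_sec: float  # utterance start time within the audio-visual recording
    end_sec: float    # utterance end time
    text: str         # transcribed speech


@dataclass
class BodyMovement:
    actor: str        # who performed the movement (assumed field)
    start_sec: float
    end_sec: float
    label: str        # annotated movement category, e.g. "pointing" (assumed)


def overlapping_movements(utt: Utterance,
                          movements: List[BodyMovement]) -> List[BodyMovement]:
    """Return body-movement annotations that overlap an utterance in time,
    a typical first step when relating speech and gesture in interaction analysis."""
    return [m for m in movements
            if m.start_sec < utt.end_sec and m.end_sec > utt.start_sec]
```

Such time-based alignment is only one possible entry point; the released corpus may instead provide its own tier-based annotation files, in which case existing multi-modal annotation tools could be used directly.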