JSAI Technical Report, SIG-SLUD
Online ISSN : 2436-4576
Print ISSN : 0918-5682
71st (Sep, 2014)

Creating a Multi-modal Corpus based on Interdisciplinary Science: Focus on Science Communication on the Exhibition Floor in Science Museum
Ayami JOH, Ryosaku MAKINO, Mayumi BONO, Katsuya TAKANASHI, Shin'ichi SATOH, Yusuke MIYAO

Pages 02-

Abstract

This paper proposes a multi-modal corpus consisting of audio-visual data, transcribed speech, and annotated body movements of both the science communicators and the visitors at the National Museum of Emerging Science and Innovation (Miraikan) as they talk on the exhibition floor. This multi-modal corpus can be used in several areas, such as interaction analysis, natural language processing, and video processing. In this paper, the authors show 1) how spontaneous audio-visual data of conversations taking place on the exhibition floor are recorded, 2) how the utterances of both science communicators and visitors are transcribed and their body movements annotated, and 3) how the multi-modal corpus will be used to achieve the purposes of each of these fields. It is believed that researchers in numerous fields will enjoy improved opportunities for collaborative research by getting to know one another better through the use of this multi-modal corpus, which is expected to become publicly available in the spring of 2015.

© 2014 The Japanese Society for Artificial Intelligence