JSAI SIG Technical Report: Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Online ISSN: 2436-4576
Print ISSN: 0918-5682
71st meeting (September 2014)

Multimodal Corpus Construction through Interdisciplinary Collaboration: Focusing on Science Communication on the Exhibition Floor
Ayami Joh, Ryosaku Makino, Mayumi Bono, Katsuya Takanashi, Shin'ichi Satoh, Yusuke Miyao

p. 02-

Abstract

This paper proposes a multimodal corpus consisting of audio-visual data, transcribed speech, and annotated body movements of science communicators and visitors as they converse on the exhibition floor of the National Museum of Emerging Science and Innovation (Miraikan). The corpus can be used in several areas, such as interaction analysis, natural language processing, and video processing. The authors show 1) how spontaneous audio-visual data of conversations taking place on the exhibition floor were recorded, 2) how the utterances and body movements of both science communicators and visitors were transcribed and annotated, and 3) how the corpus will be used to achieve the goals of each of these fields. It is believed that researchers in numerous fields will enjoy improved opportunities for collaborative research by getting to know one another's work through this multimodal corpus, which is expected to become publicly available in the spring of 2015.
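The abstract describes time-aligned transcripts and body-movement annotations over shared audio-visual recordings. As a minimal sketch of what such a record might look like, the following Python fragment defines a hypothetical segment schema (the class names, tier labels, and file name are illustrative assumptions, not the corpus's actual format) and queries which movement annotations temporally overlap a given utterance:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    tier: str     # annotation tier, e.g. "gesture" or "gaze" (hypothetical labels)
    start: float  # seconds from the start of the recording
    end: float
    label: str

@dataclass
class Utterance:
    speaker: str  # e.g. "SC" (science communicator) or "V" (visitor)
    start: float
    end: float
    text: str

@dataclass
class CorpusSegment:
    video_file: str
    utterances: list = field(default_factory=list)
    annotations: list = field(default_factory=list)

    def overlapping_annotations(self, utt):
        """Return annotations whose time interval overlaps the utterance."""
        return [a for a in self.annotations
                if a.start < utt.end and utt.start < a.end]

# Illustrative usage with invented timings and labels
seg = CorpusSegment(video_file="floor_recording_001.mp4")
seg.utterances.append(Utterance("SC", 3.2, 5.8, "This exhibit shows a humanoid robot."))
seg.annotations.append(Annotation("gesture", 3.5, 4.9, "point: exhibit"))
seg.annotations.append(Annotation("gaze", 6.0, 7.0, "visitor"))

hits = seg.overlapping_annotations(seg.utterances[0])
print([a.label for a in hits])  # only the pointing gesture overlaps the utterance
```

Interval overlap of this kind is the basic query behind the cross-modal analyses the paper envisions, such as relating a communicator's pointing gestures to what is being said at that moment.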

© 2014 The Japanese Society for Artificial Intelligence