Sponsor: The Japanese Society for Artificial Intelligence (JSAI)
Meeting: 74th Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Edition: 74
Venue: Multimedia Conference Room, 2nd floor, Humanities and Social Sciences Research Building, Nishi-Chiba Campus, Chiba University
Date: 2015/07/22
p. 07-
We present a multimodal analysis of storytelling performance in group conversation as evaluated by external observers. A new multimodal corpus, including performance scores for each participant, was collected through a group storytelling task. We extract multimodal features of the storyteller (explanator) and listeners from manual transcriptions of the spoken dialog and from various nonverbal behavior patterns. We also extract multimodal co-occurrence features, such as a storyteller's utterance overlapping with a listener's backchannel. In the experiment, we modeled the relationship between the performance indices and the multimodal features using machine learning techniques. Experimental results show that the highest accuracy, 82% for total storytelling performance (the sum of the index scores), is obtained with a combination of verbal and nonverbal features in a binary classification task.
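The abstract describes binarizing a summed performance score and classifying it from combined verbal and nonverbal features. The following is a minimal, hypothetical sketch of that setup: the feature values and the nearest-centroid classifier are illustrative placeholders, not the paper's actual features or learning method.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Predict 0/1 by distance to each class centroid (illustrative classifier)."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(X_test - c0, axis=1)
    d1 = np.linalg.norm(X_test - c1, axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
n = 40
# Synthetic stand-ins for per-participant feature vectors: e.g. three
# "verbal" features (utterance statistics) concatenated with three
# "nonverbal" features (backchannel/gaze co-occurrence rates).
X_low = rng.normal(loc=0.0, size=(n, 6))    # low-performance group
X_high = rng.normal(loc=1.5, size=(n, 6))   # high-performance group
X = np.vstack([X_low, X_high])
# Binary label: high vs. low total performance score (one plausible
# way to binarize a summed score is a median split).
y = np.array([0] * n + [1] * n)

pred = nearest_centroid_predict(X, y, X)
acc = (pred == y).mean()
print(f"accuracy: {acc:.2f}")
```

On real data one would evaluate with held-out folds rather than on the training set; this sketch only illustrates the verbal-plus-nonverbal feature concatenation and binary labeling described in the abstract.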