IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Softcomputing, Learning>
Estimation of Gestures for Utterance Text Using Conditional Random Fields
Toshiki Hanawa, Shinichi Shirakawa, Dai Hasegawa, Naoya Shioiri, Kouzou Ohara, Hiroshi Sakuta

2016 Volume 136 Issue 3 Pages 308-317

Abstract

The research field of human-like agents, which are often represented by animated characters, has become increasingly active in recent years. Because the motion of such agents influences users' impressions, an agent able to make appropriate gestures can be expected to improve the understandability of its utterances. The content creator's workload, however, increases if he or she must determine when and which gestures the agent should make. This paper attempts to estimate appropriate gestures for a given utterance text using conditional random fields (CRF), which can reduce the effort spent by content creators. We create a dataset consisting of utterance texts and the corresponding gesture labels from educational movie content, and construct a gesture-labeling model with CRF in a supervised learning manner. The performance of gesture estimation is evaluated and compared with a simple existing model. In particular, we focus on metaphoric gestures, which often represent abstract concepts, because such gestures are expected to facilitate users' understanding of those concepts. We empirically confirm that the proposed model can distinctly estimate metaphoric and other gestures.
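To illustrate the kind of sequence labeling the abstract describes, the sketch below decodes gesture labels for an utterance with a linear-chain CRF via the Viterbi algorithm. This is not the authors' model: the label set, feature weights, and transition scores are hand-set toy values chosen only to show how a CRF assigns a gesture label (here `METAPHORIC` vs. others) to each token by combining per-token emission scores with label-transition scores.

```python
# Illustrative linear-chain CRF decoding (Viterbi), not the paper's trained model.
# All weights below are hypothetical toy values for demonstration only.

LABELS = ["O", "METAPHORIC", "OTHER_GESTURE"]

# Hypothetical emission weights: token -> label -> score
EMISSION = {
    "concept":  {"O": 0.0, "METAPHORIC": 2.0, "OTHER_GESTURE": 0.5},
    "abstract": {"O": 0.0, "METAPHORIC": 1.5, "OTHER_GESTURE": 0.2},
    "the":      {"O": 1.0, "METAPHORIC": -1.0, "OTHER_GESTURE": -0.5},
}
DEFAULT = {"O": 0.5, "METAPHORIC": 0.0, "OTHER_GESTURE": 0.0}

# Hypothetical transition weights: previous label -> current label -> score
TRANSITION = {
    "O":             {"O": 0.2, "METAPHORIC": 0.0,  "OTHER_GESTURE": 0.0},
    "METAPHORIC":    {"O": 0.0, "METAPHORIC": 0.5,  "OTHER_GESTURE": -0.5},
    "OTHER_GESTURE": {"O": 0.0, "METAPHORIC": -0.5, "OTHER_GESTURE": 0.5},
}

def viterbi(tokens):
    """Return the highest-scoring label sequence under the toy CRF."""
    # delta[t][y] = best score of any label path ending in y at position t
    delta = [{y: EMISSION.get(tokens[0], DEFAULT)[y] for y in LABELS}]
    back = []
    for tok in tokens[1:]:
        emit = EMISSION.get(tok, DEFAULT)
        scores, ptrs = {}, {}
        for y in LABELS:
            prev, s = max(
                ((yp, delta[-1][yp] + TRANSITION[yp][y]) for yp in LABELS),
                key=lambda p: p[1],
            )
            scores[y] = s + emit[y]
            ptrs[y] = prev
        delta.append(scores)
        back.append(ptrs)
    # Trace back the best-scoring path from the final position
    best = max(LABELS, key=lambda y: delta[-1][y])
    path = [best]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

print(viterbi(["the", "abstract", "concept"]))
# -> ['O', 'METAPHORIC', 'METAPHORIC']
```

In a trained CRF the emission and transition scores would instead be learned from the labeled utterance/gesture corpus, but the decoding step sketched here is the same.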

© 2016 by the Institute of Electrical Engineers of Japan