Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Speech-to-Gesture Generation Using Bi-Directional LSTM Network
Naoshi Kaneko, Kenta Takeuchi, Dai Hasegawa, Shinichi Shirakawa, Hiroshi Sakuta, Kazuhiko Sumi

2019 Volume 34 Issue 6 Pages C-J41_1-12

Abstract

We present a novel framework for automatic speech-driven generation of natural gesture motion. The proposed method consists of two steps. First, a bi-directional LSTM network learns speech-gesture relationships with both forward and backward consistency over long time spans. The network regresses the full 3D skeletal pose of a human from perceptual features extracted from the input audio at each time step. Second, we apply combined temporal filters to smooth the generated pose sequences. We train the network on a speech-gesture dataset recorded with a headset and a marker-based motion capture system. We evaluate different acoustic features, network architectures, and temporal filters to validate the effectiveness of the proposed approach. We also conduct a subjective evaluation comparing our approach against real human gestures. The results show that, in terms of naturalness, our generated gestures are comparable to the "original" human gestures and significantly better than "mismatched" human gestures taken from a different utterance.
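To make the first step concrete, below is a minimal sketch (in PyTorch, which this page does not specify) of the kind of bi-directional LSTM regressor the abstract describes: it maps a sequence of per-frame acoustic feature vectors to per-frame 3D joint positions. The feature dimension, hidden size, and joint count are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SpeechToGestureBiLSTM(nn.Module):
    """Regresses a 3D pose sequence from per-frame acoustic features.

    Dimensions below are illustrative assumptions, not the paper's values.
    """

    def __init__(self, audio_dim=26, hidden_dim=256, num_joints=20):
        super().__init__()
        # bidirectional=True provides the forward/backward temporal
        # context the abstract refers to
        self.lstm = nn.LSTM(audio_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        # linear readout regresses (x, y, z) per joint at every frame
        self.readout = nn.Linear(2 * hidden_dim, num_joints * 3)

    def forward(self, audio_features):
        # audio_features: (batch, time, audio_dim)
        hidden, _ = self.lstm(audio_features)   # (batch, time, 2*hidden_dim)
        pose = self.readout(hidden)             # (batch, time, num_joints*3)
        return pose.view(pose.size(0), pose.size(1), -1, 3)

# usage: a 100-frame feature sequence yields a (1, 100, 20, 3) pose sequence
model = SpeechToGestureBiLSTM()
poses = model(torch.randn(1, 100, 26))
```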

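The second step, the "combined temporal filters", can be sketched in the same spirit. The exact filter combination is not stated on this page, so the snippet below assumes one plausible pairing: a moving-average filter followed by a Savitzky-Golay filter, applied along the time axis of the regressed pose array.

```python
from scipy.ndimage import uniform_filter1d
from scipy.signal import savgol_filter

def smooth_pose_sequence(poses, avg_window=5, sg_window=9, sg_order=3):
    """Smooth a (time, num_joints * 3) array of regressed coordinates.

    Window sizes here are illustrative; the paper evaluates its filter
    choices experimentally.
    """
    # moving average removes frame-to-frame jitter
    smoothed = uniform_filter1d(poses, size=avg_window, axis=0)
    # Savitzky-Golay preserves the overall motion trajectory while
    # suppressing remaining high-frequency noise
    return savgol_filter(smoothed, window_length=sg_window,
                         polyorder=sg_order, axis=0)
```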
© The Japanese Society for Artificial Intelligence 2019