Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: September 09, 2018 - September 12, 2018
We have previously developed a speech-driven embodied entrainment CG character system called InterActor, which automatically generates communicative motions and actions such as nods for entrained interaction from speech rhythm, using voice input alone. However, the conventional InterActor generates only positive actions; when a speaker delivers negative speech with high speech activity, the system may reinforce that negative speech by responding to negative verbal content with positive gestures. In this paper, we develop an advanced speech-driven embodied entrainment character system whose behavior is selected according to the speaker's state, estimated from the speech activity and the semantic orientations of the words in the utterance. The system converts the speaker's words to text by speech recognition and estimates the speaker's state from the speech activity and the character strings in the converted text, using a database that assigns each word a quantified semantic orientation from which the associated emotion is estimated. Based on this estimate, the system automatically generates either embodied entrainment motions or soothing and activating motions. Finally, we evaluate the system through sensory evaluations.
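To make the described pipeline concrete, the sketch below shows one possible way to organize the state estimation and motion selection: recognized text is scored against a semantic-orientation lexicon, combined with a speech-activity measure, and mapped to an entrainment, soothing, or activated motion category. The lexicon entries, thresholds, and function names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical semantic-orientation lexicon: each word maps to a score in
# [-1.0, 1.0], where negative values indicate negative orientation.
SEMANTIC_ORIENTATION = {
    "great": 0.8,
    "happy": 0.7,
    "tired": -0.5,
    "awful": -0.9,
}

@dataclass
class SpeakerState:
    speech_activity: float  # e.g., ratio of voiced frames in the utterance
    orientation: float      # mean semantic orientation of recognized words

def estimate_state(recognized_text: str, speech_activity: float) -> SpeakerState:
    """Estimate the speaker's state from recognized text and speech activity."""
    scores = [SEMANTIC_ORIENTATION.get(w, 0.0) for w in recognized_text.lower().split()]
    orientation = sum(scores) / len(scores) if scores else 0.0
    return SpeakerState(speech_activity, orientation)

def select_motion(state: SpeakerState,
                  activity_threshold: float = 0.5,
                  negative_threshold: float = -0.2) -> str:
    """Map the estimated state to a motion category (one plausible rule set)."""
    if state.orientation <= negative_threshold:
        # Negative content: avoid reinforcing it with positive gestures.
        return "soothing" if state.speech_activity >= activity_threshold else "activated"
    # Positive or neutral content: keep the original entrained (nodding) behavior.
    return "entrainment"

# Usage example with the toy lexicon above:
state = estimate_state("that was awful", speech_activity=0.8)
print(select_motion(state))  # -> "soothing"
```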