The Proceedings of Design & Systems Conference
Online ISSN : 2424-3078
2022.32
Session ID : 2307
Kansei evaluation for the impression of emotional expression given to avatars
*Tetsuo KASAHARA, Takashi OYAMA, Teruaki ITO
Abstract

This study estimates a speaker's emotion from both the voice signal and the utterance text. First, acoustic features are extracted with openSMILE from speech captured through a microphone, and a machine-learning classifier assigns them to emotion classes. In parallel, the speech is transcribed by a speech recognition engine, morphological analysis is performed on the resulting string, and emotions are classified from the text. At the same time, the speaker's facial expressions and head and neck movements are tracked by image analysis of video of the speaker. The emotion estimates are then fused and projected onto an avatar, so that the avatar's facial expressions and movements convey the speaker's emotions.
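The fusion of the two estimation channels (voice-signal and utterance-text) could be sketched as a late-fusion step over per-class probabilities. The emotion classes, fusion weight, and function names below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: late fusion of acoustic- and text-based emotion
# estimates. Classes and the 0.5 default weight are assumptions.
EMOTIONS = ["neutral", "joy", "anger", "sadness"]

def fuse_emotions(acoustic_probs, text_probs, w_acoustic=0.5):
    """Weighted average of per-class probabilities from the
    voice-signal channel and the utterance-text channel."""
    fused = {}
    for e in EMOTIONS:
        fused[e] = (w_acoustic * acoustic_probs.get(e, 0.0)
                    + (1.0 - w_acoustic) * text_probs.get(e, 0.0))
    # Normalize so the fused scores sum to 1.
    total = sum(fused.values()) or 1.0
    return {e: p / total for e, p in fused.items()}

def dominant_emotion(probs):
    """Pick the class with the highest fused probability."""
    return max(probs, key=probs.get)
```

For example, fusing an acoustic estimate of `{"joy": 0.6, "neutral": 0.4}` with a text estimate of `{"joy": 0.5, "anger": 0.5}` would yield "joy" as the dominant class; the actual classifiers (e.g. trained on openSMILE features) would supply these probabilities.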

© 2022 The Japan Society of Mechanical Engineers