Proceedings of the Design & Systems Conference
Online ISSN : 2424-3078
Session ID: 2307

Kansei Evaluation of the Impression Produced by Emotional Expressions Given to an Avatar
*笠原 鉄音, 大山 剛史, 伊藤 照明

Abstract

This study estimates the speaker's emotion from both the voice signal and the transcribed utterance text. First, acoustic features are extracted with openSMILE from speech captured through a microphone, and an emotion is estimated by classifying those features into emotion classes with machine learning. In parallel, a speech recognition engine transcribes the utterance, morphological analysis is applied to the transcribed text, and a second emotion estimate is obtained by classifying that text. At the same time, the speaker's facial expressions and head and neck movements are tracked by image analysis of video of the speaker. The estimated emotions are then combined and mapped onto an avatar, so that the avatar's facial expressions and movements express the speaker's emotions.
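The abstract describes two emotion estimates (one from acoustic features, one from the transcribed text) that are combined before being projected onto the avatar. The paper does not specify the combination method, so the following is only a minimal sketch of one plausible approach, a late fusion by weighted averaging of per-class probabilities; the class set and weight are hypothetical.

```python
# Hypothetical emotion classes; the paper does not list the actual class set.
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def fuse_emotions(p_voice, p_text, w_voice=0.5):
    """Late fusion of two modality-specific emotion estimates.

    p_voice, p_text: per-class probabilities from the acoustic and
    text classifiers, in the order of EMOTIONS.
    w_voice: weight given to the acoustic modality (assumed 0.5).
    Returns the fused emotion label and the fused distribution.
    """
    # Weighted average of the two probability vectors.
    fused = [w_voice * v + (1.0 - w_voice) * t for v, t in zip(p_voice, p_text)]
    # Renormalize so the result is again a probability distribution.
    total = sum(fused)
    fused = [p / total for p in fused]
    # The fused emotion is the class with the highest combined probability.
    best = max(range(len(fused)), key=fused.__getitem__)
    return EMOTIONS[best], fused

# Example: voice strongly suggests "happy", text agrees more weakly.
label, dist = fuse_emotions([0.1, 0.7, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1])
# label == "happy"
```

The fused label would then drive the avatar's facial expression and motion; a real system would also have to smooth the estimate over time so the avatar does not flicker between emotions on noisy frames.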

© 2022 The Japan Society of Mechanical Engineers