IEEJ Transactions on Electronics, Information and Systems (電気学会論文誌C)
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Intelligence, Robotics>
Mathematical Representation of Emotion by Deep Multi-task Learning Based on a Modality-Integrated Recognition Model
原田 誠一, 佐久間 拓人, 加藤 昇平

2020, Vol. 140, No. 12, pp. 1343-1351

Abstract

To emulate human emotions in robots, a mathematical representation of emotion is important for every component of affective computing, such as emotion recognition, generation, and expression. In methods that train a DNN on a single modality and represent emotions as vectors of continuous values (an Emotional Space), the acquired Emotional Space is likely to depend on that modality, and the number of dimensions of the Emotional Space must be considered. In this study, we aim to acquire a modality-independent Emotional Space. We propose a method that acquires an Emotional Space by integrating multiple modalities in a DNN and combining an emotion recognition task with a task that unifies the Emotional Spaces of the individual modalities. Through experiments with audio-visual data, we confirmed, across various dimensionalities of the Emotional Space, that Emotional Spaces acquired from single modalities differ from one another and that the proposed method acquires a modality-independent Emotional Space. We also investigated the trade-off between the recognition score and the unification score as the number of dimensions of the Emotional Space is changed. Finally, we show that, under this paper's experimental conditions, the proposed method can acquire a sufficient Emotional Space with a small number of dimensions, such as five or six.
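The abstract's core idea, training with an emotion recognition loss plus a unification loss that pulls the per-modality embeddings into a shared Emotional Space, can be sketched as follows. This is a minimal illustration only: the dimensions, the single linear layers standing in for the encoders, the softmax classifier head, and the weighting `lam` are all assumptions for the sketch, not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): audio/visual features -> 5-D Emotional Space
D_AUDIO, D_VISUAL, D_EMO, N_CLASSES = 8, 10, 5, 4

# Per-modality encoders, sketched here as single linear maps
W_a = rng.normal(size=(D_AUDIO, D_EMO))
W_v = rng.normal(size=(D_VISUAL, D_EMO))
# Shared classifier head from the Emotional Space to emotion classes
W_c = rng.normal(size=(D_EMO, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(x_audio, x_visual, y_onehot, lam=1.0):
    """Recognition loss (both modalities) + unification loss between embeddings."""
    e_a = x_audio @ W_a    # audio embedding in the Emotional Space
    e_v = x_visual @ W_v   # visual embedding in the Emotional Space
    # Recognition task: classify emotions from each modality's embedding
    p_a = softmax(e_a @ W_c)
    p_v = softmax(e_v @ W_c)
    rec = (-np.mean(np.sum(y_onehot * np.log(p_a + 1e-12), axis=1))
           - np.mean(np.sum(y_onehot * np.log(p_v + 1e-12), axis=1)))
    # Unification task: pull the two modalities' embeddings together (MSE)
    uni = np.mean((e_a - e_v) ** 2)
    return rec + lam * uni

# Toy batch of 3 samples with random features and one-hot emotion labels
x_a = rng.normal(size=(3, D_AUDIO))
x_v = rng.normal(size=(3, D_VISUAL))
y = np.eye(N_CLASSES)[[0, 2, 1]]
loss = multitask_loss(x_a, x_v, y)
```

In a real training loop the two losses would be minimized jointly by gradient descent; the weight `lam` controls the trade-off between recognition accuracy and how tightly the modalities' Emotional Spaces are unified, mirroring the recognition/unification trade-off the abstract investigates.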

© 2020 The Institute of Electrical Engineers of Japan