Proceedings of the 35th Annual Conference of JSAI (2021)
Session ID: 4D3-OS-4b-04
Online ISSN: 2758-7347

Hyperspherical Representation of Emotion by Combining Recognition and Unification Tasks Based on Multimodal Fusion
*Seiichi HARATA, Takuto SAKUMA, Shohei KATO
Abstract

To emulate human emotions in agents, a mathematical representation of emotion (an emotional space) is essential for each component, such as emotion recognition, generation, and expression. This study aims to model human emotion perception by acquiring a modality-independent emotional space that extracts shared emotional information from different modalities. We propose a method for acquiring a hyperspherical emotional space by fusing multiple modalities in a DNN and jointly training an emotion recognition task and a unification task. The emotion recognition task learns the representation of emotions, and the unification task learns to map each modality into an identical emotional space. In experiments with audio-visual data, we confirmed that, under this paper's experimental conditions, the proposed method can adequately represent emotions in a low-dimensional hyperspherical emotional space. We also confirmed that the proposed method's emotional representation is modality-independent by measuring the robustness of emotion recognition across available modalities in a modality ablation experiment.
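The abstract does not give implementation details, so the following is only a minimal sketch, assuming PyTorch, of one way such a two-task setup could be wired: a small MLP encoder per modality whose output is L2-normalized onto the unit hypersphere, a shared linear head for the recognition task, and a cosine-distance term between paired audio and visual embeddings standing in for the unification task. All module names, layer sizes, and the specific form of the unification loss are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    # Hypothetical encoder: maps one modality's features onto the unit hypersphere.
    def __init__(self, in_dim: int, emb_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2 normalization constrains embeddings to a hypersphere.
        return F.normalize(self.net(x), dim=-1)

class HypersphericalEmotionModel(nn.Module):
    # Assumed structure: one encoder per modality, one shared recognition head.
    def __init__(self, audio_dim: int, visual_dim: int, emb_dim: int, n_classes: int):
        super().__init__()
        self.audio_enc = ModalityEncoder(audio_dim, emb_dim)
        self.visual_enc = ModalityEncoder(visual_dim, emb_dim)
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        return self.audio_enc(audio), self.visual_enc(visual)

def joint_loss(model, audio, visual, labels):
    za, zv = model(audio, visual)
    # Recognition task: classify the emotion from each modality's embedding.
    rec = (F.cross_entropy(model.classifier(za), labels)
           + F.cross_entropy(model.classifier(zv), labels))
    # Unification task (assumed form): pull the two modalities' embeddings
    # of the same sample together via cosine distance on the sphere.
    uni = (1.0 - F.cosine_similarity(za, zv, dim=-1)).mean()
    return rec + uni

# Usage with dummy data (dimensions are illustrative only):
model = HypersphericalEmotionModel(audio_dim=40, visual_dim=512, emb_dim=3, n_classes=7)
audio, visual = torch.randn(8, 40), torch.randn(8, 512)
labels = torch.randint(0, 7, (8,))
loss = joint_loss(model, audio, visual, labels)
loss.backward()

In this sketch the L2 normalization is what makes the space hyperspherical, and the small emb_dim reflects the paper's claim that a low-dimensional space suffices; a shared classifier head encourages both modalities to occupy the same space, which a modality ablation test (dropping one input at inference) could then probe.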

© 2021 The Japanese Society for Artificial Intelligence