The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Papers on Special Issue Subject "Kansei and Emotion -Sensing, Measuring, Modeling, and Engineering Application-"
Exploring the cross-modal perception of face and voice through the lens of two-dimensional emotion space
Haruka Shoda, Haru Nitta, Noriko Suzuki, Waka Kishimoto, Mamiko Sakata

2016 Volume 18 Issue 4 Pages 339-352

Abstract

Humans communicate emotions through both their faces and voices. These emotions are generally congruent, but they sometimes conflict across modalities. We investigated the perception of such emotionally congruent and conflicting expressions through the lens of a two-dimensional emotion space (2DES). As experimental stimuli, we audio- and video-recorded a male actor's expressions portraying four emotions: happy, angry, sad, and relaxed. Participants were exposed to 16 combinations of faces and voices and rated their perceived degree of valence (positive–negative) and activity (high–low), with the reaction time recorded for each rating. Participants also chose which emotion the actor expressed. Results showed a perceptual inconsistency between valence and activity. For valence, participants perceived negative emotions when watching a negative face, even when it was paired with a positive voice. No such perceptual bias between modalities was found for the activity ratings. These results suggest that the mechanism supporting human cross-modal processing differs as a function of the dimension of emotion. Future directions of research are discussed through our proposed model of the audio-visual perception of emotions.
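The following minimal sketch (illustrative only, not code from the paper) lays out the four portrayed emotions in the 2DES using the conventional circumplex quadrant assignments, and enumerates the 16 face-voice stimulus pairings described in the abstract, labeling each as congruent or conflicting. The coordinate values and function names are assumptions for illustration.

```python
# Illustrative sketch (not from the paper): the four portrayed emotions placed in the
# two-dimensional emotion space (2DES), using conventional circumplex quadrants,
# plus the 16 face-voice stimulus combinations described in the abstract.

from itertools import product

# Assumed quadrant coordinates (valence, activity) in [-1, 1]; signs follow the
# usual circumplex reading, not values reported in the paper.
EMOTION_2DES = {
    "happy":   (+1.0, +1.0),  # positive valence, high activity
    "angry":   (-1.0, +1.0),  # negative valence, high activity
    "sad":     (-1.0, -1.0),  # negative valence, low activity
    "relaxed": (+1.0, -1.0),  # positive valence, low activity
}

def stimulus_set():
    """All 16 face-voice pairings (4 facial x 4 vocal emotions)."""
    emotions = list(EMOTION_2DES)
    return list(product(emotions, emotions))  # (face_emotion, voice_emotion)

def is_congruent(face, voice):
    """A pairing is emotionally congruent when face and voice portray the same emotion."""
    return face == voice

if __name__ == "__main__":
    for face, voice in stimulus_set():
        tag = "congruent" if is_congruent(face, voice) else "conflicting"
        v, a = EMOTION_2DES[face]
        print(f"face={face:8s} voice={voice:8s} [{tag}] face 2DES=({v:+.0f},{a:+.0f})")
```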

© 2016 Non-Profit Organization, Human Interface Society