Cognitive Studies: Bulletin of the Japanese Cognitive Science Society
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Feature: Multisensory Communication
Cross-Modal Perception of Emotion in Facial Expression and Non-Sense Voice: Effects of Information Intensities and Reliabilities
Kota Arai, Yasuyuki Inoue, Kazuya Ono, Shoji Itakura, Michiteru Kitazaki

2011 Volume 18 Issue 3 Pages 428-440

Abstract
Human beings perceive others' emotions from facial expressions and speech prosody. Although these cues arrive through different modalities, they inevitably interact in the perception of emotion. We investigated the cross-modal modulation of emotion using facial expressions and nonsense emotive voices, and the effects of varying the strength and reliability of the emotions contained in the stimuli. We found that the emotions perceived in faces and in voices were modulated in the direction of the simultaneously presented but to-be-ignored voices and faces, respectively. This cross-modal modulation was more pronounced when the dominant emotion in the judged stimulus was consistent with the rated emotion, especially for voice ratings. The strength of the emotion in the judged stimulus had no effect on the cross-modal modulation. When the reliability of the judged stimulus was degraded by applying a spatial-frequency low-pass filter to the faces, the cross-modal modulation was no longer evident, even when the dominant emotion in the judged stimulus was consistent with the rated emotion. These results suggest that the cross-modal modulation of emotions is not fully accounted for by a weak fusion model of linear summation, and that non-linear components reflecting the different emotions and the congruency of multimodal emotions must be considered.
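The weak fusion model mentioned above treats the combined percept as a reliability-weighted linear sum of the unimodal estimates. Below is a minimal Python sketch of that baseline model (the model the results argue against, not the authors' own implementation); the function name, the emotion scale, and the numeric values are illustrative assumptions, not values from the paper.

# Sketch of a "weak fusion" (linear summation) model of cue combination,
# assuming reliability-proportional weights as in standard maximum-likelihood
# cue integration. All names and numbers here are hypothetical.

def weak_fusion(face_emotion: float, voice_emotion: float,
                face_reliability: float, voice_reliability: float) -> float:
    """Predict the combined emotion rating as a linear weighted sum,
    with each cue weighted by its relative reliability."""
    w_face = face_reliability / (face_reliability + voice_reliability)
    w_voice = 1.0 - w_face
    return w_face * face_emotion + w_voice * voice_emotion

# Example: a clearly happy face (+0.8) paired with a mildly angry voice (-0.2).
# Low-pass filtering the face would lower face_reliability, shifting the
# model's prediction toward the voice.
print(weak_fusion(0.8, -0.2, face_reliability=4.0, voice_reliability=1.0))  # 0.6

On this account, degrading one cue should simply shift the weights; the paper's finding that congruency between emotions matters, and that the modulation disappears under low-pass filtering even for congruent emotions, is what a purely linear model of this form cannot capture.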
© 2011 Japanese Cognitive Science Society