JAPANESE JOURNAL OF RESEARCH ON EMOTIONS
Online ISSN : 1882-8949
Print ISSN : 1882-8817
ISSN-L : 1882-8817
Mutual regulation of audio-visual emotional information in the recognition of facial expression and voice tone
Momoko Watanabe, Toshiko Mochizuki

2004 Volume 11 Issue 2 Pages 53-64

Abstract
In everyday life, we communicate with each other not only through verbal cues but also through multimodal nonverbal information such as facial and vocal expressions. However, how we combine these nonverbal cues has not been studied sufficiently.
We therefore investigated the rules of mutual regulation between facial and vocal emotional expression. One of seven emotional expressions (happiness, neutral, surprise, sadness, fear, disgust, and anger) was presented to subjects visually and vocally at the same time, using a still picture of the stimulus person's facial expression and his or her voice tone in a short message. Subjects judged the stimulus person's emotion using both sources of information.
In the congruent condition, where the visual and auditory emotions were the same, the correct judgement rate was high (87.63%), response time was short (4.20 s), and confidence was high (4.35/5.0) compared with the incongruent condition, in which the visual and auditory emotions differed. For the incongruent condition, there were two main results: (1) visual information was generally more dominant than auditory information, except for disgust; (2) many fused responses were also found, in which the stimulus person's emotion was interpreted as a third emotion different from both the visual and the auditory one. Fused responses of "disgust" appeared most frequently. We interpret this as follows: when two conflicting unpleasant emotions are expressed simultaneously, the perceived emotion may be biased toward "disgust", a relatively ambiguous emotion of displeasure, in order to reconcile the conflict.
© THE JAPAN SOCIETY FOR RESEARCH ON EMOTIONS