JAPANESE JOURNAL OF RESEARCH ON EMOTIONS
Online ISSN : 1882-8949
Print ISSN : 1882-8817
ISSN-L : 1882-8817
Volume 14, Issue 1
Displaying 1-7 of 7 articles from this issue
  • Takuma Takehara, Makoto Nakamura
    2007 Volume 14 Issue 1 Pages 1-2
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    Download PDF (253K)
  • Atsunobu Suzuki, Susumu Shibui, Kazuo Shigemasu
    2007 Volume 14 Issue 1 Pages 3-14
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    Initial processing of facial expressions of emotion has been shown to occur very rapidly, even without awareness. To determine whether this fast initial processing of facial expressions is categorical, we investigated the temporal characteristics of categorical perception (CP) of facial expressions. We examined the effects of shortening stimulus duration on participants' performance in identifying (Experiment 1) and discriminating (Experiment 2) morphed facial expressions. The results of both experiments suggested that CP was attenuated, or even disappeared, when facial stimuli were presented for as little as 50-75 milliseconds. These findings suggest that CP may be irrelevant to the fast initial processing of facial expressions.
    Download PDF (456K)
  • Maiko Shiraishi, Makoto Miyatani, Yukimi Mine
    2007 Volume 14 Issue 1 Pages 15-26
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    The aim of this study was to provide behavioral evidence for emotion/valence nodes that are activated by facial expressions. This was attempted by investigating priming effects based on the same facial expression appearing on two successively presented faces of different persons. In the experiment, two faces of the same or different persons, each with a smiling or neutral expression, were presented sequentially to the right or left of fixation. Fifteen participants judged the location or the facial expression of the first (prime) and second (target) faces, respectively. Crossing prime-target person identity (same, different), expression pairs (smile/smile, smile/neutral, neutral/smile, neutral/neutral), and task pairs (location/location, location/expression, expression/location, expression/expression) yielded 32 conditions. Results showed that when participants judged the location of the prime and the facial expression of the target, reaction times to the target were shorter when a smiling face was repeated than for the other expression combinations, regardless of person identity. This priming effect was not observed when neutral faces were repeated. The results suggest that a smiling face activates a happy/positive node in the associative network, which facilitates the processing of a subsequent happy facial expression. (An illustrative sketch of the factorial design follows this entry.)
    Download PDF (1001K)
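    The factorial design described in the abstract above can be made explicit with a short sketch. The Python below is purely illustrative (not taken from the paper); the factor labels simply restate the abstract, and crossing them reproduces the stated 32 conditions (2 × 4 × 4).

        # Illustrative sketch: enumerate the 2 x 4 x 4 factorial design from the abstract above.
        from itertools import product

        identities = ["same", "different"]                         # prime-target person identity
        expressions = ["smile/smile", "smile/neutral",
                       "neutral/smile", "neutral/neutral"]         # prime/target expression pairs
        tasks = ["location/location", "location/expression",
                 "expression/location", "expression/expression"]   # prime/target task pairs

        conditions = list(product(identities, expressions, tasks))
        assert len(conditions) == 32  # 2 x 4 x 4 = 32 conditions, as stated in the abstract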
  • Naho Ichikawa, Michio Nomura, Tetsuya Iidaka, Hideki Ohira
    2007 Volume 14 Issue 1 Pages 27-38
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    This experiment was conducted to explore the effect of emotional face feedback on task performance. Ten participants performed an operant learning task in which they were to learn the contingencies between key-pressing and consequences (gain or loss) for each of several target patterns, and to maximize their total score. As performance feedback stimuli, we used three types of facial expression (Angry, Happy, Neutral) and two symbols (〇 [correct], × [error]). In the NA condition, a neutral face was presented as correct feedback and an angry face as error feedback. In the HN condition, a happy face was presented as correct feedback and a neutral face as error feedback. In the NN condition, different neutral faces were presented as correct and error feedback. In the 〇× condition, 〇 was presented as correct feedback and × as error feedback. We also ran the reversed conditions (AN, NH, ×〇) to examine whether the congruency of feedback valence (e.g., correct-positive and error-negative) was critical. Results indicated lower error rates in the angry-face feedback conditions (NA and AN) than in the happy-face conditions (p<.05). We also found a congruency effect between behavioral results and feedback valence in the response times of subsequent trials. These findings suggest that emotional face feedback may be related to task performance during an operant learning task. (A summary of the feedback conditions follows this entry.)
    Download PDF (1146K)
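    The feedback conditions in the abstract above amount to a mapping from condition label to the pair (feedback after a correct response, feedback after an error). The Python below is an illustrative restatement of that design, not the authors' experimental code.

        # Illustrative restatement of the feedback conditions (not the authors' code).
        # Each condition maps to (feedback after a correct response, feedback after an error).
        FEEDBACK_CONDITIONS = {
            "NA":  ("neutral face", "angry face"),
            "AN":  ("angry face", "neutral face"),        # reversed condition
            "HN":  ("happy face", "neutral face"),
            "NH":  ("neutral face", "happy face"),        # reversed condition
            "NN":  ("neutral face A", "neutral face B"),  # two different neutral faces
            "〇×": ("〇", "×"),
            "×〇": ("×", "〇"),                            # reversed condition
        }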
  • Nobuyuki Watanabe, Ryuta Suzuki, Hiroyuki Yoshida, Daisuke Tsuzuki, Ay ...
    2007 Volume 14 Issue 1 Pages 39-53
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    This paper presents a database of facial images of Japanese individuals, available for various kinds of studies on faces and facial expressions. The Facial Information Norm Database (FIND) currently includes more than 13,000 images of 150 Japanese individuals, covering neutral faces, the seven prototypical facial expressions of emotion (happiness, surprise, fear, sadness, anger, disgust, and contempt), and facial behaviors showing single Action Units (AUs) and AU combinations based on the Facial Action Coding System (FACS; Ekman et al., 2002). FIND also contains information on each image, such as demographics, facial structural (shape) information obtained by fitting a facial wireframe model to the image, cognitive judgment data, and psychophysiological data obtained in judgment studies. We call the images and all of this associated information “facial information.” This paper describes FIND, the setup of the image-capturing environment, the procedures for obtaining the pictures, an interface for database access, and the protection of the personal information of the participants appearing in the images. (A hypothetical sketch of a per-image record follows this entry.)
    Download PDF (1689K)
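    As a reading aid, the per-image “facial information” listed in the abstract above could be organized roughly as in the sketch below. The field names and types are hypothetical and do not reflect the actual FIND schema or access interface.

        # Hypothetical sketch of a per-image record; field names are illustrative,
        # not the actual FIND schema.
        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class FacialImageRecord:
            image_id: str                     # identifier of the image (hypothetical)
            expression_or_au: str             # e.g. "happiness", a single AU, or an AU combination
            demographics: dict = field(default_factory=dict)      # age, sex, etc. of the poser
            shape: Optional[list] = None      # wireframe-model (shape) fitting result
            judgments: dict = field(default_factory=dict)         # cognitive judgment data
            psychophysiology: dict = field(default_factory=dict)  # psychophysiological data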
  • Rui Nouchi
    2007 Volume 14 Issue 1 Pages 54-63
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    Some investigators have proposed that self-referent processing mediates the mood-congruent effect. Self-referent tasks fall into two types: the self-descriptive task (participants decide whether a stimulus word describes themselves) and the autobiographical recall task (participants retrieve an autobiographical memory related to a stimulus word). Most previous studies have used only the self-descriptive task. This study investigated the mood-congruent effect using the autobiographical recall task under naturally occurring mood. One hundred undergraduates participated in the experiment and were divided into positive and negative mood groups on the basis of their mood scores. Stimulus words were presented for 4 s each; the stimuli were 30 pleasant and 30 unpleasant trait adjectives. Participants judged whether they could recall an autobiographical memory related to each word. Results showed a mood-congruent effect in recall rates for both the positive and the negative mood groups. Thus, the autobiographical recall task yielded a mood-congruent effect under naturally occurring mood, a result that differs from previous findings obtained with the self-descriptive task under naturally occurring mood. The mood-congruent effect therefore needs to be investigated separately for each type of self-referent task.
    Download PDF (832K)
  • Ryo Tamura, Tatsuya Kameda
    2007 Volume 14 Issue 1 Pages 64-70
    Published: March 25, 2007
    Released on J-STAGE: October 03, 2008
    JOURNAL FREE ACCESS
    Using a brain-imaging technique, Breiter et al. (1996) and Morris et al. (1996) showed that the amygdala, which is known to respond to threatening stimuli, is activated when participants view fearful facial expressions. These results imply that fear is transferable between individuals. The purpose of this study was to provide behavioral evidence for ‘fear contagion’ using a probe detection task to measure attention bias following exposure to either fear-relevant or fear-neutral primes. As expected, the results revealed a fear-specific response bias in which participants selectively directed their attention toward a fearful facial expression after being primed with a fearful face. Conversely, selective attention was not observed with neutral-face, sad-face, or snake primes. Interestingly, participants tended to selectively avoid a picture of a snake following a fearful-face prime. Implications of these findings and future directions are discussed. (A sketch of a conventional attention-bias score follows this entry.)
    Download PDF (756K)
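    For readers unfamiliar with the probe detection (dot-probe) paradigm mentioned above, attention bias is conventionally scored as the reaction-time advantage for probes appearing at the location of the emotional stimulus. The Python sketch below uses that conventional scoring; it is not necessarily the exact analysis reported in the paper.

        # Conventional dot-probe attention-bias score (illustrative; not necessarily
        # the authors' exact analysis). Positive values indicate attention directed
        # toward the emotional (e.g., fearful-face) stimulus.
        def attention_bias(rt_probe_at_neutral, rt_probe_at_emotional):
            """Mean RT when the probe replaces the neutral stimulus minus mean RT
            when it replaces the emotional stimulus (same units as the inputs)."""
            mean = lambda xs: sum(xs) / len(xs)
            return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)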