This study examined the validity of a newly devised inventory for assessing height avoidance tendency (HAT). The inventory measured two aspects of avoidance: approaching a height, and looking down from a height. The participants were divided into high and low avoidance groups on the basis of their HAT scores, and their skin conductance responses (SCRs) were recorded during exposure to video clips of heights. The total HAT score, and several of its component scores, were positively correlated with self-rated intensities of the fear of heights, whilst there was no significant correlation with the trait scores of the STAI (State-Trait Anxiety Inventory). Significantly more SCRs were obtained for the high avoidance group than for the low avoidance group in response to the videos. These results demonstrated that the HAT is valid for selecting participants with a strong tendency to avoid heights, and suggested that the test would be a useful tool for investigating the development and mode of spatial emotions evoked through perceptual and cognitive processes.
This study examined the effects of frames on the recognition of Japanese hiragana characters using a letter completion test (LCT). In this test, only some of the strokes of a hiragana character were presented to the participants, who were asked to complete the character. The LCTs, with and without frames, were administered to participants in adult and school-age groups. Regardless of the age of the participants, higher letter-completion rates were obtained in the with-frame condition. These results indicate that frames facilitate completion of the task, and suggest generalized facilitative effects on hiragana recognition. In addition, further analyses indicated that frames can specify the relative position of the components of hiragana characters and the size of complete hiragana characters.
We investigated the effect of facial context on the detection of parts of the face to examine whether configural information was used in the early stage of face processing. Participants were presented with several types of stimuli: an intact face; a face in which one or more parts were masked; and only part of a face. They were asked to judge as quickly as possible whether the eyes (or the nose, or the mouth) were contained in the presented stimulus. Experiment 1 showed that the parts of a face were more quickly detected when they were presented in the context of faces than when they were presented alone. This facilitating effect of facial context was consistently and robustly observed for detection of the eyes in the subsequent experiments, but not for detection of the mouth. In Experiments 2, 3 and 4, the facilitating effect was independent of the distinctiveness, familiarity, and orientation of the faces. Experiment 5 showed that the context of houses did not facilitate the detection of windows, suggesting that configural information was used in the early stage of face processing, but not in the detection of other basic-level objects.
Nairne, Thompson, and Pandeirada (2007) demonstrated that processing words according to a grasslands survival scenario leads to better recall than other forms of semantic processing, an effect they named adaptive memory. The present experiment examined whether fitness-relevant processing conditions (grassland survival processing or spouse-choice processing) lead to better recall than other semantic processing conditions (pleasantness or self-reference processing). The results indicated that recall was best in the grassland survival processing condition, and was better in the spouse-choice processing condition than in the self-reference processing condition. Some caution is necessary before regarding spouse-choice processing as fitness-relevant, because there was no significant difference between the self-reference processing condition and the pleasantness processing condition. These results are discussed in the context of evolved human memory functions.
We used event-related potentials (ERPs) to investigate the effects of non-native language (English) exposure on four- to seven-year-old Japanese native children while they listened to semantically congruent and incongruent Japanese sentences. The children were divided into three groups on the basis of their exposure to English in kindergarten: High (a high degree of English exposure in an immersion program), Med (a medium degree of English exposure), and Low (a low degree of English exposure). The N400 was observed in all preschoolers, but with different latencies across the three groups. Late positive components (LPCs) were observed only in the Med and Low groups. These results indicate that the ERP waveforms of semantic processing for native language sentences in preschool children varied depending on development and the degree of exposure to the non-native language.
Cognitive psychologists have likened attention to a limited resource or capacity to account for recognition performance under dual-task circumstances. Some theorists use this analogy of a limited resource to account for the attentional blink phenomenon, in which perception of the second of two targets embedded in a rapid stream of nontargets is impaired; they argue that the attentional blink reflects a scarcity of resources available for the second target, caused by resource depletion during first-target processing. Other theorists propose an alternative explanation in which the attentional blink results from on-line selection mechanisms that act in response to distracting input, rather than from first-target-induced depletion of cognitive resources. The present article reviews recent arguments between these theories and introduces new findings suggesting that selection mechanisms involved in filtering for targets provide a strong and coherent explanation of the attentional blink.
The prefrontal cortices play important roles in the cognitive control of behavior. The medial prefrontal cortex guides actions on the basis of the representation of action-outcome contingency: many of its neurons represent action-outcome contingency in a visually cued go/no-go task with asymmetrical rewards and reversals. The medial prefrontal cortex also evaluates the correctness of actions by classifying their outcomes, as both success- and failure-responsive neurons were observed in an action-learning task. On the other hand, the lateral prefrontal cortex regulates the relationship between actions and rewards by comparing the values of intrinsic and extrinsic rewards: it shows activity corresponding to the undermining effect of extrinsic rewards on intrinsic motivation in a task that could be engaged in voluntarily. The medial and lateral prefrontal cortices may interact to link goal-directed behaviors and intrinsic motivation through the cortico-basal ganglia loops.
Faces and facial emotional expressions can be detected rapidly and efficiently. To understand the psychological mechanisms of such phenomena, studies of the spatio-temporal patterns of neural activity in response to facial stimuli are useful. For example, our intracranial field potential recording study revealed that the amygdala showed greater gamma-band activity in response to faces compared with mosaics at 200–300 ms. Our results also showed that the gamma-band activity of the amygdala was more pronounced in response to fearful versus neutral facial expressions at 50–150 ms. Other studies have revealed the spatio-temporal patterns of face-related activity in other brain regions, such as the fusiform gyrus, superior temporal sulcus, and inferior frontal gyrus. These neuroscientific data deepen our understanding of the mental mechanisms involved in processing faces and facial emotional expressions.