The negative compatibility effect (NCE) is the opposite of the ordinary compatibility effect: RTs in the compatible condition are slower than those in the incompatible condition. To investigate the NCE, two experiments used a Stroop-like task in which students were asked to name the color of a centrally fixated square (target) while ignoring a peripheral color word (task-irrelevant stimulus). Ordinary compatibility effects were observed when the task-irrelevant stimulus was located near the target, whereas an NCE was observed when it was distant from the target. The NCE did not depend on the response modality (i.e., oral or manual). These results cannot be explained by the previous hypothesis that misdirected spatial attention, caused by shape similarity between the target and the task-irrelevant stimulus, produces an NCE. Thus the NCE appears to reflect an inhibitory processing mechanism at a post-categorical level.
We examined whether shading is a determinant of figure-ground assignment by using Rubin's face/vase patterns defined by shaded disks. The results of Experiment 1 showed that a region filled with top-lit disks was perceived as the figure, and that depth from shading between the regions was a strong cue for figure-ground assignment. In Experiment 2, we tested whether depth from shading inside the regions was also a factor for figure-ground assignment, and confirmed that it was not decisive. In Experiment 3, the results indicated that depth from shading between the regions remained a strong factor for figure-ground assignment under prolonged exposure. We therefore suggest that shading can mediate figure-ground assignment and that there may be a boundary-surface representation in shading perception.
Renewal of operant performance previously eliminated by omission training was explored in rats under two context-change conditions (ABA and AAB). When lever pressing was trained with food reinforcement in one context and then eliminated by omission training (delivery of food for withholding the response) in a second context, returning the rats to the original context tended to renew the responding slightly. Despite this successful demonstration of the ABA renewal effect, there was no hint of an AAB renewal effect: when the responses had been eliminated in the context of acquisition, testing the rats in a second context did not renew them. As omission training, also called differential reinforcement of other behavior, has been widely used in behavior therapy for reducing undesired human behavior, the implications of the present results for therapeutic applications are discussed.
We investigated the effects of tracking eye movements on predicting the future position of a three-dimensional moving object. Each stimulus movie was composed of four object frames. The participants judged whether the object in the fourth frame had changed by an amount equivalent to the change between the second and third frames. The results showed that tracking eye movements are required to accurately predict the future position of a moving object, although cast shadows are also necessary for predicting the three-dimensional rotation. The results therefore imply the existence of a labeling mechanism mediated by tracking eye movements.
We have found a novel visual illusion in which ambiguous motion becomes unidirectional when an image of a human figure walking on a treadmill is superimposed. When we presented a counterphase grating as an ambiguous backdrop, the grating appeared to drift in the direction opposite to the bipedal locomotion. This phenomenon indicates that the visual system evaluates low-level motion signals relative to a high-level representation of an object's movement defined by its biological motion. In this study we presented images of human figures walking either forward or backward to examine the effect of the directional correspondence between human movement and form, which is important for the recognition of biological motion. The results revealed that a forward walker produced the illusion more reliably than a backward walker, confirming that recognition of locomotion is a primary determinant of the illusion.
The perceived duration of events is known to be affected by factors that are at least partly conscious. In the present study we report that the perceived duration of visual events can be affected even by an unconscious factor. The participants were briefly (12.5ms) exposed to nonsense syllables, which were followed by backward pattern masking. A discrimination test with other participants confirmed that the stimuli were presented below threshold. The participants subsequently estimated the presentation duration of each nonsense syllable, which either had or had not been previously exposed. The results indicated that the mean estimated duration of previously exposed stimuli was shorter than that of new stimuli. These results suggest that unconscious experience is one of the determinants that can reduce the perceived duration of events.
When four letters are repeatedly presented by the RSVP method with gradually increasing exposure duration, an observer can identify the letters but misreports their order (Holcombe, Kanwisher, & Treisman, 2001). Holcombe et al. (2001) argued that this order illusion is caused by the observer's failure to direct attention to the arbitrary initial item. However, our findings in Experiment 1 indicated that order accuracy does not depend on the ease of directing attention to the salient item. In addition, the order illusion occurred when the target letters were presented in a decelerating manner. In Experiment 2 we found that the order illusion occurred when the observer attended to a letter presented very briefly and was required to report the order of the subsequent letters presented within 466ms. These results suggest that the order illusion is caused by the temporal limits of visual attention, which also cause the attentional blink.
We investigated whether the direct gaze of another person captures attention and delays attentional disengagement from a face in an experimental situation. The participants were required to fixate on a centrally presented face with varying gaze direction and to make a speeded detection of a peripherally presented target. The results revealed that target detection was delayed in the direct gaze condition but not in the averted gaze or closed eyes conditions. This effect, however, disappeared when a temporal gap was inserted between the offset of the central face and the onset of the peripheral target, suggesting a contribution of attentional and/or oculomotor disengagement to the delayed response following a direct gaze. Moreover, direct gaze had an effect only at a stimulus onset asynchrony (SOA) of 500ms; the effect disappeared at an SOA of 1,200ms. This result implies that the effect of direct gaze is transient.
Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (the contextual cueing effect). We examined whether the visual contextual cueing effect is specific to the visual modality or transfers to the haptic modality. The participants performed visual search trials based on a typical contextual cueing paradigm, followed by haptic search trials in which half of the trials used layouts from the previous visual search trials. The visual contextual cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search. This suggests a common spatial memory for allocating focused attention in both the visual and haptic modalities.
In the present study we examined how perception of depth and of the occluding edge in an optical tunnel was affected by the observer's head movement. The optical tunnel was displayed on a CRT and observed under active and passive conditions. In the active condition, the pattern was linked to the observer's head movement; in the passive condition, the observer saw a series of transformations of the pattern similar to those perceived in the active condition. The results indicated little perceptual difference between these active and passive conditions. However, in a second passive condition, in which the pattern moved laterally at a uniform velocity, the perceptual outcome differed from the others. The results suggest that if the available optical information is detected, the observer's perception is the same under either active or passive conditions.
We conducted two experiments to investigate the relation between attention and perceptual processing. It is known that "distance perception" is based on egocentric processing, whereas "depth perception" is based on exocentric processing. Many researchers have examined various aspects of spatial perception in 3-D space, but there are few studies on the relationship between attention and distance or depth perception. In this study we examined how attention operates when depth and distance perception are required. We controlled the expectancy of the target location by manipulating the validity of a precue. The results revealed a relationship between attention and depth perception, whereas a relationship between attention and distance perception was not apparent. This indicates that the allocation of attention may change according to the demands of the task.
Repetition deafness (RD) refers to reduced performance in reporting a repeated item, compared to a nonrepeated one, during rapid auditory presentation. We investigated the effects of a vocal difference on RD in order to examine whether an encoding failure is a cause of RD. Stimuli pronounced by a male and a female speaker were presented binaurally in serial order. The results showed that RD was observed only under the different-voice condition. We propose an alternative hypothesis that an encoding failure is a cause of RD.
A large memory set size and high similarity have been reported to cause view-dependence in 3-D object recognition (Newell, 1998). In this study, we examined whether individual differences in the capacity of spatial information processing (spatial span) are related to this type of view-dependence. Based on scores on a spatial span task (Shah & Miyake, 1996), ten participants with high scores and ten with low scores were assigned to high and low spatial span groups, respectively. The results indicated that only the low spatial span group showed view-dependence in the high cognitive load condition of the 3-D object discrimination task. This suggests that individual differences in spatial span may be one of the factors affecting view-dependence in 3-D object recognition.
We found an anisotropy of perceptual "filling-in" at the physiological blind spot. The stimulus was a pair of lines presented for 200ms, with one line on each side of the blind spot. The length of the lines was extended gradually each time the observer pressed a key on a computer keyboard. The observer's task was to report whether the line appeared 'complete' or 'gapped' compared with a reference line presented simultaneously to the temporal retina. The independent variable was the orientation of the line. The results from 9 observers showed that the minimum line length required for perceptual filling-in increased systematically as the orientation of the lines changed from horizontal to vertical. We suggest that this anisotropy is due to the elliptical shape of the receptive fields of the binocular neurons that contribute to perceptual filling-in at the blind spot.
In this study, we analyzed the conditions for a new color spreading effect observed in a Varin's subjective contour figure with elliptic patches (Figure 1a). The effect involves wide color spreading without clear subjective contours, and thus differs from the spreading effects studied so far. Our experiments showed that (1) the amount of spreading followed an inverted-U function of line width; (2) the illusory color spreading was perceived in both open and closed figures; and (3) the effect was not perceived in the red patch condition, although it was perceived in the gray patch condition. These results suggest that this color spreading effect is not related to subjective contours and differs from the effects reported in the past.
Contrast is a cue for depth perception: the farther away an object is, the lower the contrast of its surface appears. What happens if the contrast of an object varies over time? Do we perceive the object as moving in depth? We examined whether a contrast change induces motion-in-depth perception. The stimulus was a circular sine wave grating of 1s duration whose phase was shifted by various angles every 250ms. The contrast of the grating either decreased, remained constant, or increased. The subjects judged whether the stimulus appeared to move towards or away from them. When the contrast of the stimulus decreased (increased), the number of 'receding' ('looming') responses increased. The results suggest that a contrast change produces motion-in-depth perception.
Motion-defined motion (MDM), the motion of patterns defined by the direction of local motion, and luminance-defined motion (LDM) are perceived as misaligned when presented at the same physical speed and in phase (the perceptual offset). In this study we examined the effects of the physical and perceived global motion speeds of MDM and LDM on the size of the perceptual offset in order to explore the origin of this phenomenon. The null point of the apparent offset was estimated by probit analysis. The estimated offset increased as the physical global motion speed increased. The offset phenomenon reported here can be interpreted as a spatio-temporal conversion of the difference in processing time between LDM and MDM.
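The null point described above is the physical offset at which the two motions appear aligned, conventionally estimated by a probit transform of the response proportions. A minimal sketch in Python, assuming entirely hypothetical data and a hypothetical helper `probit_pse` (not the study's actual materials or analysis code):

```python
from statistics import NormalDist

def probit_pse(levels, p_resp):
    """Probit analysis by the transform method: convert response
    proportions to z-scores with the inverse normal CDF, fit a line
    z = a + b * level by least squares, and solve z = 0 for the
    point of subjective equality (the null point)."""
    z = [NormalDist().inv_cdf(p) for p in p_resp]
    n = len(levels)
    mx, mz = sum(levels) / n, sum(z) / n
    b = (sum((x - mx) * (y - mz) for x, y in zip(levels, z))
         / sum((x - mx) ** 2 for x in levels))
    a = mz - b * mx
    return -a / b  # level at which 'misaligned' responses reach 50%

# hypothetical data: proportion of trials on which one motion appeared to lead
offsets = [-2.0, -1.0, 0.0, 1.0, 2.0]   # physical offset (arbitrary units)
p_lead = [0.02, 0.16, 0.50, 0.84, 0.98]
pse = probit_pse(offsets, p_lead)
```

For these symmetric made-up proportions the estimated null point falls at an offset of about zero; in the actual experiment a nonzero null point would quantify the perceptual offset between LDM and MDM.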
In a rapid serial visual presentation (RSVP) task, the processing of a preceding target (T1) interrupts the response to a subsequent target (T2). This phenomenon is thought to indicate the temporal limitations of attention and is called the attentional blink (AB). Two conflicting models have been proposed for the AB: the serial processing model and the parallel processing model. The present study examined which model is more appropriate. We used the negativity bias, the phenomenon whereby negative stimuli are processed more rapidly because they attract more attention than neutral stimuli. When T2 was negative, the AB deficit decreased (Experiment 1). However, when T1 was negative, the AB deficit increased (Experiment 2). Moreover, when both T1 and T2 were negative, the effects observed in Experiments 1 and 2 canceled each other out (Experiment 3). We therefore conclude that the AB is due to attentional interference during parallel processing of RSVP items.
Recent studies have revealed that the superior temporal sulcus (STS) plays an important role in 'social perception'. In this study, we measured event-related potentials (ERPs) during the perception of biological motion, which can be perceived from locomotion depicted by only points of light. Twelve subjects were shown biological motion and scrambled motion displays. In the scrambled motion, each light point had the same velocity vector as in the biological motion, but the starting positions of the lights were randomized. Both the biological and scrambled motion elicited negative peaks at around 200ms (N200) and 240ms (N240). Notably, the N240 component over the bilateral occipitotemporal region was significantly larger for the biological motion than for the scrambled motion condition. These findings do not contradict previous neuroimaging results, and imply that the N240 component is specific to biological motion perception. Furthermore, this component might be related to the intention-detection component found for gaze direction.
Neuropsychological effects of brain damage on mental processes are varied and complex. Cognitive models from experimental psychology help us make sense of them. Such models now exist for many different domains of cognition, allowing us insight into how brain damage affects cognition in each of these domains, even higher-order domains such as belief formation. Work that applies experimental psychology to neuropsychology in this way also supports some very general conclusions about cognition: that the mind is highly modular, and that mental representations are typically local rather than distributed.
In this review, the characteristics of the modular and hierarchical structure of the visual system are examined, using the mechanism for depth perception as an example. The depth mechanism comprises several cue systems as sub-modules. Each cue system has its own restrictions, especially those arising from ill-posedness. The limitation is especially strong among the graphical cues. Even the binocular disparity cue system is not free from this limitation, especially when we consider the possibility of false matches. The depth system overcomes these limitations by incorporating constraints reflecting the structure of the physical world within each cue system, and by embodying interactions among cue systems and from top-down information.
Although dynamic information is undoubtedly important in the processing of facial expressions of emotion, surprisingly little research has examined it. This paper briefly introduces the neural areas devoted to processing the social signals contained in a face, such as emotional expression and eye gaze, and describes three pieces of psychological research using dynamic facial expression stimuli created with a morphing technique: a perception study using a representational momentum paradigm, an fMRI study, and an analysis of the spontaneous facial reactions that occur when observing dynamic facial expressions. Based on these findings, I emphasize the importance of an integrated approach to understanding the processing of dynamic facial expressions in terms of perceptual/cognitive, emotional, and motor planning processes.
Prefrontal cortex is believed to be involved in the executive control of behavior, particularly in complex situations. Using a multidimensional go/no-go task with macaque monkeys, however, we found that many prefrontal neurons were activated by irrelevant visual information that could interfere with the control of behavior. Here we show further evidence on the nature of prefrontal responses to irrelevant information, and propose a model that can explain how irrelevant stimulus-response codes compete with relevant stimulus-response codes in prefrontal cortex. Long-term training enables fixed associations between visual features and behavioral responses. Top-down attention can modulate the activity of these long-term codes depending on task requirements, but appears unable to completely suppress irrelevant codes. Consequently, congruency effects emerge in neurons that integrate information from different visual sources to compute the appropriate behavior. Thus, the decision-making process is more efficient when both relevant and irrelevant stimulus features activate the same response than when they activate different responses. These congruency effects in prefrontal cortex may lead to congruency effects in go/no-go behavior.
The first major task of vision is to extract useful information from the retinal signals obtained from the environment through the eye's optics (bottom-up signals). I attempt to classify the processing in the first stage of visual information processing into four categories: information reduction, information extraction for particular functions, parallel processing, and complementary processing. Based on this classification, the perception of motion in depth is considered a typical visual process that contains all four types of processing. This suggests that there are a limited number of basic strategies for processing bottom-up signals in the visual system.
Qualia and awareness are important constituents of consciousness. Qualia correspond to the consciousness of sensory qualities and are closely related to the activities of sensory modules that have been characterized neurophysiologically. In contrast, awareness corresponds to the consciousness of things; this process is closely related to object recognition and is not directly tied to a sensory modality. The relationship between awareness and qualia has not been well examined. In the present study, we tentatively assume a weak correlation between visual qualia and awareness, and classify various paintings into four groups by combining low and high activation levels of awareness and qualia. For each group, the common characteristics of the paintings are discussed in relation to specific brain functions. We also consider the theoretical importance of our classification with reference to the views on art and perception proposed by Ramachandran & Hirstein (1999) and Gregory (1998).
Experimental results on perceptual grouping and apparent motion imply that perceptual information about similarities (or differences) in different perceptual dimensions is summed. Linear regression analysis indicated that the obtained effects of multiple perceptual differences were simple weighted sums of the effects of differences in the individual perceptual dimensions. A perceptual state-space model is proposed in which each axis corresponds to a perceptual dimension (hue, brightness, shape, size, or position) and a point represents a perceptual state. Perceptual grouping and apparent motion are assumed to occur between the closest pairs of points in this state space. In this space, the city-block metric rather than the Euclidean metric is effective, which means that the perceptual dimensions are separable from each other. Perceptual information is thus supposed to be synthesized after information processing within the individual perceptual dimensions.
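The separability claim above rests on the difference between the two metrics: under the city-block metric each perceptual dimension contributes its difference independently, whereas the Euclidean metric mixes the dimensions before summation. A minimal illustration in Python, using hypothetical stimulus coordinates (not the original data):

```python
def city_block(p, q):
    # L1 (city-block) metric: absolute differences in each
    # perceptual dimension are summed independently
    return sum(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    # L2 (Euclidean) metric: dimensions interact through
    # squaring before summation
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# hypothetical perceptual states: (hue, brightness) coordinates
s1, s2 = (0.0, 0.0), (1.0, 1.0)
d_city = city_block(s1, s2)   # 2.0: one unit of hue plus one of brightness
d_euc = euclidean(s1, s2)     # about 1.414: the dimensions are pooled
```

Under the city-block metric the total perceptual difference is just the sum of the per-dimension differences, which is what the weighted-sum regression result above predicts for separable dimensions.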
The first commentator, Yoshimura, focused on the top-down and sideways rules of Gregory's perceptual framework, particularly emphasizing the intellectual function, which originates with Irvin Rock. Yoshimura used Morinaga's "equal width," one of the Gestalt rules, as an example of intellectual factors in sideways information. The second commentator, Noguchi, made the following comments. First, he demonstrated that top-down processing such as "perceptual strategy" should not be neglected in the detection of motion in depth when sensory signals are simple and provide only poor cues. Second, as shown in Noguchi's (2003) study on the relationship between visual illusion and aesthetic preference, the problem of visual awareness/qualia, which might be regarded as representative of top-down processing, cannot be solved without taking the Gestalt laws into consideration. Finally, the attempt to treat several Gestalt factors comprehensively and quantitatively as a unified Praegnanz tendency is commendable; however, this attempt should not be concerned only with stimulus structures, and should also take account of "perceptual structures" or "perceptual dimensions" as elucidated by experimental phenomenology.
Are facial responses to taste stimulation, and the recognition of those responses, supported by a domain-specific or a domain-general mechanism? We examined facial responses elicited by taste stimulation (sweet, salty, sour, and bitter). A solution of each taste was injected into the subjects' mouths and their facial responses were recorded. Next, pictures or movies of the facial responses were presented to subjects who were required to identify the taste stimuli on a confidence rating scale and to rate the emotional tone of the facial expressions. The results showed that the facial expressions of sourness and bitterness were conspicuous and correctly identified. These findings, together with other related findings, are discussed in terms of their evolutionary meaning and domain-specificity/generality.
In this paper I question the domain-specificity of social cognition, which seems evident from neuropsychological studies of brain-damaged patients and neuroimaging studies of normal adults, and challenge the accepted idea that the social impairments in autism can be explained by a deficit of a "theory of mind" module. Two psychological experiments using face stimuli, conducted with children and adolescents with high-functioning autism and Asperger's disorder, are described. The "theory of mind" account cannot explain the findings, which suggest that difficulties in understanding others' affective states in autism may derive from a dysfunction in evaluating emotional significance that normally occurs at an early stage of emotional processing, in which the amygdala plays an important role. Social cognition is a domain with fuzzy boundaries overlapping with motivation, emotion, and memory, and the structures involved include sensory, motor, and cognitive components. Thus social cognition might become modularized during a developmental process in which these related functions interact with each other in a dynamic manner. To understand the function and structure of social cognition, whether domain-specific or domain-general, the involvement of factors at both the genetic and epigenetic levels needs to be explored.
The domain-specificity and domain-generality of cognition are exciting subjects that can shed light on the processes of human cognitive evolution. Their theorization, however, is very complicated, and the relationship between domain specificity and the basic architecture of the brain has not yet been fully clarified. This paper reviews recent studies on the relationship between domain-specificity/generality and brain functions. Within visual cognition, the author's specialty, facial cognition deserves special attention, as it is connected to basic social cognition and should have had important adaptive value. Neuroimaging studies show that face perception, the cognition of facial expression, and that of facial attractiveness are processed in several particular brain areas, including the fusiform face area and the amygdala, among others. Moreover, recent neuroimaging studies suggest that there is a neural basis not only for domain-specific cognition but also for general intelligence as a form of domain-generality.