This study examined how a three-dimensional (3D) background depicted by pictorial cues serves as a context in a visual search task. The task was to search for a target cube differentiated from distractor cubes by a perceptual slant. The reaction time for detecting the target was measured. When the search items were shaded cubes with clear 3D interpretations (Experiment 1), a background with the same perceptual slant as the distractors facilitated search performance in target-present trials. However, when the search items were line drawings of cubes without shading, which had ambiguous 3D interpretations (Experiment 2), the facilitation in target-present trials was eliminated. We suggest that the background is influential when the perceptual load of processing the search items is low.
Periodic expansion/contraction of a stimulus induces vergence responses. In this study, we examined whether stimulus expansion/contraction that observers cannot predict from a periodic pattern also generates vergence responses. In Experiment 1, we used a Maltese cross and a checkered pattern as stimuli (diagonal sizes: 11.6 and 2.0 deg). The results showed that expansion of the larger stimulus elicited convergence responses (two of five observers with the Maltese cross; three of five with the checkered pattern), while contraction of the larger stimulus and expansion/contraction of the smaller stimulus did not elicit vergence responses. In Experiment 2, we used the checkered pattern as the stimulus. The results showed that a larger range of expansion in stimulus area and texture (up to 20 deg) elicited convergence responses (four of five observers), and the corresponding contraction (down to 2 deg) elicited divergence responses (three of five), while expansion in texture without a change in stimulus area elicited smaller convergence responses than expansion in both texture and area. Moreover, contraction in texture without a change in stimulus area did not elicit divergence responses. We concluded that (a) expansion elicits larger vergence responses than contraction, and (b) vergence responses increase when texture and area change in combination.
Objects encountered in daily visual experience often consist of regions that differ in color, luminance and shape. The human visual system is adept at binding together these various regions to perceive the whole object, while simultaneously separating them from those that belong to other objects or the background. How this region integration is achieved, and how the visual system develops these capabilities, is not well understood. We recently had a unique opportunity to investigate this question when we met an individual (SK) in India who had an unusual visual history. At the time of our meeting, SK was 28 years old and had untreated congenital bilateral aphakia, resulting in highly compromised visual abilities. After providing treatment to SK, we studied the early stages of his visual skills. Specifically, we examined SK's performance on simple image parsing and recognition tasks. We found that prominent figural cues of grouping, such as good continuation, junction structure and symmetry, were largely ineffective for image parsing. By contrast, motion cues were of profound significance and played two critical roles in SK's object processing abilities. First, they enabled intra-object integration and segregation from the background. Second, they facilitated the development of object representations which permitted recognition in static images. Together with data from earlier infant studies, these results suggest that motion information plays a fundamental role in organizing early visual experience.
Albert Michotte's studies are the best-known research on the perception of causality. Phenomenal causality is the perception of the motive power of an object: it appears to move by itself or by the power of something else. Michotte showed that verbal descriptions of a perceived causal relation changed solely with changes in the visual conditions. Michotte believed that our perception is not limited to the elements of sensation and that our perceived world is meaningful. At that time, the hypothesis that perceived causality is built up from sensory elements was predominant, and for Michotte it was necessary to resist it. However, the kinds of relations that we perceive are not limited to causal relations. This paper is not limited to the causal relation and discusses the perceptual organization of meaning more widely.
First, the position of the perception of causality within Michotte's overall theory of perception was discussed, and Leslie's theory of agency was introduced in order to classify the various causal relations. Second, the reliability of verbal reports by young children, aged 4 to 6 years, was discussed, and evidence was presented that young children could tell a causal stimulus from a non-causal stimulus and that they used different words to report what they saw. Third, the perceptual, intellectual and tactile-kinesthetic factors considered to contribute to the development of the perception of causality in young children were discussed. The perceptual ability to segregate the causal stimulus into cause and effect was verified to contribute to the development of the perception of causality, but intellectual ability, as measured by an intelligence test, was not found to contribute to that development. The experience of operating a moving target on a monitor by means of a key to make it stop facilitated the integration of the tactile-kinesthetic experience of pushing an object with the visual experience of two moving objects in a causal stimulus. As supporting evidence, data showing the development of the size-weight illusion in young children were presented.
It is natural for humans to perceive relationships between items in a visual field. One such relationship is causation, as when an observer sees one object cause another object to move by colliding with it. In the field of comparative cognition, studies on visual perception in non-human animals have been important for understanding how non-human animals perceive their world and for revealing the evolutionary foundations of human visual cognition. Perception of causality, however, has been investigated less frequently in non-human animals. Here I report chimpanzee studies on stream/bounce perception, which has recently been discussed in the context of causal perception, and discuss approaches to understanding the perception of causality in non-human animals.
We investigated whether squirrel monkeys can perceive mechanical causality (bouncing and streaming). A squirrel monkey was trained to track an object (target) while the object and another identical object (distractor) moved diagonally toward and then past one another. If the monkey perceived them as bouncing, it would report that the target's goal was on the same side. The results indicated that the monkey's choices of goal position were affected by the speed of the objects. We compared the results to those of human studies conducted with the same procedure. The squirrel monkey and humans showed similar results, suggesting that the monkey might perceive mechanical causality between two moving objects.
It is generally believed that memory stores, as psychological constructs, are distinguishable from other cognitive mechanisms, such as language processing systems. One such memory system might be the short-term store (STS), a putative construct assumed to retain small amounts of information for a short period of time. In this article, I discuss two lines of research in relation to this construct. One indicates that some data in the working memory literature can be explained without postulating an STS, suggesting its redundancy in our cognitive system; the other suggests the importance of the STS in explaining observed phenomena. The first line includes studies on working memory span tests, and the second includes those on the role of the phonological loop in long-term learning. I distinguish two situations: one in which retention over the short term is merely required, and one in which we must assume the presence of an STS, a dedicated system for temporary storage over the short term.
Episodic memory is recently evolved, late-developing, more vulnerable than other memory systems to neuronal dysfunction, and the highest-level memory system, probably unique to humans. However, the position of episodic memory has not yet been established in memory theory. Prof. Tulving proposed that episodic memory can now be understood anew from the viewpoint of the "brain-mind" system. In this paper, we introduce a series of our brain imaging studies and show the strong relationships between the hippocampal memory system and high-level human cognition. We then propose a hippocampal model of constructive memories: the role of the hippocampus is to bind the independent cognitive functions/elements that are important for the cognitive behavior to be memorized. This model can explain many excellent features of human intelligence, such as its openness to the world, rapid learning/memory, and flexibility; that is, almost all the features of the "brain-mind" system described by Prof. Tulving.
Prospective memory is remembering future intentions or plans in everyday life. We used prospective memory training with two brain-damaged amnesic patients to investigate the neural mechanisms of two components of prospective remembering: remembering to remember and remembering content. The results provide strong evidence that these two components have independent neural bases, with the basal forebrain and right medial frontal lobe being required for remembering to remember, and the medial temporal lobe being required for remembering content. Amnesic patients often develop spontaneous and fantastic confabulations, an extreme type of false memory. To elucidate the mechanism of confabulation, we investigated the neural basis and psychological processes of false memory. Neuroimaging studies of human memory have demonstrated that the prefrontal cortex (PFC) is activated during episodic memory retrieval. We focused on the roles of the anterior and posterior prefrontal cortex in false recognition using neuropsychological studies, event-related functional magnetic resonance imaging (fMRI), and a functional connectivity method. In the neuropsychological study, we administered a typical false recognition paradigm to eight patients with ventromedial prefrontal cortex (VMPFC) lesions. The VMPFC patients showed a significantly higher false alarm rate than the normal subjects for lure words, suggesting that they were more likely to fail to retrieve past similar events. In the fMRI study, subjects were first asked to try to remember a series of associate-word lists outside the MRI scanner in preparation for a later recognition test. In the scanning phase, they were asked to make recognition judgments about old words, semantically related lure words, and unrelated new words. The right anterior PFC showed a greater signal change for false alarms than for hits and correct rejections.
The finding that the signal increase in the right anterior PFC was greater for false alarm responses led us to speculate that excessively high sensitivity to familiarity in the right anterior PFC may be a cause of the frequent occurrence of false alarms. The finding of a greater signal change in the left anterior PFC for correct rejections than for hits and false alarms suggests a major contribution of the left anterior PFC to systematic source monitoring or conscious recollection. The analysis of functional connectivity revealed that the posterior PFC in each hemisphere had strong functional interconnections with the contralateral posterior PFC, whereas the anterior PFC in each hemisphere had only weak functional interconnections with the contralateral anterior PFC.
In this research, we investigated the functions of the human eye-gaze perception and control system. The whites of human eyes produce a strong contrast with the pupils, and in light of this fact, detecting eye-gaze direction is relatively easy. Our results in Experiment 1, however, showed that the perceived eye-gaze direction is affected not only by the eyes but also by the direction of the whole head. The actually gazed-at position can be estimated from the perceived position with a linear function model. From the results of Experiment 2, we found that the head and eye movements of the perceiver are also systematically affected by the looker's head rotation. Additionally, the MRI study in Experiment 3 showed that the rotation center of the eyeball when the participant gazed at various points was located posteriorly, and that the rotation angle was asymmetric between the two eyes.
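The linear function model mentioned in the abstract above can be sketched as a simple least-squares fit of actual position against perceived position. The data points and the one-dimensional model form are illustrative assumptions, not values or details from the study.

```python
import numpy as np

# Hypothetical example data: perceived gaze positions and the positions
# actually gazed at (both in deg of visual angle); values are illustrative.
perceived = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
actual = np.array([-12.1, -6.2, 0.1, 6.0, 12.2])

# Fit actual = a * perceived + b by least squares.
a, b = np.polyfit(perceived, actual, deg=1)

def estimate_actual(p):
    """Estimate the actually gazed-at position from a perceived position."""
    return a * p + b
```

A slope greater than 1 here would mean observers systematically underestimate gaze eccentricity, so the model expands perceived positions back toward the actual ones.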
Previous research has shown a large difference between recognition of familiar and unfamiliar faces. We are surprisingly good at recognizing familiar faces, while we are poor at matching images of unfamiliar faces. It is also known that there is an advantage for matching the internal features of familiar faces, in contrast to an advantage for the external features of unfamiliar faces. However, it is not clear how facial representations change as faces become more familiar. Two experiments were conducted to examine what differences there are between familiar and unfamiliar facial representations. Experiment 1 showed that familiar faces differ qualitatively from unfamiliar faces in mental representation. Experiment 2 suggested that the distinctiveness of internal features is exaggerated in the memory representations of familiar faces. These results suggest that one of the differences between familiar and unfamiliar faces is the distinctiveness of internal features in mental representation.
Artificial human faces, such as robotic faces, dolls' faces, computer graphics (CG) images of faces, and cartoon faces, are perceived as human faces. We rarely, however, confuse artificial faces with the faces of real human individuals (natural faces). This suggests that our visual system involves mechanisms that are sensitive to the realism of faces (i.e., the similarity of an artificial face to a natural face) and enable us to distinguish between artificial and natural faces. Past face perception research has largely neglected how realism influences the perception of the face. In this article, a research strategy for investigating Mori's (1970, Energy, 7, 33-35) 'uncanny valley', the hypothesis that people will have an unpleasant impression of a robot that has an almost, but not perfectly, realistic human appearance, is presented to demonstrate that realism can be an interesting independent variable in face perception research.
Our research goal is to discover the principles underlying natural communication among individuals. For this purpose, we explore the human likeness of androids and the naturalness of human-android communication through the development of androids. In this paper, we focus on the android's face and discuss the human likeness of a face from the viewpoint of android development. The paper presents the essential points for making an android's face humanlike. Moreover, it describes a psychological experiment evaluating the likability of the android's movements, including facial movements (e.g., blinking and eye movement).
We conducted a visual search experiment in which participants detected a target (a moving natural image) that moved in the opposite direction to the other, distractor images. A set size effect, that is, a decrease in accuracy as the number of stimuli in the display increased, was observed. In another experiment, conducted to identify the factor that restricted the participants' performance in the task, we used moving random-dot patterns with various amounts of relative motion. We found that a target with zero relative motion did pop out, but that accuracy decreased as the amount of relative motion increased. We estimated the strength of the relative motion of the moving natural images with the motion-energy model and found a negative correlation between accuracy and the strength of relative motion in the natural images. These results suggest that relative motion is the primary factor in visual search for moving natural images based on direction information.
This study investigated the mechanism by which human vision determines the orientation of depth-rotated familiar objects. We hypothesized that the symmetry of the contours in the front and back views of objects is used to determine their orientation, because familiar objects are often bilaterally symmetrical. The participants were required to detect 15° differences in the depth orientation of two objects presented simultaneously. Detection was better in the 0° (front) and 180° (back) conditions than in the 45°, 90°, and 135° conditions (Experiment 1). These results confirmed the predicted contribution of symmetry. The same pattern of results was observed when detection of an orientation difference between two different objects was required (Experiment 2).
We investigated the effect of haptic information on visual illusions and on the appearance of simple geometrical patterns lacking the figural components that induce visual illusions. Experiment I examined the effect of haptic information on the visual illusions of the Hering and Wundt figures. The haptic stimuli, with three different curvatures, were made of wooden boards. The participants were asked to judge the apparent curvature of each test figure, with or without the haptic stimuli. In Experiment II, the participants judged the apparent curvature of each test figure without the components inducing the visual illusions, again with or without the haptic stimuli. It was found that the visual illusions were biased towards the direction of the information given by actively touching the haptic stimuli, whereas the visual perception of simple patterns without any inducing components was not affected by active touching.
We investigated the effects of the biomechanical constraints of the human body on the visual recognition of body movements and its view dependency. First, we made pictures of twenty-seven different body movements with three-dimensional computer modeling software. All of these movements were biomechanically possible. Twenty-seven impossible movements were then made by modifying the possible ones. Pairs of these movements, seen from a variety of viewing angles (0-180 deg), were displayed sequentially, and the subjects were asked whether the movements were identical or different, regardless of any difference in viewing angle. We observed that recognition of the possible movements was better and more view-independent than that of the impossible movements. The recognition of human body movements, therefore, is affected by the biomechanical relationships of body parts and movements.
Previous studies have suggested that experience of sign language has an effect on visuospatial cognitive ability. Because the production and perception of hand postures are important elements of sign language, it has been suggested that the cognitive abilities of deaf signers for hand postures are specialized in some way. This study investigated the production of mental images of hand postures by deaf signers and hearing non-signers. The participants were required to imagine various hand postures and to judge whether the imagined postures would be physically natural or not. It has been suggested that it takes longer to imagine unnatural postures of body parts than natural postures because of biomechanical constraints. However, the present results indicated that the naturalness of the postures affected the reaction times of the hearing group but not those of the deaf group. These results suggest that experience of sign language may alter the cognitive processing of hand postures.
A memorized target is often mislocalized towards a task-irrelevant landmark (the landmark effect). Using Kanizsa subjective figures as landmarks, we have shown that attentional shifts towards an illusory object produce the landmark effect (Experiment 1). The observers manually reproduced the position of a target circle that was presented above a central fixation cross and accompanied by a physical or a subjective square. The degree of memory displacement towards illusory squares was comparable with that towards physical squares, indicating that coarse spatial filtering cannot explain the landmark effect. In addition, we confirmed the landmark effect in object-based and location-based attention paradigms (Experiments 2 and 3). Given these results, we suggest that attentional modulation of neural location signals might be the source of the displacement.
The detectability of luminance-modulated grating patches was determined for four adult chimpanzees. The behavioral experiment used a Parameter Estimation by Sequential Testing (PEST) procedure. The first experiment evaluated the consistency and reliability of thresholds estimated with the PEST procedure in comparison with those collected with the method of constant stimuli. The chimpanzees displayed very small between-session variances in threshold values for detecting a 0.5 c/deg horizontal Gabor-type patch, and the threshold values matched those estimated with the method of constant stimuli well. In the second experiment, contrast sensitivity functions were obtained over the range of 0.5-8.0 c/deg. The contrast sensitivity was similar to that of humans, showing bandpass-shaped functions with both low- and high-frequency attenuation. These results suggest that humans and chimpanzees share the same physiological mechanisms underlying spatial vision.
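The adaptive logic behind PEST-style threshold tracking can be sketched as a simplified staircase. Full PEST (Taylor & Creelman, 1967) also uses a Wald sequential likelihood test and step-doubling rules, which are omitted here; each trial's response directly drives the stimulus level, the step is halved at every reversal, and the run stops when the step falls below a minimum. All parameter values are illustrative.

```python
def run_staircase(respond, start=0.5, step=0.16, min_step=0.01):
    """Track a detection threshold; respond(c) returns True if contrast c
    is detected. Returns the final tracked level as the threshold estimate."""
    level = start
    last_direction = 0  # +1 = level last moved up, -1 = moved down
    while step >= min_step:
        detected = respond(level)
        direction = -1 if detected else +1  # go down after a hit, up after a miss
        if last_direction != 0 and direction != last_direction:
            step /= 2.0  # reversal: halve the step size
        level = max(0.0, level + direction * step)
        last_direction = direction
    return level

# Usage with a hypothetical deterministic observer whose threshold is 0.2.
estimate = run_staircase(lambda c: c >= 0.2)
```

With a deterministic observer the track homes in on the true threshold; with a real (stochastic) observer, the same rules concentrate trials near threshold, which is why adaptive procedures need far fewer trials than the method of constant stimuli.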
Two male Japanese monkeys were trained to perform a sequential conditional discrimination task with asymmetric reward. In this task, the first cue, A1 (or A2), was followed by cues B1 and B2, and the monkey made a saccadic eye movement to choose one of the two (B1 or B2). After this first choice, the third cues, C1 and C2, were displayed, and the monkey selected one of them. The two correct association chains were A1-B1-C1 and A2-B2-C2. The asymmetric reward rule was introduced block by block: in one block the A1 chain was rewarded but the A2 chain was not, and in another block the reverse held. The monkeys displayed differential reaction times and choice accuracy based on reward prediction. We also observed neural activity in the prefrontal cortex associated with the reward prediction and the stimulus-reward relationship.
The perceived duration of a stimulus is increased or decreased when its physical attributes, such as area size, differ from those of a comparison stimulus. We measured the perceived duration of a visual object with an invariant physical size, but whose apparent area size was altered by the Ebbinghaus illusion: a central circle surrounded by larger inducers appears smaller than a central circle of the same size surrounded by smaller inducers. The results showed that the perceived duration of apparently large circles was longer than that of apparently small circles, even though the actual area size remained invariant. We conclude that information about time perception is embedded in later visual processing systems.
We investigated the perception of the motion trajectories of objects from moving cast shadows in human infants, macaque infants, and an adult chimpanzee, using a habituation-dishabituation procedure. We tested the ability to discriminate between a "depth" display, which contained a ball and a cast shadow moving diagonally, and an "up" display, containing a ball with a diagonal trajectory and a cast shadow with a horizontal trajectory. The results suggest that the ability to perceive the motion trajectories of objects from moving cast shadows is shared by the three primate species.
Previous findings have suggested that infants prefer other infants, especially infants of the same age, over older or younger infants. A preference for individuals who are similar to themselves might form the basis of such peer preference in infancy. The present study examined the role of bodily movements as a cue for detecting this similarity. It focused on the types of locomotion that develop rapidly in infancy and investigated whether infants show a looking preference for a type of locomotion similar to their own. Two types of biological motion (BM) were used as stimuli. The results indicated that infants who could crawl but could not walk preferred to look at crawling BM, while infants who could walk without support preferred to look at walking BM. This suggests that bodily movement may play an important role in detecting similarity to the self.
Recent studies have reported a similarity in the neural processing of human and robot actions; however, whether this is the case remains controversial. We examined this controversy by using the inversion effect: a face- and body-sensitive component of the event-related potential is enhanced and delayed by an inverted face or body, but not by an inverted object. The results revealed that the inversion effect occurred only with a human appearance, and not with a robot or point-light appearance. This suggests that our visual system processes human and robot actions differently.