We examined the effects of the relative motion components of a stimulus changing in size on the aftereffect of motion-in-depth. In experiment 1, we used four moving lines forming a square. In one condition, two pairs of opposing lines moved simultaneously outward or inward (similarity), whereas in the other condition, one pair moved outward while the other pair moved inward, and vice versa (shear). After adaptation to these stimuli, two stationary vertical lines were presented. In the "similarity" condition, the stationary lines were first perceived to be moving in the direction opposite to the adapting stimuli, and subsequently perceived to be moving in depth (3D-MAE). In the "shear" condition, however, the 3D-MAE was not induced. In experiment 2, we used two moving lines taken from the upper corner of the stimuli in experiment 1. After adaptation, two right-angled lines were presented. The 3D-MAE was induced in the "similarity" condition, but not in the "shear" condition. These results suggest that our visual system integrates the relative motion components of changing-size images for the perception of motion-in-depth.
Previous research (Kawahara, 1996) suggests that visual attention is directed to the most salient stimulus in the visual field irrespective of the subjects' knowledge or intention (attentional capture). In the present study, we conducted two visual search experiments and confirmed the idea that visual attention is directed in a stimulus-driven way. Subjects searched for a target defined by motion. In one condition, the target was presented among static nontargets; in another condition, one of the nontargets was replaced with a task-irrelevant distractor that had a unique color. Experiment 1 showed that the presence of the distractor prolonged the reaction time for target identification even when the target could be detected independently of the total number of items in the stimulus display (display size). Decreasing the saliency of the distractor color diminished attentional capture (Experiment 2). These results are consistent with the notion of stimulus-driven attentional deployment. A further analysis and experiment were conducted to characterize the nature of attentional capture. Neither the distance between the target and the distractor nor the compatibility of their features was a significant determinant of attentional capture in the present study. Finally, the possibility of, and problems with, a single-activation-map model of attentional deployment are discussed.
The anisotropy of the angle illusion is well established. If the Müller-Lyer illusion is caused by the angle illusion, it should be possible to observe a corresponding anisotropy in the Müller-Lyer illusion. Twenty-two university students estimated the apparent length of the shaft lines of the Müller-Lyer figures (obliques-in, H-shape, and obliques-out figures) and of a control figure, while the orientation of each figure was varied in eight steps: 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5° counterclockwise from the horizontal. Anisotropy of the apparent length of the shaft line was observed for each figure, and the regression curves were well fit by quadratic functions. However, no anisotropy of the Müller-Lyer illusion itself was observed. It was concluded that the relationship between the angle illusion theory and the Müller-Lyer illusion is doubtful.
Songbirds learn their song through imitation, and acute auditory perception is required throughout this process. Except for the hearing range, which is much narrower in birds than in humans, basic hearing abilities in birds and humans are very similar. When complex sounds are used, however, bird and human perception differ considerably. In this report I compared the perception of temporal and spectral factors in humans and birds. I first used operant conditioning techniques to measure discrimination performance on species-specific songs played normally and played in reverse order. Birds were more sensitive to local cues (i.e., the direction of frequency modulation) than to global cues (i.e., the order of the syllables). I also tested whether birds could perceive the missing fundamental when spectrally complex sounds were used. Birds attended not only to the missing fundamentals but also to the average spectra of the stimulus complex.
This paper describes two types of auditory aftereffects concerning sound localization. The first type involves shifts in the apparent location of a test sound with a given interaural time difference (ITD) following presentation of an adapting sound with a different ITD. The second type involves changes in ITD discrimination thresholds following adaptation. Both aftereffects can be simulated by an interaural cross-correlation model with gain control. These results are consistent with the idea that the gain of ITD-selective units, located after binaural interaction but before across-frequency integration, changes according to recent input.
The McGurk effect (McGurk & MacDonald, 1976) is an audiovisual illusion demonstrating that visual articulatory information is integrated with auditory information during speech perception. This paper describes what kinds of cues visible speech conveys and how the integration of visual and auditory information depends on one's native language (or culture) and proficiency in a second language. In lipreading, Japanese subjects could categorize visual consonants into labials and nonlabials with accuracy equivalent to that of American subjects reported in the literature. Compared with native speakers of American English, native speakers of Japanese and Chinese were less susceptible to the McGurk effect, indicating a manner of processing that weights auditory information heavily. Among the Chinese subjects residing in Japan, the McGurk effect became stronger the longer the subjects had lived in Japan, suggesting that acquisition of a second language promotes the use of visual cues.