The participants judged whether two successively presented geons (Biederman, 1987) were the same or different. The geons were presented at different rotation angles (0°, 30°, and 60°) through a haploscope. In the qualitatively changed condition, the visible qualitative information of the geons was changed by the rotation; in the quantitatively changed condition, only quantitative information was changed. Half of the participants observed the geons with binocular disparity (3D condition); the other half observed them without disparity (2D condition). In the 2D condition, reaction time (RT) increased with rotation in both changed conditions. The error rate in the qualitatively changed condition increased with rotation, whereas the error rate in the quantitatively changed condition was unaffected. In the 3D condition, by contrast, neither RT nor error rate was affected by rotation in either changed condition. Furthermore, RT in the 3D condition was shorter than in the 2D condition. These results suggest that binocular depth information plays a crucial role in view-independent, structural object representation.
Visual object recognition uses both viewpoint-specific and viewpoint-invariant information. In this study we investigated what types of information are used when visually learned objects are recognized by haptics, and vice versa. First, a novel 3-D object was presented, either visually or haptically, from five viewpoints along its vertical axis. This was followed by a series of recognition tests in which the test stimuli were presented either visually or haptically from various viewpoints. The participants then indicated whether or not the test stimuli were the same as the object presented earlier. In Experiment 1, the participants were told the test modality before the novel object was presented. Their recognition performance showed viewpoint invariance across modalities, whereas the learned viewpoint showed an advantage within a modality. In Experiment 2, without knowledge of the test modality, performance both within and across modalities showed viewpoint invariance. These results suggest that only viewpoint-independent information is available for object recognition across modalities, whereas viewpoint-dependent information also becomes available for recognition within a modality when the test modality is known before the novel object is presented.
Japanese two-kanji compound words (Jukugo; e.g., 侵略 "invasion") have two sets of neighbors: rear neighbors, which share the same kanji at the front (e.g., 侵入 "intrusion"), and front neighbors, which share the same kanji at the rear of the Jukugo (e.g., 戦略 "strategy"). A neighbor of higher frequency is denoted a "+ neighbor", whereas a neighbor of lower frequency is a "- neighbor". We investigated the neighborhood frequency effects of rear neighbors (R+/R-) and front neighbors (F+/F-) in Jukugo identification using a progressive demasking task, in which a pattern mask degraded the visibility of the presented Jukugo. The position of the pattern mask was one of four types: front, rear, whole, and no mask. The results indicated that in the whole and front mask conditions, Jukugo in the F-R+ condition yielded longer reaction times than those in the other conditions. In the rear mask condition, however, no difference in reaction times was observed between the F-R+ and F+R+ conditions. The results are discussed in terms of two activation processes: a local activation of a higher-frequency rear neighbor and a global activation of a front neighbor.
To explore effective proofreading methods, this study examined the characteristics of professional proofreaders and of control participants who were not proofreading professionals. In an anomalous word detection task, the performance of the proofreaders was better than that of the control participants (Experiment 1). However, this superior ability of the proofreaders to detect anomalous words may not be due to lexical knowledge, because the estimated vocabulary of the two groups was approximately the same. In the Stroop color-word task, only the control participants demonstrated Stroop interference (Experiment 2). The change detection performance of the proofreaders was better than that of the control participants in a change blindness task (Experiment 3). This superior change detection ability, together with the lack of interference in the Stroop task, suggests that proofreaders inhibit interference from disruptive information and can conduct an exhaustive search for anomalous words. These characteristics of high-level vision are regarded as keys to effective proofreading.
Visual search experiments were conducted to examine whether the representation of our own face can distinguish between normal and mirror-reversed images. The participants performed a more efficient search when they detected a normal image of their own face against a background of mirror-reversed images than in the reverse condition. This search asymmetry reflects a familiarity advantage for the mirror-reversed image over the normal image. The results suggest that the human visual system holds a refined, mirror-reversed representation of our own face and perceives even slight fluctuating asymmetry. This acute sensitivity to our own face may represent the upper bound of visual object recognition ability, a consequence of the daily experience of viewing our reversed image in a mirror, which has raised this acuity to the highest level.
Using spatial frequency (SF) analysis, we investigated the visual information of human heads in different views under inverted presentation. The stimuli were inverted images of the heads of three familiar people, viewed from the frontal view to the back of the head, created with four band-pass filters (8, 16, 32, and 64 c/fw). The results of Experiment 1 indicated that mean reaction times (RTs) for both the three-quarter and profile views increased linearly as the heads were changed from upright to inverted orientations. However, there was no significant difference in RT between upright and inverted presentations for the frontal view and the back of the head. The results of Experiment 2 showed that RT decreased for the inverted frontal and profile views when the images contained high SFs, whereas RT for the back of the head decreased in the middle range of SFs. These results suggest that different view-specific information may exist in each view, even under inverted presentation. In addition, processing differed between upright and inverted orientations not only for the frontal view but also for the profile view and the back of the head.
We conducted two experiments to investigate the effects of time-series changes in facial expressions on the recognition of those expressions. Ten subjects participated in each experiment. They were asked to judge the facial expression (smiling or sad) in a two-alternative forced-choice (2AFC) task. In Experiment 1, recognition of morphed and actual dynamic images was compared. The results indicated that accuracy was approximately 100%, with no difference in reaction time. In Experiment 2, we degraded the image quality of the stimuli. Accuracy was then lower, and reaction time longer, for the morphed images than for the actual images. The results demonstrate that the effects of time-series changes in facial expression are observed only when the quality of the dynamic images is degraded. We hypothesize that minute dynamic facial changes influence the recognition of facial expressions.
Kamide & Mitchell (1997) examined the preferred attachment of ambiguous relative clauses in Japanese. They observed that low attachment was preferred in the middle of processing, but that high attachment was preferred at the end. Mazuka & Uetsuki (2003) examined NPs with an adjective attachment ambiguity and reported that low attachment was strongly preferred. In the present study I examined the factors behind these results, focusing on differences among the stimuli. The effects of adjunct type (adjective vs. relative clause) and linguistic form (NP vs. sentence) on the resolution of syntactic ambiguity were examined. The results indicated that adjunct type affected the interpretation: the rate of high attachment in the adjective condition declined only in the NP condition, whereas there was no effect of adjunct type in the sentence conditions. The rate of high attachment in sentences with an adjective might be increased by Predicate Proximity (Gibson, Pearlmutter, Canseco-Gonzalez, & Hickok, 1996). These results indicate that both the type of adjunct and the linguistic form influence the resolution of attachment ambiguity.
The text (lyrics) and melody are two prominent components of a song. Serafine, Crowder, and Repp (1984) demonstrated an asymmetrical facilitation in memory for songs: a song's text improves recognition of its melody more than the melody improves recognition of the text. Recent studies have suggested that the rhythm and pitch patterns of a melody are processed largely independently. The present study explored the contribution of these two components to this asymmetrical facilitation. In three experiments, participants listened to 20 excerpts from unfamiliar songs and then recognized the text and the melody (or one component of the melody). Performance in all three experiments was consistent with an asymmetry between text and melody (or one of its components). These results suggest that both the rhythm and the pitch patterns contribute to the text-melody asymmetrical facilitation.
For decades, the first author has been engaged in making commercial films, including animations, and has acquired several rules of thumb for animating characters smoothly and effectively. In Section One, it is pointed out that CRT and liquid-crystal screens may produce different appearances for the same source images. In Section Two, the function of animation's Obake, an ambiguous figure briefly inserted between two postures of an animated character, is examined. Japanese animations, especially TV anime, often contain one- and two-koma (frame) shots; the rule for the proper use of this combination, and the reason for it, are introduced in Section Three. In Section Four, the problem of strobing, that is, jerkiness in what should be a smooth movement of an image on a display, is pointed out. In Section Five, the authors emphasize the importance of taking the nature of the audience's mind into account when making animations. In Section Six, it is pointed out that precise lip-synchronization is unnecessary for an effective appearance in Japanese animations.
The reason why sampled motion in movies or television appears to move smoothly has traditionally been explained by apparent motion in the brain. It can, however, be explained clearly as a physical phenomenon by the theory of spatio-temporal sampling and the characteristics of the human eye. From this standpoint, phenomena related to picture quality, such as motion blur in LCDs (liquid crystal displays), are clarified. In a similar way, some interactive phenomena of motion and color are explained in the spatio-temporal frequency domain. In addition, it is pointed out that some examples that have been treated as psychological in fact have nothing to do with the functions of either the brain or the eyes.
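The spatio-temporal sampling account can be illustrated with a minimal numerical sketch (an illustration under assumed parameters, not a derivation from the article): a drifting grating of spatial frequency f_x and velocity v has temporal frequency f_t = f_x·v, and sampling at frame rate F folds that frequency into the band (-F/2, F/2], which predicts when motion is reproduced faithfully and when it aliases into jerky or reversed motion.

```python
# Illustrative sketch of temporal aliasing in sampled motion.
# All parameter values below are assumptions chosen for illustration.

def aliased_temporal_frequency(spatial_freq, velocity, frame_rate):
    """Temporal frequency of a drifting grating after sampling at frame_rate (Hz).

    The true temporal frequency is f_t = spatial_freq * velocity.
    Sampling folds f_t into the range (-frame_rate/2, +frame_rate/2].
    A sign flip in the result means the grating appears to drift
    backwards (the 'wagon-wheel' effect).
    """
    f_t = spatial_freq * velocity                        # true temporal frequency (Hz)
    half = frame_rate / 2.0
    return ((f_t + half) % frame_rate) - half            # folded (aliased) frequency

# Below the Nyquist limit (f_t = 10 Hz < 12 Hz at 24 fps): motion is faithful.
print(aliased_temporal_frequency(2.0, 5.0, 24.0))   # 10.0

# Above the Nyquist limit (f_t = 20 Hz > 12 Hz): the alias is -4 Hz,
# i.e., the grating appears to move in the opposite direction.
print(aliased_temporal_frequency(2.0, 10.0, 24.0))  # -4.0
```

The folding step is the spectral-replication argument made in the frequency domain: sampling replicates the stimulus spectrum at multiples of the frame rate, and whichever replica falls inside the visible band determines the perceived motion.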
Three kinds of psychological approaches to art are discussed. The first is the rating method, which is considered a standard but sharp probe for understanding art from the standpoint of psychology. The second is a mathematical approach, in which, to better understand line perspective, a novel equation for the coordinate transformation from the real world to retinal coordinates is proposed. The third is an approach using visual illusions, which relies on the rule that the larger the illusion magnitude, the more beautiful the image appears.
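As a reference point for the mathematical approach, the textbook pinhole (perspective) projection from world to image coordinates can be sketched as follows. This is the standard form only, not the novel transformation proposed in the article; the function name and parameters are illustrative assumptions.

```python
# Standard pinhole perspective projection (textbook form, for reference only;
# not the novel world-to-retina transformation proposed in the article).

def project(point_xyz, focal_length=1.0):
    """Project a 3-D point (x, y, z), z > 0, onto an image plane at z = focal_length.

    Image coordinates scale inversely with depth, which is the geometric
    basis of line perspective: equally spaced points recede toward a
    vanishing point as z grows.
    """
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# A row of equally spaced fence posts at x = 1 converges toward the
# vanishing point (0, 0): the projected x halves with each doubling of depth.
for depth in (1.0, 2.0, 4.0, 8.0):
    print(project((1.0, 0.0, depth)))
```

Any retinal-coordinate model must reduce to this inverse-depth scaling in the small-angle limit, which is why it serves as a useful baseline when evaluating a proposed transformation.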