Visual object recognition uses both viewpoint-specific and viewpoint-invariant information. In this study we investigated what types of information are used when visually learned objects are recognized by haptics, and vice versa. First, a novel 3-D object was presented, either visually or haptically, from five viewpoints along its vertical axis. This was followed by a series of recognition tests in which test stimuli were presented either visually or haptically from various viewpoints, and participants indicated whether each test stimulus was, or was not, the same as the object presented earlier. In Experiment 1, participants were told the test modality before the novel object was presented. Their recognition performance was viewpoint invariant across modalities, whereas the learned viewpoint showed an advantage within a modality. In Experiment 2, in which the test modality was not known in advance, performance both within and across modalities was viewpoint invariant. These results suggest that only viewpoint-independent information is available for object recognition across modalities, whereas viewpoint-dependent information also becomes available for recognition within a modality when the test modality is known before the novel object is presented.