A multi-modal display, including vision, audition, and touch, is generally believed to enhance human task performance (e.g., virtual surgery). However, the empirical validation of the benefit of including touch in a multi-modal display is less established than for vision and audition. Here, in a series of psychophysical experiments, we investigated how human participants integrate dynamic information between vision and touch in a virtual reality environment. We presented an autonomous deformation of a virtual object, and observers were asked to estimate the amount of deformation through vision, touch, or both. In Experiment 1, we validated that multi-modal integration can be described by a computational model based on weighted linear summation. A multi-modal display of dynamic information increased perceptual accuracy compared to a uni-modal display. The relative weight given to each modality was influenced by the relative accuracy of that modality, although the weights were individually biased for each participant. In Experiment 2, we showed that the participants were capable of controlling the weight for each modality, leading to a reduction of the bias and an increase in perceptual accuracy. These results imply that a multi-modal display including touch has the potential to enhance task performance by increasing perceptual accuracy, but also that individual differences such as perceptual strategy (i.e., biased weights) must be controlled for to maximize the benefit of a multi-modal display.
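As an illustration of the weighted linear summation model referenced in Experiment 1, the standard reliability-based (maximum-likelihood) cue-combination formulation is sketched below; the specific weight and variance expressions are this common formulation, stated as an assumption rather than a quotation of the paper's model. A visual estimate \hat{S}_V and a tactile estimate \hat{S}_T of the deformation are combined into

\hat{S}_{VT} = w_V \hat{S}_V + w_T \hat{S}_T, \quad w_V + w_T = 1,

with weights proportional to each modality's reliability (inverse variance),

w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_T^2}, \qquad w_T = 1 - w_V,

so that the combined estimate has variance

\sigma_{VT}^2 = \frac{\sigma_V^2\,\sigma_T^2}{\sigma_V^2 + \sigma_T^2} \le \min(\sigma_V^2, \sigma_T^2),

which is why the multi-modal (vision plus touch) display can be more accurate than either uni-modal display; weights that deviate from these reliability-based values correspond to the individually biased weights observed in the experiments.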