Magnetoencephalography (MEG) was used to investigate the brain mechanisms underlying audio-visual predictive information processing and to propose a basic theory on the optimal presentation of multimodal information in VR. After repetitive delivery of a specific visual-auditory pattern with a prior visual presentation, either a congruent pattern or an incongruent pattern containing a deviant auditory part was presented to subjects (e.g., V_aA_a, V_aA_a, V_aA_a, V_bA_b/V_aA_b, ...). When subjects predicted a probable sound based on the visual cue, congruent and incongruent patterns evoked smaller and larger auditory activity, respectively, in the bilateral supratemporal areas. Since this so-called mismatch field (MMF) is considered to reflect a unimodal change-detection process in auditory sensory memory, this change in strength suggests that the template normally used in that process can also be created explicitly from visual information, enabling incongruities to be detected earlier. We thus conclude that the essence of such crossmodal completion lies in this template creation.