Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Magnetoencephalographic Analysis of Predictive Information Processing Based on Audio-Visual Congruency (Special Issue: Psychology and VR in the BMI/BCI Era)
Atsushi Aoyama, Hiroshi Endo, Satoshi Honda, Tsunehiro Takeda

2007, Volume 12, Issue 1, pp. 45-55

Abstract

Magnetoencephalography (MEG) was used to investigate the brain mechanism of audio-visual predictive information processing and to propose a basic theory for the optimal presentation of multimodal information in VR. After repetitive delivery of a specific visual-auditory pattern in which the visual part preceded the auditory part, either a congruent or an incongruent pattern with a deviant auditory part was presented to subjects (e.g., V_aA_a, V_aA_a, V_aA_a, V_bA_b/V_aA_b, ...). When subjects predicted the probable sound from the visual cue, a congruent pattern evoked smaller, and an incongruent pattern larger, auditory activity in the bilateral supratemporal areas. Since this so-called mismatch field (MMF) is considered to reflect a unimodal change-detection process in auditory sensory memory, the change in strength suggests that the template normally used in that process is also created explicitly from visual information, allowing incongruities to be detected earlier. We thus conclude that the essence of such crossmodal completion lies in this template creation.
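
The sequence described above follows an oddball-style design: a frequent congruent visual-auditory pair is repeated, and an occasional deviant trial changes the auditory part either congruently (the visual cue changes as well) or incongruently (the visual cue stays the same). The Python sketch below illustrates such a sequence; the function name, trial labels, and deviant probability are illustrative assumptions and are not taken from the paper.

import random

# Illustrative sketch only (not the authors' experimental code): generate an
# oddball-style audio-visual sequence in the spirit of the paradigm above.
# The visual cue precedes the auditory stimulus; most trials repeat a fixed
# congruent pairing (V_a -> A_a), while occasional deviant trials change the
# auditory part either congruently (V_b -> A_b) or incongruently (V_a -> A_b).
# Probabilities and labels are assumptions made for illustration.

def make_sequence(n_trials=200, p_deviant=0.15, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            kind = rng.choice(["congruent_deviant", "incongruent_deviant"])
            visual, auditory = ("V_b", "A_b") if kind == "congruent_deviant" else ("V_a", "A_b")
        else:
            kind, visual, auditory = "standard", "V_a", "A_a"
        trials.append({"visual": visual, "auditory": auditory, "type": kind})
    return trials

if __name__ == "__main__":
    # Print a short example sequence of (visual, auditory, trial type) triples.
    for t in make_sequence(n_trials=12, seed=1):
        print(t["visual"], t["auditory"], t["type"])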

© 2007 The Virtual Reality Society of Japan (specified non-profit corporation)