VISION
Online ISSN : 2433-5630
Print ISSN : 0917-1142
ISSN-L : 0917-1142
Review Article
What Do Deep Neural Networks Reveal About the Neural Mechanisms of Self-Motion Perception?
Oliver W. LAYTON
Journal: Free Access

2026, Volume 38, Issue 1, pp. 7-15

Abstract

Artificial neural networks (ANNs) have drawn substantial inspiration from the visual system over the past half century. This relationship has become increasingly bidirectional, with deep learning now providing insight into the brain mechanisms underlying visual perception. In this paper, we use ANNs as a “toolbox” to test the extent to which biological neural tuning properties emerge from different computational objectives and mechanisms. We focus on replicating the tuning properties of area MSTd in the primate dorsal visual stream, which supports self-motion perception from optic flow—the pattern of retinal motion produced during movement through the environment. Interestingly, ANNs optimized for accurate self-motion estimation exhibit weak correspondence with MSTd tuning. By contrast, networks trained to achieve the more biologically plausible goal of reconstructing motion representations without labeled data (autoencoders) show stronger correspondence. These findings suggest that the brain may prioritize an efficient representation over accuracy when processing self-motion, offering new insight into the computational goals of the visual system.
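The contrast the abstract draws — supervised self-motion estimation versus label-free reconstruction — can be illustrated with a minimal autoencoder sketch. This is not the paper's model: the input dimensionality, bottleneck size, synthetic "optic flow" inputs built from random flow templates, and plain gradient-descent training loop are all assumptions chosen only to show the unsupervised reconstruction objective in code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_latent, n_samples = 40, 8, 200

# Synthetic stand-ins for optic-flow inputs: each sample mixes a few
# global flow templates plus noise (hypothetical data, not from the paper).
templates = rng.standard_normal((4, n_units))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)
coeffs = rng.standard_normal((n_samples, 4))
X = coeffs @ templates + 0.1 * rng.standard_normal((n_samples, n_units))

# Linear encoder/decoder weights; the bottleneck plays the role of the
# compressed, MSTd-like representation.
W_enc = 0.1 * rng.standard_normal((n_units, n_latent))
W_dec = 0.1 * rng.standard_normal((n_latent, n_units))

lr = 0.1
for _ in range(2000):
    Z = X @ W_enc                      # encode into the bottleneck
    X_hat = Z @ W_dec                  # decode / reconstruct the input
    err = X_hat - X
    # Gradients of mean squared reconstruction error (no labels used)
    g_dec = Z.T @ err / n_samples
    g_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

The point of the sketch is the objective, not the architecture: the network is never told the true self-motion parameters; it is rewarded only for reconstructing its input through a compressed code, which is the "efficient representation" goal the abstract contrasts with accuracy-optimized estimation.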

© 2026 The Vision Society of Japan