VISION
Online ISSN : 2433-5630
Print ISSN : 0917-1142
ISSN-L : 0917-1142
What Do Deep Neural Networks Reveal About the Neural Mechanisms of Self-Motion Perception?
Oliver W. LAYTON

2026 Volume 38 Issue 1 Pages 7-15

Abstract

Artificial neural networks (ANNs) have drawn substantial inspiration from the visual system over the past half century. This relationship has become increasingly bidirectional, with deep learning now providing insight into the brain mechanisms underlying visual perception. In this paper, we use ANNs as a “toolbox” to test the extent to which biological neural tuning properties emerge from different computational objectives and mechanisms. We focus on replicating the tuning properties of area MSTd in the primate dorsal visual stream, which supports self-motion perception from optic flow—the pattern of retinal motion produced during movement through the environment. Interestingly, ANNs optimized for accurate self-motion estimation exhibit weak correspondence with MSTd tuning. By contrast, networks trained to achieve the more biologically plausible goal of reconstructing motion representations without labeled data (autoencoders) show stronger correspondence. These findings suggest that the brain may prioritize an efficient representation over accuracy when processing self-motion, offering new insight into the computational goals of the visual system.
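To make the contrast between the two computational objectives concrete, the following is a minimal PyTorch sketch of (1) a supervised network trained to estimate self-motion (heading) from optic flow and (2) an autoencoder trained only to reconstruct its motion input without labels. The architectures, layer sizes, and variable names are illustrative assumptions, not the models used in the paper.

```python
# Illustrative sketch only -- NOT the paper's actual models or data.
# Contrasts the two training objectives described in the abstract:
# (1) supervised self-motion (heading) estimation, and
# (2) unsupervised reconstruction of the motion input (autoencoder).
import torch
import torch.nn as nn

# Hypothetical flattened optic-flow field: (dx, dy) at each pixel of a 32x32 grid.
FLOW_DIM = 2 * 32 * 32

# (1) Supervised network: maps optic flow to a heading estimate (azimuth, elevation).
heading_net = nn.Sequential(
    nn.Linear(FLOW_DIM, 256), nn.ReLU(),
    nn.Linear(256, 2),
)
heading_loss = nn.MSELoss()  # accuracy objective: match labeled headings

# (2) Autoencoder: compresses the flow field and reconstructs it; no labels needed.
encoder = nn.Sequential(nn.Linear(FLOW_DIM, 64), nn.ReLU())
decoder = nn.Linear(64, FLOW_DIM)
recon_loss = nn.MSELoss()  # efficiency objective: reconstruct the input itself

flow = torch.randn(8, FLOW_DIM)   # stand-in batch of optic-flow fields
headings = torch.randn(8, 2)      # stand-in heading labels

supervised_error = heading_loss(heading_net(flow), headings)
reconstruction_error = recon_loss(decoder(encoder(flow)), flow)
```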

© 2026 The Vision Society of Japan