VISION
Online ISSN : 2433-5630
Print ISSN : 0917-1142
ISSN-L : 0917-1142
Current issue
  • Yuko (SASAGAWA) SHIOGAMA, Tadashi MIYAMOTO
    2026 Volume 38 Issue 1 Pages 1-6
    Published: January 20, 2026
    Released on J-STAGE: January 21, 2026
    JOURNAL FREE ACCESS

    Using a 32-patch color-naming test based on the PCCS hue circle, we examined 122 individuals with congenital color vision deficiency. Dichromats averaged 16.7 errors, whereas anomalous trichromats averaged 5.4 errors. Errors were most frequent in very pale and very dark colors. Dichromats confused both adjacent and widely separated hues, while anomalous trichromats primarily confused adjacent hues on the hue circle. Naming responses were highly unstable and exhibited large inter-individual variability. This quick, clinically practical test provides valuable information for vocational guidance and daily-life support for individuals with color vision deficiency.
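
    Since the abstract distinguishes confusions between adjacent hues and widely separated hues on the hue circle, a small sketch may make the scoring concrete. The following Python is a minimal illustration: the patch positions, color names, and adjacency threshold are all hypothetical, and the study's actual 32-patch PCCS materials and scoring criteria are not reproduced here.

    # Minimal sketch of scoring a hue-circle color-naming test.
    # All patch positions, names, and thresholds are hypothetical.

    patches = [
        (0, "red"), (4, "orange"), (8, "yellow"), (12, "green"),
        (16, "blue green"), (20, "blue"), (24, "purple"), (28, "red purple"),
    ]

    def hue_distance(a, b, n=32):
        """Shortest distance between two positions on an n-step hue circle."""
        d = abs(a - b) % n
        return min(d, n - d)

    def score(responses, name_to_hue, adjacent_threshold=4):
        """Count naming errors and split them into adjacent-hue versus
        widely-separated-hue confusions (the distinction the abstract
        draws between anomalous trichromats and dichromats)."""
        errors = adjacent = distant = 0
        for (hue, correct), answer in zip(patches, responses):
            if answer == correct:
                continue
            errors += 1
            if hue_distance(hue, name_to_hue[answer]) <= adjacent_threshold:
                adjacent += 1
            else:
                distant += 1
        return errors, adjacent, distant

    # Example: confusing red and green, a widely separated pair.
    name_to_hue = {name: hue for hue, name in patches}
    responses = ["green", "orange", "yellow", "red",
                 "blue green", "blue", "purple", "red purple"]
    print(score(responses, name_to_hue))  # (2, 0, 2)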

  • Oliver W. LAYTON
    2026 Volume 38 Issue 1 Pages 7-15
    Published: January 20, 2026
    Released on J-STAGE: January 21, 2026
    JOURNAL FREE ACCESS

    Artificial neural networks (ANNs) have drawn substantial inspiration from the visual system over the past half century. This relationship has become increasingly bidirectional, with deep learning now providing insight into the brain mechanisms underlying visual perception. In this paper, we use ANNs as a “toolbox” to test the extent to which biological neural tuning properties emerge from different computational objectives and mechanisms. We focus on replicating the tuning properties of area MSTd in the primate dorsal visual stream, which supports self-motion perception from optic flow, the pattern of retinal motion produced during movement through the environment. Interestingly, ANNs optimized for accurate self-motion estimation exhibit weak correspondence with MSTd tuning. By contrast, networks trained to achieve the more biologically plausible goal of reconstructing motion representations without labeled data (autoencoders) show stronger correspondence. These findings suggest that the brain may prioritize an efficient representation over accuracy when processing self-motion, offering new insight into the computational goals of the visual system.
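
    Since the abstract contrasts two training objectives on the same inputs, a small sketch may clarify the setup. The following Python (PyTorch) is a minimal illustration with synthetic flow fields and illustrative layer sizes; it is not the paper's actual architecture, data, or comparison procedure.

    import torch
    import torch.nn as nn

    FLOW_DIM = 2 * 15 * 15   # flattened 15x15 field of 2D motion vectors
    HIDDEN = 64              # hidden units to compare against MSTd tuning

    encoder = nn.Sequential(nn.Linear(FLOW_DIM, HIDDEN), nn.ReLU())
    regressor = nn.Linear(HIDDEN, 3)       # objective 1: predict self-motion
    decoder = nn.Linear(HIDDEN, FLOW_DIM)  # objective 2: reconstruct the flow

    flow = torch.randn(128, FLOW_DIM)      # stand-in optic-flow batch
    heading = torch.randn(128, 3)          # stand-in self-motion labels

    params = (list(encoder.parameters()) + list(regressor.parameters())
              + list(decoder.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    USE_AUTOENCODER = True  # switch between the two objectives

    for _ in range(100):
        h = encoder(flow)
        if USE_AUTOENCODER:
            # Unsupervised reconstruction: no labels needed.
            loss = nn.functional.mse_loss(decoder(h), flow)
        else:
            # Supervised self-motion estimation from labeled flow.
            loss = nn.functional.mse_loss(regressor(h), heading)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Hidden responses `h` would then be compared against MSTd tuning
    # curves; the paper's comparison procedure is not shown here.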
