We propose a new neural network model for extracting binocular disparity. In addition to edges and lines, the model uses corners as matching features. Each neuron in the model interacts with neighboring neurons through excitatory and inhibitory connections so that false matches are eliminated; these connections implement the smoothness and uniqueness constraints known from the computational theory of vision. The model has multiple channels tuned to different spatial frequencies, which raises the likelihood of extracting correct binocular disparities. Moreover, the model represents binocular disparity with a population code, making it possible to encode disparity at sub-pixel resolution. We demonstrate in computer simulations that the model performs well at extracting binocular disparities.
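The cooperative mechanism described above (excitatory support among neighbors at the same disparity for smoothness, inhibition among competing disparities at the same position for uniqueness, and a population-code readout for sub-pixel disparity) can be illustrated with a minimal 1-D sketch in the style of the classical Marr–Poggio cooperative algorithm. This is not the authors' exact network; the feature type (raw pixel matches rather than corners/edges), neighborhood sizes, and parameters `iters` and `epsilon` are all illustrative assumptions.

```python
import numpy as np

def cooperative_stereo(left, right, max_disp, iters=8, epsilon=2.0):
    """Marr-Poggio-style cooperative matching on a 1-D image row
    (an illustrative sketch, not the paper's exact model).
    State C[x, d] is the support for disparity d at position x."""
    n = left.shape[0]
    # Initial candidate matches: 1 where left pixel x agrees with
    # right pixel x - d (includes false matches to be pruned).
    C = np.zeros((n, max_disp + 1))
    for d in range(max_disp + 1):
        C[d:, d] = (left[d:] == right[:n - d]).astype(float)
    init = C.copy()
    for _ in range(iters):
        # Excitation: the cell itself plus neighboring positions at
        # the SAME disparity (smoothness constraint).
        excite = C.copy()
        excite[1:, :] += C[:-1, :]
        excite[:-1, :] += C[1:, :]
        # Inhibition: OTHER disparities at the same position
        # (uniqueness constraint).
        inhibit = C.sum(axis=1, keepdims=True) - C
        # Synchronous thresholded update, reinforced by initial matches.
        C = ((excite - epsilon * inhibit + init) > 1.0).astype(float)
    return C

def decode_subpixel(C, x):
    """Population-coding readout: the centroid of activity across the
    disparity axis yields a sub-pixel disparity estimate."""
    w = C[x]
    return float((w * np.arange(len(w))).sum() / w.sum()) if w.sum() else None

# Demo: left row is the right row shifted by a true disparity of 2.
right = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
left = np.concatenate(([0, 0], right[:-2]))
C = cooperative_stereo(left, right, max_disp=3)
print(decode_subpixel(C, 5))  # -> 2.0
```

With a binary-valued population, the centroid readout simply recovers the winning disparity; with graded activity across several disparity-tuned units, the same centroid falls between integer disparities, which is how population coding yields sub-pixel resolution.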