Psychological findings suggest that humans store spatial relations between objects in memory as ‘cognitive maps’, and that three-dimensional spatial information is represented in the brain as a set of two-dimensional representations. Based on these findings, we propose a neural network model that forms a two-dimensional spatial map in a self-organizing manner. The model consists of two functionally different parts: one represents the spatial positions of objects using signals from saccadic eye movements; the other represents the relative positions of objects present simultaneously in the visual field. Computer simulations confirmed that the model exhibits the desired behavior.
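The abstract does not specify the learning rule, but the idea of a map that organizes itself from position inputs can be illustrated with a minimal Kohonen-style self-organizing map. This is only a hedged sketch, not the authors' model: an 8×8 grid of units, each with a 2-D weight vector, is trained on random object positions so that neighboring units come to prefer neighboring positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (not the authors' architecture): a small Kohonen-style
# self-organizing map that learns a 2-D spatial layout from object positions.
GRID = 8                                # map is GRID x GRID units
W = rng.random((GRID, GRID, 2))         # each unit's 2-D weight (preferred position)
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def train(n_steps=4000, sigma0=3.0, lr0=0.5):
    for t in range(n_steps):
        x = rng.random(2)                           # random object position in [0,1]^2
        d = np.linalg.norm(W - x, axis=-1)          # distance of each unit's weight to x
        bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        frac = t / n_steps
        sigma = sigma0 * (0.05 / sigma0) ** frac    # shrinking neighborhood radius
        lr = lr0 * (0.01 / lr0) ** frac             # decaying learning rate
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2 * sigma ** 2))              # neighborhood function on the grid
        W[...] += lr * g[..., None] * (x - W)       # pull neighborhood toward input

train()
```

After training, adjacent units on the grid hold nearby weights, i.e. the map has organized into a topographic representation of the 2-D input space.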
The Correlated Information Extracting Neural Network has been proposed to extract information common to multiple kinds of inputs. When this network was applied to a robot with a visual sensor, the distance to an object could be extracted after learning as the information correlated between motional and visual signals. In the case of stereo vision, which uses two visual sensors, the output representing the distance did not depend on the size of the object. When signals from a tactile sensor were added to the network, the robot could detect, from either the visual or the motional signals, whether it was touching the object.
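The idea of extracting the component shared by two sensory streams can be sketched with a single linear unit trained by Oja's Hebbian rule; this is an assumed illustration, not the actual Correlated Information Extracting Neural Network. Both a "visual" and a "motional" channel are driven by a common latent variable (the distance), plus independent noise, so the dominant shared variance that the unit extracts is the correlated information.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch (not the authors' network): a linear unit with Oja's
# rule receives visual-like and motion-like signals that both depend on a
# common latent distance; the unit converges toward the shared component.
def make_input(d):
    v = d * np.array([1.0, 0.5]) + 0.1 * rng.normal(size=2)   # "visual" channel
    m = d * np.array([0.8, -0.3]) + 0.1 * rng.normal(size=2)  # "motional" channel
    return np.concatenate([v, m])

def oja(n_steps=20000, lr=0.01):
    w = rng.normal(size=4) * 0.1
    for _ in range(n_steps):
        d = rng.uniform(-1, 1)          # latent distance signal
        x = make_input(d)
        y = w @ x
        w += lr * y * (x - y * w)       # Oja's rule: Hebbian term + normalization
    return w

w = oja()

# On fresh inputs, the trained unit's output tracks the latent distance.
ds = rng.uniform(-1, 1, size=500)
ys = np.array([w @ make_input(d) for d in ds])
corr = abs(np.corrcoef(ds, ys)[0, 1])
```

Because the distance drives every channel while the noise terms are independent, the principal direction found by Oja's rule aligns with the shared distance signal, and `corr` comes out close to 1.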