Vol. 25 (2007), No. 5, pp. 727-737
This paper presents a robotic learning model for joint attention based on self-other motion equivalence. Joint attention is a type of imitation in which a robot looks at the object that another person is looking at by producing an eye-head movement equivalent to the person's. This suggests that the ability can be acquired by detecting an equivalent relationship between the robot's movement and the person's. The model presented here enables a robot to detect the eye-head movement of a person as optical flow in its vision and the movement of its own eyes and head as a motion vector in its somatic sense. Because both movements are represented with population codes, the robot can acquire the motion equivalence as simultaneous activations of homogeneous neurons, i.e., neurons responsive to the same motion direction in the two senses. Experimental results show that the model enables a robot to learn to establish joint attention based on the early detection of the self-other motion equivalence, and that the equivalence is acquired in a well-structured visuomotor map. The results moreover show analogies to the development of human infants, indicating that the model may help explain infant development.
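The core mechanism described above, i.e., representing motion direction with population codes in two modalities and linking homogeneous neurons through their simultaneous activation, can be illustrated with a minimal sketch. This is not the paper's implementation; the tuning function, neuron count, and simple Hebbian update are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's model): population coding of a
# motion direction and Hebbian association between a visual population
# (optical flow of the other's eye-head movement) and a somatic population
# (the robot's own eye-head motion vector).
import numpy as np

N = 8  # number of direction-tuned neurons per modality (assumed)
preferred = np.linspace(0, 2 * np.pi, N, endpoint=False)

def population_code(theta, kappa=2.0):
    """Normalized activity of direction-tuned neurons for direction theta,
    using rectified-cosine tuning (an illustrative choice)."""
    act = np.maximum(np.cos(preferred - theta), 0.0) ** kappa
    return act / act.sum()

# Hebbian association: weights grow when visual-flow neurons and somatic
# motion neurons fire at the same time, as when the robot's movement
# mirrors the person's.
W = np.zeros((N, N))
rng = np.random.default_rng(0)
for _ in range(500):
    theta = rng.uniform(0, 2 * np.pi)  # shared motion direction
    v = population_code(theta)         # visual sense (optical flow)
    s = population_code(theta)         # somatic sense (motion vector)
    W += np.outer(s, v)                # co-activation strengthens the link

# After learning, W is strongest on its diagonal: homogeneous neurons
# (same preferred direction in both senses) become most strongly linked,
# yielding a well-structured visuomotor map.
print((np.argmax(W, axis=1) == np.arange(N)).all())
```

The diagonal dominance of `W` is the learned self-other motion equivalence: each somatic direction neuron ends up most strongly connected to the visual neuron with the same preferred direction.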