Journal of the Robotics Society of Japan
Online ISSN : 1884-7145
Print ISSN : 0289-1824
ISSN-L : 0289-1824
Joint Attention Learning based on Early Detection of Self-Other Motion Equivalence with Population Codes
Yukie Nagai
2007 Volume 25 Issue 5 Pages 727-737

Abstract

This paper presents a robotic learning model for joint attention based on self-other motion equivalence. Joint attention is a type of imitation in which a robot looks at the object that another person is looking at by producing an eye-head movement equivalent to the person's. This suggests that the ability can be acquired by detecting the equivalence between the robot's own movement and the person's. The proposed model enables a robot to detect a person's eye-head movement as optical flow in vision, and the movement of its own eyes and head as a motion vector in the somatic sense. Because both movements are represented with population codes, the robot can acquire the motion equivalence as simultaneous activations of homologous neurons that respond to the same motion direction in the two senses. Experimental results show that the model enables a robot to learn to establish joint attention based on early detection of self-other motion equivalence, and that the equivalence is acquired as a well-structured visuomotor map. The results moreover exhibit analogies with the development of human infants, which indicates that the model may help to explain infant development.
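The core idea of the abstract — representing motion directions in two senses with population codes and acquiring their equivalence through simultaneous activation of like-tuned neurons — can be illustrated with a minimal sketch. The tuning curves, neuron count, and Hebbian learning rule below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def population_code(direction, n_neurons=8, kappa=2.0):
    """Encode a motion direction (radians) as the activities of a
    population of direction-tuned neurons (von Mises-like tuning).
    Illustrative only; the paper's exact coding is not specified here."""
    preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
    return np.exp(kappa * (np.cos(direction - preferred) - 1.0))

def hebbian_update(W, visual, somatic, lr=0.1):
    """Strengthen links between simultaneously active visual and
    somatic neurons (outer-product Hebbian rule, an assumed mechanism)."""
    return W + lr * np.outer(somatic, visual)

# Self-other motion equivalence: when the observed person's movement
# (visual, optical flow) and the robot's own eye-head movement (somatic)
# share a direction, neurons with the same preferred direction in the
# two populations fire together, and their connection grows.
W = np.zeros((8, 8))
for d in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    W = hebbian_update(W, population_code(d), population_code(d))

# The strongest weight in each row lies on the diagonal: neuron i in
# one sense becomes linked to neuron i (same preferred direction) in
# the other, forming a structured visuomotor map.
```

Training on matched direction pairs concentrates weight on the diagonal of `W`, which is one simple way to read the paper's "simultaneous activations of homologous neurons" as a learnable mapping.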

© The Robotics Society of Japan