2011, Volume 44, Issue 2, Pages 2_29–2_45
Human beings can convey their intentions to other people non-verbally, through gaze and gestures. It is known that infants acquire this ability during development, before they learn to speak. We focus on the phenomenon of “joint attention,” in which infants look at objects by following their parent’s gaze. This behavior begins as a reflexive one, but then progresses to the next stage: understanding and sharing other people’s intentions. We attempt to clarify joint attention by constructing a simple robot that visually interacts with human beings. In this paper, we propose two mechanisms important for understanding and sharing intentions: an association mechanism that produces intentional gaze following, and an inference mechanism that recursively rectifies the robot’s own intentions by understanding the intentions of the interacting person.
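The abstract does not specify how the association mechanism is implemented. As a minimal illustrative sketch only (all names, the discretization, and the reward-driven update rule are assumptions, not the authors' method), a robot could learn gaze following by strengthening associations between a caregiver's observed head pose and the locations where salient objects were actually found:

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: the robot learns an
# association between a caregiver's discretized head pose and discretized
# object locations, reinforced whenever following that association succeeds.
N_POSES = 5        # hypothetical number of head-pose bins
N_LOCS = 5         # hypothetical number of gaze-target location bins
LEARNING_RATE = 0.1

rng = np.random.default_rng(0)
assoc = np.zeros((N_POSES, N_LOCS))   # pose -> location association strengths

def choose_gaze(pose, epsilon=0.2):
    """Follow the strongest learned association, with occasional exploration."""
    if rng.random() < epsilon or assoc[pose].max() == 0.0:
        return int(rng.integers(N_LOCS))
    return int(assoc[pose].argmax())

# Toy interaction loop: the caregiver's head pose points at the object's true
# location, so the mapping to be learned is simply the identity.
for step in range(2000):
    true_loc = int(rng.integers(N_LOCS))
    pose = true_loc                      # caregiver looks at the object
    gaze = choose_gaze(pose)
    reward = 1.0 if gaze == true_loc else 0.0
    # Strengthen pose-location pairs whose gaze shift found the object
    assoc[pose, gaze] += LEARNING_RATE * (reward - assoc[pose, gaze])

# After learning, each head pose maps to the location the caregiver attends to
learned = [int(assoc[p].argmax()) for p in range(N_POSES)]
print(learned)
```

In this toy setting the robot converges to following the caregiver's gaze because only pose-location pairs that led to the object are reinforced; the reflexive random gaze shifts gradually give way to an intentional, association-driven policy, which mirrors the developmental progression the abstract describes.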