Abstract
In this paper, we propose an utterance generation method that enables a robot to form a shared belief with the user efficiently and in a mutually adaptive way. The shared belief is formed through the common experience of the robot and the user; it enables each of them to infer the state of the other's belief system and to understand some of the other's ambiguous utterances. In the proposed method, the robot's belief system consists of two parts: a shared belief function, which expresses the shared belief assumed by the robot, and a global confidence function, which represents the degree of coincidence between the user's shared belief and the robot's. The shared belief function is composed of a set of weighted belief modules, each of which represents a concept such as motion, object, or spoken language. The global confidence function outputs the predicted probability that each other's utterances are understood correctly. The belief system is learned incrementally and online through human-robot interaction with objects. By learning the global confidence function, the robot becomes capable of inferring the state of the user's belief system and of predicting this probability, and it adaptively generates utterances according to the situation and to the degree of coincidence of the shared belief, e.g., by increasing or decreasing the number of words. Through interaction based on the generated utterances, the robot in turn updates the global confidence function; in this way, the user and the robot adaptively form mutually shared beliefs. The validity of the proposed method is demonstrated through experiments under various conditions.
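To make the structure described above concrete, the following is a minimal sketch, in Python, of a belief system built from weighted belief modules, a global confidence function, and confidence-driven utterance-length selection. The module names, the weighted-sum form of the shared belief function, the logistic form of the confidence function, and the online update rule are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of the belief system outlined in the abstract.
# Forms and parameters are assumptions for illustration only.
import math
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BeliefModule:
    """One concept model (e.g., motion, object, spoken language) scoring an interpretation."""
    name: str
    score: Callable[[dict], float]  # score of an utterance/scene interpretation
    weight: float                   # contribution to the shared belief function


class BeliefSystem:
    def __init__(self, modules: List[BeliefModule], conf_a=1.0, conf_b=0.0, lr=0.1):
        self.modules = modules
        # Parameters of an assumed logistic global confidence function.
        self.conf_a, self.conf_b = conf_a, conf_b
        self.lr = lr

    def shared_belief(self, interpretation: dict) -> float:
        """Shared belief function: weighted sum of belief-module scores."""
        return sum(m.weight * m.score(interpretation) for m in self.modules)

    def global_confidence(self, interpretation: dict) -> float:
        """Predicted probability that the utterance is understood correctly."""
        z = self.conf_a * self.shared_belief(interpretation) + self.conf_b
        return 1.0 / (1.0 + math.exp(-z))

    def choose_utterance_length(self, interpretation: dict, max_words=6) -> int:
        """Use fewer words when confidence is high, more when it is low."""
        p = self.global_confidence(interpretation)
        return max(1, round(max_words * (1.0 - p)))

    def update_confidence(self, interpretation: dict, understood: bool) -> None:
        """Online logistic-style update after observing whether the user understood."""
        p = self.global_confidence(interpretation)
        err = (1.0 if understood else 0.0) - p
        s = self.shared_belief(interpretation)
        self.conf_a += self.lr * err * s
        self.conf_b += self.lr * err
```

In this sketch, higher predicted confidence leads to shorter, more elliptical utterances, and each interaction outcome feeds back into the confidence parameters, which is one way the adaptive behavior described in the abstract could be realized.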