2004, Vol. 19, No. 1, pp. 47-56
This paper addresses a problem in human-machine interaction that arises when the reference of a spoken word to an object is ambiguous. We study the joint activity of several agents in which a remote robot finds an object while communicating with the user over a voice-only channel, and we focus on how the robot disambiguates the reference of an uttered word or phrase to the target object. For example, the word "cup" may refer to a "teacup", a "coffee cup", or even a "glass" for different users in some situations; this mapping (hereafter, the "object reference") is user and situation dependent. We conducted two experiments. The first, with 12 subjects, confirmed that a user model of object references is significant. The second, with 20 subjects, showed that the object reference is also sensitive to the situation. In addition to the ambiguity of the object reference, an actual system must cope with two further sources of uncertainty: speech recognition and image recognition. We present a belief-network-based probabilistic reasoning system that determines the object reference under these uncertainties. The resulting system demonstrates that the number of interactions needed to establish a common reference decreases as the user model is refined.
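The core inference described in the abstract can be illustrated with a minimal sketch: a discrete Bayesian update that combines a user-specific prior over candidate objects (the user model) with a likelihood of the recognized word given each object (folding in recognition noise). All object names, priors, and likelihood values below are illustrative assumptions, not data from the paper, and the single-variable update is a simplification of a full belief network.

```python
# Hypothetical sketch of object-reference disambiguation.
# The numbers are made up for illustration; the paper's actual
# belief network also models speech- and image-recognition nodes.

def posterior(prior, likelihood, word):
    """Compute P(object | word) proportional to P(word | object) * P(object)."""
    unnorm = {obj: prior[obj] * likelihood[obj].get(word, 0.0) for obj in prior}
    z = sum(unnorm.values())
    return {obj: p / z for obj, p in unnorm.items()}

# User-specific prior over candidate objects (the "user model").
prior = {"teacup": 0.5, "coffee cup": 0.3, "glass": 0.2}

# P(uttered word | intended object), per candidate object.
likelihood = {
    "teacup":     {"cup": 0.8, "glass": 0.0},
    "coffee cup": {"cup": 0.9, "glass": 0.0},
    "glass":      {"cup": 0.2, "glass": 0.9},
}

# For the utterance "cup", the posterior favors "teacup" under this
# prior, but "glass" retains nonzero probability, so the robot may
# still need a clarifying question when the posterior is not decisive.
print(posterior(prior, likelihood, "cup"))
```

As the user model (the prior) sharpens with repeated interaction, the posterior concentrates faster, which is one way to read the paper's finding that refining the user model reduces the number of interactions needed.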