We are developing a mobile robot, called Jijo-2, that provides office services such as answering queries about people's locations, route guidance, and delivery tasks. To interact smoothly with people in the office, Jijo-2 is expected to conduct natural spoken conversation. This paper describes the dialogue techniques implemented on our Jijo-2 office robot: a noise-free voice acquisition system using a microphone array, inference of under-specified referents and zero pronouns using attentional states, and context-sensitive construction of semantic frames from fragmented utterances. The behavior of the dialogue system, integrated with sound-source detection, navigation, and face-recognition vision, is demonstrated through real dialogue examples in a real office.