Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
24th (2010)
Session ID : 1J1-OS13-4

Detection of Robot-Directed Speech by Situated Understanding in Physical Interaction
Authors: [in Japanese]

Abstract

In this paper, we propose a novel method for a robot to detect robot-directed speech, i.e., to distinguish speech that users address to the robot from speech addressed to other people or to themselves. The originality of this work is the introduction of a Multimodal Semantic Confidence (MSC) measure, which is used for domain classification of input speech by deciding whether the speech can be interpreted as a feasible action under the current physical situation in an object manipulation task. This measure is calculated by integrating speech, object, and motion confidence scores with weights optimized by logistic regression. We then integrate this measure with gaze tracking and conduct experiments under conditions of natural human-robot interaction. Experimental results show that the proposed method achieved average recall and precision rates of 94% and 96%, respectively, for robot-directed speech detection.
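The abstract describes the MSC computation only at a high level. The following is a minimal sketch in Python, assuming a standard logistic-regression combination of the three confidence scores and a simple AND rule for the gaze cue; the paper's actual features, training data, and integration scheme may differ, and all numbers and function names below are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: rows are [speech_conf, object_conf, motion_conf];
# labels are 1 for robot-directed (RD) speech, 0 otherwise. Values invented.
X_train = np.array([
    [0.92, 0.85, 0.78],   # RD: utterance maps to a feasible action
    [0.88, 0.80, 0.70],
    [0.75, 0.20, 0.15],   # not RD: no feasible object/motion interpretation
    [0.60, 0.10, 0.05],
])
y_train = np.array([1, 1, 0, 0])

# Logistic regression learns the weights that combine the three confidences.
clf = LogisticRegression()
clf.fit(X_train, y_train)

def msc_score(speech_conf, object_conf, motion_conf):
    """Learned probability that the utterance is robot-directed."""
    return clf.predict_proba([[speech_conf, object_conf, motion_conf]])[0, 1]

def is_robot_directed(speech_conf, object_conf, motion_conf,
                      gaze_on_robot, threshold=0.5):
    """Combine the MSC score with a gaze cue (assumption: simple AND rule)."""
    return gaze_on_robot and msc_score(speech_conf, object_conf, motion_conf) >= threshold

print(is_robot_directed(0.9, 0.82, 0.75, gaze_on_robot=True))   # likely True
print(is_robot_directed(0.7, 0.15, 0.10, gaze_on_robot=True))   # likely False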

© 2010 The Japanese Society for Artificial Intelligence