Proceedings of the Annual Conference of JSAI (The Japanese Society for Artificial Intelligence)
33rd Annual Conference, 2019
Session ID: 3J3-E-4-02

Eye-gaze in Social Robot Interactions
Grounding of Information and Eye-gaze Patterns
Koki IJUIN*, Kristiina JOKINEN, Tsuneo KATO, Seiichi YAMAMOTO

Abstract

This paper examines human-robot interactions, focusing on the use of eye-gaze patterns to evaluate the partner's understanding process. The goal of the research is to better understand how humans focus their attention when interacting with a robot, and to build a model of natural gaze patterns that improves the robot's engagement and interaction capabilities. The work is based on the AIST Multimodal Corpus, which contains human-human and human-robot interactions on two different activities: instruction dialogues and story-telling dialogues. Preliminary experiments show that eye-gaze patterns differ between expected and non-expected responses, which affects the participants' understanding and grounding of the presented information. The paper corroborates the hypothesis that eye-gaze patterns can be used to predict the grounding process and to inform the speaker how to proceed with the presentation, so as to support the partner's understanding and the building of mutual knowledge. Some consideration is given to future improvements in methodology.

© 2019 The Japanese Society for Artificial Intelligence