Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th Annual Conference (2020)
Session ID : 2G6-ES-3-03

QOL Estimation based on Multimodal Learning through Interaction with a Communication Agent
*Satoshi NAKAGAWA, Shogo YONEKURA, Hoshinori KANAZAWA, Satoshi NISHIKAWA, Yasuo KUNIYOSHI
Abstract

When a monitoring system or a communication robot interacts with elderly users in a welfare setting, it is important to estimate the user's state and to generate behavior based on it. In the field of elderly welfare, quality of life (QOL) is a useful indicator because it covers not only physical suffering but also mental state and social activity in a comprehensive manner. In this study, we propose a QOL estimation approach that integrates facial expressions, head movements, and eye movements observed during interaction with a communication agent. To this end, we implemented a communication agent and constructed a database from information collected through communication experiments with human participants. We then implemented a multimodal estimator that incorporates C3D, a three-dimensional convolutional network, and trained it with head-fluctuation and gaze feature extraction. Our results show that multimodal learning integrating facial expressions, head fluctuations, and eye movements achieved lower estimation error than single-modality learning using each feature separately. From these experimental results, we conclude that the proposed system is sufficiently accurate to serve as a QOL estimator.
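The fusion of the three modalities described above can be sketched as a late-fusion pipeline: each modality stream is encoded into a fixed-length embedding, the embeddings are concatenated, and a regression head maps the fused vector to a QOL score. The sketch below is illustrative only; the random-projection "encoder", feature dimensions, and output weights are stand-ins for the paper's C3D-based implementation, which is not published here.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(clip, dim=16):
    # Stand-in for a C3D-style encoder: average over the time axis,
    # then apply a fixed random projection to a dim-length embedding.
    w = rng.standard_normal((clip.shape[-1], dim))
    return clip.mean(axis=0) @ w

# Three synthetic modality streams, each shaped (frames, raw_feature_dim):
face = rng.standard_normal((30, 8))   # facial-expression features
head = rng.standard_normal((30, 4))   # head-fluctuation features
gaze = rng.standard_normal((30, 2))   # eye-movement features

# Late fusion: concatenate per-modality embeddings into one vector.
fused = np.concatenate([extract_features(face),
                        extract_features(head),
                        extract_features(gaze)])

# Hypothetical linear regression head producing a scalar QOL estimate.
w_out = rng.standard_normal(fused.shape[0])
qol_score = float(fused @ w_out)
```

In a trained system the projection and regression weights would be learned jointly, so that errors in one modality can be compensated by the others, which is the intuition behind the reported gain of multimodal over single-modality learning.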

© 2020 The Japanese Society for Artificial Intelligence