Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: May 27, 2020 - May 30, 2020
Non-verbal information is important for understanding the situation behind speech. The purpose of this research is to develop robot partners that provide recommendations based on non-verbal communication. In this paper, we propose a system that recognizes surrounding objects and human facial expressions in order to recommend appropriate topics to users. The proposed system consists of two convolutional neural networks (CNNs) that perform object recognition and facial expression detection, respectively. The facial expressions and the objects present in the surroundings are used as non-verbal information, and the provided topics are organized hierarchically. We first evaluate the proposed system in terms of object and facial expression recognition accuracy. Using these two kinds of non-verbal information, the proposed system is able to provide recommendations that are relevant to the user's situation.
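The abstract outlines the overall architecture: two CNN classifiers whose outputs are combined to select a topic from a hierarchical topic set. The sketch below is not the authors' implementation; it is a minimal Python/PyTorch illustration in which `SmallCNN`, the label lists, and the `TOPIC_TREE` dictionary are all hypothetical stand-ins for the paper's object network, expression network, and topic hierarchy.

```python
# Minimal sketch (assumed structure, not the paper's code): two CNNs
# classify the scene object and the user's facial expression, and the
# pair of labels indexes a hierarchical topic dictionary.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN classifier standing in for the object / expression networks."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical label sets and topic hierarchy, purely for illustration.
OBJECT_LABELS = ["book", "cup", "ball"]
EXPRESSION_LABELS = ["happy", "neutral", "sad"]
TOPIC_TREE = {
    "book": {"happy": "recommend a new novel",
             "neutral": "ask about reading habits",
             "sad": "suggest a light essay"},
    "cup":  {"happy": "talk about favourite drinks",
             "neutral": "suggest a coffee break",
             "sad": "offer to make tea"},
    "ball": {"happy": "propose playing outside",
             "neutral": "chat about sports news",
             "sad": "suggest light exercise"},
}

object_net = SmallCNN(len(OBJECT_LABELS))
expression_net = SmallCNN(len(EXPRESSION_LABELS))

def recommend_topic(scene_img, face_img):
    """Combine the two CNN predictions into a single topic recommendation."""
    with torch.no_grad():
        obj = OBJECT_LABELS[object_net(scene_img).argmax(1).item()]
        expr = EXPRESSION_LABELS[expression_net(face_img).argmax(1).item()]
    return obj, expr, TOPIC_TREE[obj][expr]

if __name__ == "__main__":
    scene = torch.randn(1, 3, 64, 64)  # placeholder camera frames
    face = torch.randn(1, 3, 64, 64)
    print(recommend_topic(scene, face))
```

In this reading, the hierarchy is realized as a nested lookup keyed first by the recognized object and then by the detected expression; the actual topic structure and fusion rule in the paper may differ.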