Organizer: The Japan Society of Mechanical Engineers
Conference: Robotics and Mechatronics Conference 2020
Dates: 2020/05/27 - 2020/05/30
It is important to consider non-verbal information when interpreting the situation behind speech. The purpose of this research is to develop robot partners that provide recommendations based on non-verbal communication. In this paper, we propose a system that recognizes surrounding objects and human facial expressions in order to recommend appropriate topics to users. The proposed system consists of two convolutional neural networks (CNNs) that perform object recognition and facial expression detection, respectively. The facial expressions and the objects present in the surroundings are used as non-verbal information, and the provided topics are organized hierarchically. We first evaluate the proposed system in terms of its object and facial expression recognition accuracy. Using these two types of non-verbal information, the proposed system is able to provide recommendations that are relevant to the user's situation.
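The abstract does not specify the network architectures, label sets, or the structure of the topic hierarchy, so the following is only a minimal sketch of the described pipeline: two CNN classifiers (one for objects, one for facial expressions) whose output labels are combined to look up a hierarchical list of topics. The networks, the `TOPIC_HIERARCHY` table, and the `recommend` function are hypothetical placeholders, not the authors' implementation.

```python
import torch
from torchvision import models

# CNN 1: object recognition (here: a ResNet-18 backbone from torchvision;
# in practice it would be trained or fine-tuned on the target object classes).
object_net = models.resnet18().eval()

# CNN 2: facial expression recognition (here: a small illustrative CNN with
# a hypothetical 7-class output, e.g. the common basic-emotion labels).
expression_net = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
    torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(32, 7),
).eval()

# Hypothetical hierarchical topic table: (object label, expression label)
# maps to topics ordered from general to specific.
TOPIC_HIERARCHY = {
    ("cup", "happy"): ["drinks", "coffee", "a nearby cafe"],
    ("book", "neutral"): ["reading", "novels", "a recent release"],
}

def recommend(object_label: str, expression_label: str) -> list:
    """Return a general-to-specific topic list for the detected label pair."""
    return TOPIC_HIERARCHY.get((object_label, expression_label), ["small talk"])

if __name__ == "__main__":
    # In the full system the labels would come from the two CNNs above;
    # here they are hard-coded to show only the recommendation step.
    print(recommend("cup", "happy"))
```

The design choice sketched here keeps the two recognition streams independent and fuses them only at the symbolic level, which matches the abstract's description of using object and expression labels jointly to select hierarchical topics.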