Abstract
A deep-level emotion understanding method and its visualization are proposed for multi-agent communication such as human-robot communication, where customized learned knowledge of an observed agent is used together with input information observed by Kinect. The aim is agent-dependent emotion understanding that exploits knowledge customized to the observed agent, in contrast to ordinary surface-level emotion understanding that relies only on visual/acoustic/distance information without any customized knowledge; the resulting emotion is visualized in a shape-color-size diagram. In experiments on demonstration scenarios in which a company employee's emotion is understood by a secretary eye robot equipped with Kinect, it is confirmed that the proposed method provides deep-level emotion understanding distinct from ordinary surface-level emotion understanding. The proposed method is planned to be applied as part of the emotion understanding module in the demonstration experiments of an ongoing robotics research project entitled "Multi-Agent Fuzzy Atmosfield".