Abstract
It is likely that in the near future, human-symbiotic robots will share people's living spaces. Since these environments contain many objects that cause visual occlusion, situations in which a robot cannot see an object that a user can, and vice versa, will often arise. In such situations, a human-symbiotic robot should ideally interact with the user while taking into account the differences between their fields of view. We anticipate that such a "considerate" robot will seem friendlier and provide more pleasant interactions. In this paper, we aim to test this anticipation. First, we propose a robot that estimates both the user's and its own fields of view and thus behaves appropriately by being aware of the difference between their perceptions. The robot estimates the orientation of the user's head and the structure of the surrounding environment using a stereo camera. By combining these results, it determines the user's and its own fields of view: that is, what each of them can and cannot see. Next, we carry out experiments in which participants who observe interactions with the proposed robot subjectively evaluate their impressions of its behaviors. The experimental results show that the proposed robot, which can infer what the user can see and recognize the differences between their viewpoints, is perceived as more "companionable." This ability is likely to be one of the basic requirements for achieving interactions with robots and other intelligent systems that people interpret as "friendly."