Abstract
This report describes a collaborative object search between a robot at a remote site and a human operator. The robot can autonomously search for a target object using an RGB-D camera. The search proceeds in three phases. First, the robot finds a desk and moves to it. Next, it extracts candidate object regions based on color. Finally, it recognizes the target object using SIFT features and estimates its pose. The operator examines the remote scene through a fish-eye camera mounted on the robot and gives the robot advice, for example, on the location of the target object. The robot then follows this advice to approach the target object. Experimental results show the effectiveness of the collaborative object search in comparison with a fully autonomous one.