Abstract
Visual functions play an important role for a robot that observes partner robots and performs an assembly task cooperatively with them. To develop an effective visual function for robots, we investigate features of the human visual scanpath in scenes of robot hand movement. Human regions-of-interest are measured in eye-movement recording experiments and compared using a positional similarity index based on scanpath theory. This study also examines how well bottom-up image-processing algorithms can predict the human regions-of-interest, by comparing them with the algorithmic regions-of-interest that these algorithms generate. The results suggest that bottom-up algorithms whose support size is smaller than that of the fovea predict the human regions-of-interest well.
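As a minimal illustrative sketch only (the abstract does not specify the paper's actual index), a positional similarity between a human ROI set and an algorithmic ROI set could be scored as the symmetric mean nearest-neighbour distance between fixation centres, mapped into [0, 1]. All names, and the normalisation radius standing in for a rough foveal extent in pixels, are assumptions for illustration.

```python
import math

def nearest_distance(point, others):
    """Euclidean distance from `point` to its nearest neighbour in `others`."""
    return min(math.dist(point, q) for q in others)

def positional_similarity(rois_a, rois_b, radius=50.0):
    """Hypothetical positional similarity of two ROI sets (pixel coordinates).

    `radius` is an assumed normalisation scale (e.g. roughly the foveal
    extent in pixels); mean distances at or beyond it score 0.
    """
    d_ab = sum(nearest_distance(p, rois_b) for p in rois_a) / len(rois_a)
    d_ba = sum(nearest_distance(q, rois_a) for q in rois_b) / len(rois_b)
    mean_d = 0.5 * (d_ab + d_ba)
    return max(0.0, 1.0 - mean_d / radius)

# Example: human ROIs vs. ROIs produced by a bottom-up algorithm.
human = [(120, 80), (200, 150), (310, 220)]
algorithmic = [(125, 85), (195, 160), (400, 300)]
print(f"similarity = {positional_similarity(human, algorithmic):.3f}")
```

A symmetric measure is used here so that neither ROI set is privileged; the paper's own index, derived from scanpath theory, may weight position, order, or duration differently.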