Abstract
Vision plays an important role in generating motions such as walking and reaching for an object. However, it is not clear which visual features provide useful information for motion generation. In this paper, we examine the visual features used for coordinated eye and hand movement. Our system is aimed at applications to retinal prostheses. Subjects in our experiment were asked to look at a low-resolution image, converted from an image captured by a head-mounted camera, and to reach for a target. We compared reaching accuracy between two different converters: one based on a visual saliency model and one based on brightness alone. The experimental results showed that the saliency map extracted the important features and provided sight that was robust to changes in the environment.
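To make the comparison concrete, the sketch below illustrates the two kinds of converters the abstract contrasts: a baseline that block-averages brightness onto a low-resolution phosphene grid, and a saliency-weighted variant. The saliency computation here uses the spectral-residual method (Hou & Zhang, 2007) purely as a stand-in; the paper's actual saliency model, the grid size, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def brightness_phosphenes(gray, grid=(16, 16)):
    """Baseline converter: average brightness per phosphene cell."""
    h, w = gray.shape
    gh, gw = grid
    # Trim so the image divides evenly into the phosphene grid,
    # then block-average each cell.
    trimmed = gray[: h - h % gh, : w - w % gw]
    blocks = trimmed.reshape(gh, trimmed.shape[0] // gh,
                             gw, trimmed.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map, one simple saliency model;
    the paper's converter may use a different model (e.g. Itti et al.)."""
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Smooth the log-amplitude spectrum with a 3x3 mean filter.
    pad = np.pad(log_amp, 1, mode="edge")
    smooth = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - smooth
    # Reconstruct from the residual spectrum; peaks mark salient regions.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def saliency_phosphenes(gray, grid=(16, 16)):
    """Saliency-based converter: weight brightness by saliency before
    block-averaging, so conspicuous regions dominate the low-res image."""
    return brightness_phosphenes(gray * spectral_residual_saliency(gray), grid)
```

Under this reading, the experiment asks whether the saliency-weighted grid lets subjects localize a reaching target more accurately than the plain brightness grid when the scene changes.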