Abstract
In this paper, we present a new visual tracking algorithm based on resampling images from virtual viewpoints. An area depth sensor captures point-cloud data of a real 3D scene, and virtual viewpoints can be placed anywhere in the reconstructed scene space. If a virtual viewpoint is placed on the motion vector of the tracking target, the target appears at rest in the resampled 2D image, which makes it easy to track. We have implemented this concept, and experimental results indicate that our approach is feasible.
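The abstract only outlines the idea of resampling a point cloud from a virtual viewpoint placed on the target's motion vector. The following is a minimal sketch of that resampling step, not the authors' implementation; all names (resample_from_virtual_view, the focal length, image size, and the example positions) are illustrative assumptions.

```python
# Sketch: project a point cloud onto a virtual image plane whose viewpoint
# lies on the target's motion vector, so the target stays nearly stationary
# in the resampled image from frame to frame. Illustrative only.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation/translation for a camera at `eye` looking at `target`."""
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])            # rows are the camera axes in world coordinates
    t = -R @ eye
    return R, t

def resample_from_virtual_view(points, eye, target, f=500.0, size=(480, 640)):
    """Project an (N, 3) point cloud into a depth image seen from the virtual viewpoint."""
    R, t = look_at(eye, target)
    cam = points @ R.T + t             # transform points into the camera frame
    cam = cam[cam[:, 2] > 1e-6]        # keep only points in front of the camera
    u = (f * cam[:, 0] / cam[:, 2] + size[1] / 2).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + size[0] / 2).astype(int)
    ok = (u >= 0) & (u < size[1]) & (v >= 0) & (v < size[0])
    img = np.full(size, np.inf)
    order = np.argsort(-cam[ok, 2])    # farthest first, so nearer points overwrite (z-buffer)
    img[v[ok][order], u[ok][order]] = cam[ok, 2][order]
    return img

# Hypothetical usage: the virtual viewpoint is placed on the target's motion line
# and moves with the target, so the target appears at rest in the resampled image.
target_pos = np.array([0.0, 0.0, 2.0])                 # assumed target position
motion_vec = np.array([0.1, 0.0, 0.0])                 # assumed target motion per frame
eye = target_pos - 1.5 * motion_vec / np.linalg.norm(motion_vec)
cloud = np.random.rand(10000, 3) * np.array([2, 2, 1]) + np.array([-1, -1, 1.5])
depth_img = resample_from_virtual_view(cloud, eye, target_pos)
```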