Abstract
We propose a method for improving the accuracy of reconstructed shape. When the rigid motion of the target object is known, the volume intersections can be integrated over a time sequence. This integration is equivalent to increasing the number of cameras. The rigid motion of the target is estimated by tracking feature points on each intersection. Landmarks on the intersection are adopted as feature points because they can be observed from any position and any direction. In the experiments, we reconstruct the shape of a simulated model. The results show that the integrated intersection yields better shape accuracy than an intersection obtained at a single time instant.
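The integration step can be pictured as a voxel-wise AND across frames after each frame's volume intersection is aligned into a common reference frame using the estimated rigid motion. The following Python sketch illustrates this idea only; the grid layout, function names, and the (R, t) convention are assumptions for the example, not the authors' implementation.

    # Illustrative sketch: intersect per-frame voxel hulls in a reference frame,
    # given an estimated rigid motion (R_k, t_k) that maps reference-frame
    # points into the object pose at frame k. All names are hypothetical.
    import numpy as np

    def integrate_hulls(hulls, motions, origin, voxel_size):
        """hulls:   list of boolean 3-D occupancy grids, one per frame
           motions: list of (R, t), R a 3x3 rotation, t a 3-vector
           origin, voxel_size: geometry shared by every grid (assumed)"""
        shape = hulls[0].shape
        # Voxel-centre coordinates of the reference grid, shape (N, 3)
        idx = np.indices(shape).reshape(3, -1).T
        centres = origin + (idx + 0.5) * voxel_size

        integrated = np.ones(shape, dtype=bool)
        for hull, (R, t) in zip(hulls, motions):
            # Move reference voxels into the object pose of frame k
            pts = centres @ R.T + t
            # Nearest-neighbour lookup into that frame's occupancy grid
            j = np.floor((pts - origin) / voxel_size).astype(int)
            inside = np.all((j >= 0) & (j < np.array(shape)), axis=1)
            occ = np.zeros(len(pts), dtype=bool)
            occ[inside] = hull[tuple(j[inside].T)]
            # Keep only voxels occupied in every aligned frame
            integrated &= occ.reshape(shape)
        return integrated

Because each frame contributes silhouette constraints from a different relative viewpoint, intersecting the aligned hulls carves the volume as if additional cameras had observed the object, which is the sense in which the integration is equivalent to increasing the number of cameras.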