Abstract
Three-dimensional position information is essential for providing appropriate support to humans in an intelligent space. The distributed cameras in an intelligent space must be calibrated before three-dimensional positions can be acquired. Because calibrating many cameras is time-consuming, a camera calibration method that enables easy construction of an intelligent space is needed. This paper proposes an automatic camera calibration method based on image features and on 3D information from a minimal set of pre-calibrated cameras. The image features and their 3D positions are shared with uncalibrated cameras over a network, and these cameras are then calibrated using the common image features. This paper presents first experimental results using SIFT (Scale-Invariant Feature Transform) as the image features.
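The core step described above, calibrating an uncalibrated camera from image features whose 3D positions are already known, can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes the matched features (e.g. SIFT keypoints shared over the network) have already yielded 2D-3D correspondences, and it estimates the camera projection matrix with a plain Direct Linear Transform (DLT).

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix P from >= 6
    3D-2D point correspondences via the Direct Linear Transform.

    points_3d: (N, 3) world coordinates of matched features
    points_2d: (N, 2) pixel coordinates of the same features
               in the uncalibrated camera's image
    """
    assert len(points_3d) == len(points_2d) >= 6
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Two linear equations per correspondence in the 12
        # unknown entries of P (defined up to scale).
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The smallest right singular vector of A gives P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, points_3d):
    """Project 3D points with P and dehomogenize to pixels."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic check: build a known camera, project some points,
# then recover the projection matrix from the correspondences.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.2], [5.0]])])
P_true = K @ Rt
pts3d = rng.uniform(-1, 1, (10, 3))
pts2d = project(P_true, pts3d)
P_est = dlt_projection_matrix(pts3d, pts2d)
reproj_err = np.max(np.abs(project(P_est, pts3d) - pts2d))
```

With noise-free correspondences the reprojection error of the recovered matrix is essentially zero; with real SIFT matches one would add normalization and a robust estimator (e.g. RANSAC) to reject mismatches.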