Abstract
An online approach is proposed to recover human body pose from 3D voxel data. The use of voxel data makes the estimation viewpoint-free, so no retraining is required for different multi-camera arrangements. Further advantages of our approach are speed and robustness, achieved with an example-based method in which posture labels are extracted from a large motion capture database. During the online process, only a similarity evaluation between the posture labels and the incoming voxel data is required. The similarity metric is formulated by introducing a histogram-based feature vector that represents the context of the 3D volume. Estimation stability is improved by a precomputed graphical model of motion, which smooths the estimated motion sequence. We demonstrate the speed and robustness of our approach through experiments on both synthetic and real image data.
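The following is a minimal sketch, in Python with NumPy, of the kind of online similarity evaluation the abstract describes: a histogram-based feature vector is computed from a voxel occupancy grid and compared against precomputed posture-label features. The binning scheme (radial distance and height around the body centroid), the chi-square distance, and all function names here are illustrative assumptions, not the paper's exact descriptor or metric.

```python
import numpy as np

def voxel_histogram_feature(voxels, n_radial=8, n_height=8):
    """Illustrative histogram feature for a binary voxel grid.
    Occupied voxels are binned by radial distance from the vertical
    centroid axis and by height (an assumed binning, not the paper's)."""
    occupied = np.argwhere(voxels > 0).astype(float)  # (N, 3) voxel coordinates
    if len(occupied) == 0:
        return np.zeros(n_radial * n_height)
    centroid = occupied.mean(axis=0)
    radial = np.linalg.norm(occupied[:, :2] - centroid[:2], axis=1)  # horizontal distance
    height = occupied[:, 2]
    hist, _, _ = np.histogram2d(
        radial, height,
        bins=[n_radial, n_height],
        range=[[0.0, radial.max() + 1e-6], [height.min(), height.max() + 1e-6]],
    )
    feat = hist.ravel()
    return feat / feat.sum()  # normalize so absolute volume does not dominate

def nearest_posture(query_feat, db_feats, db_labels):
    """Online step: one similarity evaluation between the query feature and
    each stored posture-label feature (chi-square distance assumed here)."""
    eps = 1e-10
    d = 0.5 * np.sum((db_feats - query_feat) ** 2 / (db_feats + query_feat + eps), axis=1)
    return db_labels[np.argmin(d)]
```

In this sketch the per-frame cost is a single pass over the database feature matrix, which is consistent with the speed claim; the motion-graph smoothing mentioned in the abstract would be applied on top of these per-frame matches and is not shown here.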