Abstract
Simulated experience and the reliving of real-life scenes are prime examples of the use of three-dimensional virtual space. In particular, objects that are difficult to manipulate in the real world, such as cultural assets, are well suited to being experienced through a tangible 3D model. However, models produced by current 3D reconstruction techniques are static and cannot support dynamic interactions such as the manipulation of tools. In this study, we propose a method that reproduces a three-dimensional dynamic scene from a static point cloud of an object, together with a sample video of the object's motion and a rotation axis specified by the user. We applied the method to point clouds generated from mesh models and to a point cloud of an actual cultural asset, and reproduced visually plausible articulated motions.
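The core geometric operation implied here, posing a segment of a point cloud about a user-specified rotation axis, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function name, hinge parameters, and the choice of Rodrigues' rotation formula are hypothetical stand-ins for demonstration.

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle_rad):
    """Rotate 3D points about an arbitrary axis via Rodrigues' rotation
    formula. `axis_point` is any point on the axis; `axis_dir` is its
    direction (need not be unit length). Hypothetical helper."""
    k = axis_dir / np.linalg.norm(axis_dir)   # unit axis direction
    p = points - axis_point                   # move axis to the origin
    cos_t, sin_t = np.cos(angle_rad), np.sin(angle_rad)
    # Rodrigues: p' = p cos(t) + (k x p) sin(t) + k (k . p)(1 - cos(t))
    rotated = (p * cos_t
               + np.cross(k, p) * sin_t
               + np.outer(p @ k, k) * (1.0 - cos_t))
    return rotated + axis_point               # move axis back

# Illustrative use: swing the segmented "moving" part of a point cloud
# by 30 degrees about a user-specified hinge (all values are examples).
moving_part = np.random.rand(1000, 3)         # stand-in for segmented points
hinge_point = np.array([0.0, 0.0, 0.0])
hinge_dir = np.array([0.0, 0.0, 1.0])
posed = rotate_about_axis(moving_part, hinge_point, hinge_dir, np.deg2rad(30))
```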