Abstract
Multimodal synchronization is essential for virtual agents. This paper presents an effective approach to synchronizing gesture with speech using a motion graph technique. The basic idea is to treat the synchronization problem as a motion synthesis problem, in which the goal is to match the duration and timing of the gesture to those of the speech.