2012 Volume 17 Issue 2 Pages 79-89
This paper proposes a direction-giving avatar system for the Metaverse that automatically generates direction-giving gestures based on linguistic information obtained from the user's chat text input and spatial information in the Metaverse. First, we conduct an experiment to collect a corpus of direction-giving conversations. Then, using the collected corpus, we analyze the relationship between the proxemics of the conversation participants and the position of their direction-giving gestures. Next, we analyze the relationship between linguistic features in the direction giver's utterances and the shape of their spatial gestures. We define five categories of gesture concepts and four gesture shape parameters, and analyze the relationship between the gesture concepts and the set of gesture parameters. Based on these results, we propose an automatic gesture decision mechanism and implement a direction-giving avatar system in the Metaverse.
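The gesture decision mechanism described above maps a gesture concept, inferred from the chat text, to a set of gesture shape parameters. A minimal sketch of such a mapping is shown below; note that the concept names and the four parameter names are illustrative assumptions for this sketch, not the categories and parameters defined in the paper.

```python
# Hypothetical sketch of one gesture decision step: look up shape
# parameters for a gesture concept recognized in the user's chat text.
# Concept and parameter names are illustrative, not the paper's own.

from dataclasses import dataclass


@dataclass
class GestureShape:
    # Four illustrative shape parameters for a spatial gesture.
    handedness: str  # "left", "right", or "both"
    height: str      # vertical position of the hand, e.g. "chest", "head"
    extent: float    # arm extension, 0.0 (close to body) .. 1.0 (fully extended)
    motion: str      # stroke type, e.g. "point", "trace", "sweep"


# Illustrative mapping for five assumed gesture concept categories.
CONCEPT_TO_SHAPE = {
    "direction": GestureShape("right", "chest", 1.0, "point"),
    "landmark":  GestureShape("right", "head",  0.8, "point"),
    "route":     GestureShape("right", "chest", 0.6, "trace"),
    "region":    GestureShape("both",  "chest", 0.5, "sweep"),
    "distance":  GestureShape("both",  "chest", 0.4, "trace"),
}


def decide_gesture(concept: str) -> GestureShape:
    """Return the shape parameters assigned to a recognized gesture concept."""
    return CONCEPT_TO_SHAPE[concept]
```

In a full system, the lookup would be conditioned not only on the concept but also on the spatial context (e.g. the proxemics of the participants), which the paper analyzes separately.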