Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Determining Avatar's Direction Giving Gestures based on Linguistic and Spatial Information in Metaverse
Takeo Tsukamoto, Yukiko Nakano

2012 Volume 17 Issue 2 Pages 79-89

Abstract

This paper proposes a direction-giving avatar system for the Metaverse that automatically generates direction-giving gestures based on linguistic information obtained from the user's chat text input and spatial information in the Metaverse. First, we conduct an experiment to collect a direction-giving conversation corpus. Then, using the collected corpus, we analyze the relationship between the proxemics of the conversation participants and the positions of their direction-giving gestures. Next, we analyze the relationship between linguistic features in the direction giver's utterances and the shapes of their spatial gestures. We define five categories of gesture concepts and four gesture shape parameters, and analyze the relationship between the gesture concepts and the set of gesture parameters. Based on these results, we propose an automatic gesture decision mechanism and implement a direction-giving avatar system in the Metaverse.
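The gesture decision step described above can be sketched as a lookup from gesture concept to shape parameters, combined with a simple proxemics-based placement rule. The abstract states that there are five gesture concepts and four shape parameters but does not name them, so every concept label, parameter name, and the midpoint placement heuristic below is a hypothetical illustration, not the paper's actual mapping:

```python
from dataclasses import dataclass

# Illustrative set of four gesture shape parameters; the paper's actual
# parameter names are not given in the abstract.
@dataclass
class GestureShape:
    hand: str          # which hand performs the gesture
    palm: str          # palm orientation
    trajectory: str    # movement path of the hand
    extent: str        # how far the arm extends

# Hypothetical lookup: one entry per gesture concept (five in the paper).
CONCEPT_SHAPES = {
    "direction": GestureShape("right", "vertical", "point",    "full"),
    "movement":  GestureShape("right", "down",     "straight", "mid"),
    "turn":      GestureShape("right", "vertical", "arc",      "mid"),
    "landmark":  GestureShape("right", "up",       "hold",     "mid"),
    "region":    GestureShape("both",  "down",     "circle",   "full"),
}

def decide_gesture(concept, giver_pos, receiver_pos):
    """Pick a shape from the concept table and place the gesture in the
    space between the two participants (a stand-in for the paper's
    corpus-derived proxemics rules)."""
    shape = CONCEPT_SHAPES[concept]
    gx, gy = giver_pos
    rx, ry = receiver_pos
    # Midpoint placement is a simplification of the corpus analysis.
    position = ((gx + rx) / 2.0, (gy + ry) / 2.0)
    return shape, position

shape, pos = decide_gesture("turn", (0.0, 0.0), (2.0, 0.0))
```

In the described system, the concept would be extracted from the user's chat text and the positions taken from avatar coordinates in the Metaverse; here they are passed in directly for simplicity.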

© 2012 THE VIRTUAL REALITY SOCIETY OF JAPAN