In daily personal communication, nonverbal expression plays an important role, and building nonverbal human interface systems is crucial for realizing a friendly information environment. In the future, computers will understand real human motions and generate human-mimetic motions in CG, i.e., a virtual mannequin, for interactive conversation. A structural model for understanding nonverbal expression in communication is proposed. The model is a functional combination of real motions, measured motion data, physical motion features, linguistic motion features, and the feelings conveyed by a player's motions. Focusing on head motions in human communication, motion-capture experiments were conducted in a laboratory, and motion feature indices were extracted from the sequential motion-capture data. Video images of human body motions were used in questionnaire surveys, from which the linguistic motion features and the feelings conveyed by the player's motions were extracted. Relationships among the motion feature indices, the linguistic motion features, and the feelings were analyzed by multivariate analyses. The results verify the validity of the mathematical approach based on the proposed structural model and analysis methods, and demonstrate the potential of this framework for developing nonverbal human interface technology.
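The multivariate analysis step described above can be sketched minimally as follows. This is an illustrative example only, not the paper's actual method or data: the feature names (nod amplitude, nod frequency, tilt range) and the questionnaire ratings are synthetic, and multiple linear regression is used as one representative multivariate technique relating motion feature indices to subjective impressions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion-capture-derived feature indices for head motions:
# each row is one recorded clip; columns might be nod amplitude,
# nod frequency, and tilt range (names are illustrative assumptions).
n_clips = 40
features = rng.normal(size=(n_clips, 3))

# Hypothetical mean questionnaire rating per clip (e.g., strength of a
# perceived feeling), generated here from a known linear relation plus
# noise purely so the demo has a recoverable structure.
true_weights = np.array([0.8, -0.3, 0.5])
ratings = features @ true_weights + 0.1 * rng.normal(size=n_clips)

# Multiple linear regression, one common multivariate analysis:
# estimate weights that best predict the ratings from the indices.
X = np.column_stack([features, np.ones(n_clips)])  # intercept column
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Coefficient of determination as a simple goodness-of-fit measure.
pred = X @ weights
ss_res = np.sum((ratings - pred) ** 2)
ss_tot = np.sum((ratings - ratings.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"estimated weights: {weights[:3].round(2)}, R^2 = {r2:.2f}")
```

In such an analysis, large estimated weights would indicate which physical motion features most strongly drive a given subjective impression, which is the kind of relationship the proposed structural model is meant to capture.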