Transactions of Human Interface Society (ヒューマンインタフェース学会論文誌)
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Regular Paper
E-VChat: A Video Communication System in which a Speech-Driven Embodied Entrainment Character Linked to the Talker's Head Movements Is Superimposed in a Face-to-Face Scene
Yutaka Ishii, Tomohiro Takada, Tomio Watanabe

2012, Vol. 14, No. 4, pp. 467-476

Abstract

We previously proposed an embodied video communication system in which a human-type avatar called "VirtualActor," which represents interactive behavior, is superimposed on the speech partner's video image in a virtual face-to-face scene. The effectiveness of that system was demonstrated in a comparison experiment against a scene in which a reduced image of the talker's own video is superimposed on the other talker's video image by the picture-in-picture method. However, the system had some problems, such as the need for detailed adjustment of the video images and the lack of portability of the sensors. In this paper, we develop a headset-type motion-capture device that directly reflects the talker's head movements using an acceleration sensor and a gyro sensor, and employ a CG character that moves based on the talker's own motion and also generates motion automatically based on the on-off pattern of the talker's voice. Further, we propose the concept of an embodied video communication system in which this CG character is superimposed on the other talker's video image in a face-to-face scene, and develop a prototype called "E-VChat." A communication experiment with 12 pairs of subjects is performed to confirm the effectiveness of the E-VChat system using three communication modes: "Headset," "Headset + automatically generated motion as the talker's avatar," and "Headset + automatically generated motion as the talker's support agent." The results show that all tested communication modes are assessed affirmatively by sensory evaluation, and that the "Headset + automatically generated motion as the talker's avatar" mode is evaluated highly in a paired comparison. Finally, we develop a multiple-character E-VChat system that adds an audience character which nods in response to the talker's voice, and confirm the effectiveness of this system in an interview-style communication experiment.
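The abstract names two mechanisms without giving their algorithms: fusing the headset's acceleration and gyro sensors into head orientation, and triggering character motion from the on-off pattern of the talker's voice. The following is a minimal illustrative sketch of both ideas, not the authors' actual implementation: it uses a standard complementary filter for sensor fusion and a simple hypothetical rule (nod when speech resumes after a pause) in place of the paper's speech-driven motion-generation model; all names, thresholds, and frame conventions here are assumptions.

```python
import math

def fuse_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Complementary filter (illustrative): integrate the gyro's pitch
    rate for responsiveness, and correct the slow drift using the
    gravity direction estimated from the accelerometer (ax, ay, az)."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

class NodGenerator:
    """Toy stand-in for speech-driven motion generation: emit a nod
    when voice activity resumes after a sufficiently long pause, i.e.
    react only to the on-off pattern of the voice, not its content."""

    def __init__(self, on_threshold=0.1, pause_frames=10):
        self.on_threshold = on_threshold   # frame energy above this = "on"
        self.pause_frames = pause_frames   # frames of silence counted as a pause
        self.silence = pause_frames        # start as if a long pause preceded

    def step(self, frame_energy):
        """Feed one audio frame's energy; return True when a nod fires."""
        speaking = frame_energy > self.on_threshold
        nod = speaking and self.silence >= self.pause_frames
        self.silence = 0 if speaking else self.silence + 1
        return nod
```

For example, `NodGenerator(pause_frames=3)` fed the energies `[0.0, 0.0, 0.5, 0.5]` nods on the third frame (speech onset after a pause) but not on the fourth (continuing speech). A real system would run this per audio frame and blend the triggered nod with the head pose coming from `fuse_pitch`.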

© 2012 Human Interface Society