An Embodied Conversational Agent (ECA) is a lifelike virtual human capable of carrying on conversations with humans by both understanding and producing verbal and nonverbal behaviors. Since gestures are frequently integrated with speech in human conversation, realizing gestures is an important step toward the ECA as a means of human-computer interaction (HCI). In this paper, we propose an architecture of an ECA for presentations. We analyze the gestures employed by lecturers at symposiums, and we synthesize gestures using an animated character with multiple degrees of freedom (DOFs). We believe the gestures synthesized by our system appear more natural. Our work on gestures can provide a basis for building an embodied conversational presentation agent.