Abstract
Multimodal anthropomorphic agent interfaces are an effective means of human communication technology. This paper proposes a simple utterance model of mouth shape for applying a multimodal anthropomorphic agent to small-scale system environments. We regard the sequence of mouth-shape heights and widths, measured from video of a Japanese newscaster's mouth movements, as a signal waveform. The frequency components contained in these waveforms are analyzed by Fourier transform. We construct a simple utterance model: a typical waveform of mouth-shape height and width reconstructed from only nine or ten frequency components. The validity of the proposed utterance model is verified by comparison with two conventional simple utterance animation models.
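
As a rough illustration of the idea described above (a sketch, not the paper's actual implementation), the following Python snippet reconstructs a measured mouth-shape waveform from only its lowest-frequency Fourier components; the function name, sample values, and component count are assumptions chosen for illustration.

```python
import numpy as np

def simplify_waveform(samples, n_components=10):
    """Reconstruct a mouth-shape waveform (height or width per video frame)
    from only its lowest n_components Fourier coefficients."""
    spectrum = np.fft.rfft(samples)                      # frequency components of the measured waveform
    truncated = np.zeros_like(spectrum)
    truncated[:n_components] = spectrum[:n_components]   # keep only the low-frequency terms
    return np.fft.irfft(truncated, n=len(samples))       # simplified utterance waveform

# Hypothetical usage with made-up per-frame mouth heights (illustrative values only)
mouth_height = np.array([12.0, 14.5, 18.2, 20.1, 17.3, 13.8, 11.0, 12.6])
simple_height = simplify_waveform(mouth_height, n_components=3)
```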