Abstract
This paper proposes an approach to human motion understanding through computational information fusion mediated by words. The procedure consists of three stages: (1) feature extraction from sequential pattern data of physical body motions, (2) recognition of motion modality with respect to basic and modal phases, and (3) semantic understanding of behavior, intention, or emotion classes. For these three stages, "Eigen-Space Methods (ESM)", "Hidden Markov Models (HMM)", and "Choquet Integral Agent Networks (CHIAN)" are employed, respectively. Based on experiments in which subjects expressed their emotions through gestures in everyday life, a computational information fusion mechanism was constructed, and the effectiveness of the proposed computational procedure for understanding the emotions conveyed by gestures was investigated.
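The third stage fuses evidence from multiple recognition agents via a Choquet integral over a fuzzy measure. As a minimal sketch of that fusion step only (the criteria names "gesture" and "posture", the scores, and the measure values below are invented for illustration; the abstract does not specify CHIAN's actual measure or agents):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` with respect to fuzzy measure `mu`.

    scores: dict mapping criterion name -> value in [0, 1]
    mu:     callable frozenset -> measure value, monotone with mu(empty) = 0
    """
    total, prev = 0.0, 0.0
    remaining = set(scores)
    # Sort criteria by ascending score; weight each score increment by the
    # measure of the set of criteria whose scores are at least that high.
    for name, value in sorted(scores.items(), key=lambda kv: kv[1]):
        total += (value - prev) * mu(frozenset(remaining))
        prev = value
        remaining.discard(name)
    return total

# Illustrative (assumed) fuzzy measure: superadditive on the pair, so that
# agreement between two hypothetical recognition agents is rewarded beyond
# a simple weighted average.
def mu(subset):
    table = {
        frozenset(): 0.0,
        frozenset({"gesture"}): 0.3,
        frozenset({"posture"}): 0.3,
        frozenset({"gesture", "posture"}): 1.0,
    }
    return table[subset]

fused = choquet_integral({"gesture": 0.8, "posture": 0.6}, mu)
```

With a non-additive measure like this one, the fused score captures interaction between criteria, which a plain weighted sum cannot express.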