2018, Vol. 138, No. 5, pp. 611-618
This paper describes a person-invariant method for classifying subtle facial expressions. The method uses keypoints detected with a face tracking tool called “Face Tracker”; features such as coded movements of the keypoints are derived and used for classification. Classification accuracy was evaluated on facial images of people not included in the training data. The average F-measure was 0.93 for neutral (expressionless) images, 0.78 for subtle smile images, and 0.93 for exaggerated smile images.
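The abstract outlines the general pipeline: track facial keypoints, code their movements relative to a neutral face, and classify the expression. A toy sketch of that idea follows; the keypoint coordinates, the movement-coding scheme, and the magnitude thresholds are all illustrative assumptions, not the authors' actual features or classifier.

```python
# Toy sketch (not the paper's actual algorithm): keypoints are (x, y)
# tuples; each keypoint's movement relative to a neutral frame is coded
# by direction, and a simple mean-displacement threshold separates the
# three classes reported in the paper.
import math

def code_movements(neutral, frame, eps=0.5):
    """Code each keypoint's movement as 'still', 'up', 'down', 'left' or 'right'."""
    codes = []
    for (x0, y0), (x1, y1) in zip(neutral, frame):
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < eps:
            codes.append("still")
        elif abs(dx) >= abs(dy):
            codes.append("right" if dx > 0 else "left")
        else:
            codes.append("down" if dy > 0 else "up")  # image y grows downward
    return codes

def classify(neutral, frame, subtle=1.0, strong=5.0):
    """Classify by mean keypoint displacement (thresholds are illustrative)."""
    mean_disp = sum(math.hypot(x1 - x0, y1 - y0)
                    for (x0, y0), (x1, y1) in zip(neutral, frame)) / len(neutral)
    if mean_disp < subtle:
        return "neutral"
    return "subtle smile" if mean_disp < strong else "exaggerated smile"

# Hypothetical keypoints: two mouth corners and a lower-lip point.
neutral = [(10.0, 20.0), (30.0, 20.0), (20.0, 30.0)]
smile   = [(9.0, 18.5), (31.0, 18.5), (20.0, 30.5)]  # corners move up/outward
print(code_movements(neutral, smile))  # → ['up', 'up', 'down']
print(classify(neutral, smile))        # → subtle smile
```

In the paper, the coded movements serve as person-invariant features for a learned classifier rather than fixed thresholds; the sketch only illustrates why relative keypoint motion, not absolute position, generalizes across faces.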