2024 Volume E107.D Issue 12 Pages 1558-1561
Recent studies on facial expression recognition mainly employ discrete category labels to represent emotional states. However, current intelligent emotion interaction systems require more diverse and precise emotion representations, such as the Valence, Arousal, Dominance (VAD) multi-dimensional continuous emotion parameters. Datasets and methods for VAD analysis remain scarce, making it difficult to meet the needs of large-scale, high-precision emotion cognition. In this letter, we build a multi-dimensional facial expression recognition method that uses multi-task learning to improve recognition performance by exploiting the consistency between dimensional and categorical emotions. The evaluation results show that the multi-task learning approach improves the prediction accuracy for VAD multi-dimensional emotions. Furthermore, we apply the method to academic outcome prediction, verifying that introducing VAD multi-dimensional, multi-task facial expression recognition is effective for predicting academic outcomes. The VAD recognition code is publicly available at github.com/YeeHoran/Multi-task-Emotion-Recognition.
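The abstract does not specify the exact training objective; a common way to realize multi-task learning over categorical and dimensional emotions is to sum a classification loss and a VAD regression loss. The following pure-Python sketch, with a hypothetical task-balancing weight `lam` (an assumption, not taken from the letter), illustrates that joint objective:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multitask_loss(cat_logits, cat_label, vad_pred, vad_true, lam=1.0):
    """Joint loss for a shared-backbone, two-head emotion model (sketch).

    cat_logits : raw scores for the discrete emotion classes
    cat_label  : index of the ground-truth category
    vad_pred   : predicted (valence, arousal, dominance) values
    vad_true   : ground-truth (valence, arousal, dominance) values
    lam        : hypothetical weight balancing the two tasks
    """
    # Categorical head: cross-entropy over discrete emotion labels.
    probs = softmax(cat_logits)
    ce = -math.log(probs[cat_label])
    # Dimensional head: mean squared error over the three VAD axes.
    mse = sum((p - t) ** 2 for p, t in zip(vad_pred, vad_true)) / len(vad_true)
    # The shared objective lets each task regularize the other.
    return ce + lam * mse

# Usage: a confident, correct prediction yields a lower joint loss
# than a misclassified one with poor VAD estimates.
good = multitask_loss([5.0, 0.0, 0.0], 0, [0.8, 0.6, 0.5], [0.8, 0.6, 0.5])
bad = multitask_loss([0.0, 5.0, 0.0], 0, [-0.5, -0.5, -0.5], [0.8, 0.6, 0.5])
```

In practice both heads would sit on a shared convolutional backbone so that the consistency between categorical and dimensional labels is exploited through the shared features; the sketch above only captures the combined objective.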