IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Dynamic Graph-Guided Domain-Invariant Feature Representation for Cross-Domain Facial Expression Recognition
Xingxin WAN, Peng SONG, Siqi FU, Changjia WANG
Author information
Free access; advance online publication

Article ID: 2025EDL8020

Abstract

In ideal facial expression recognition (FER) tasks, the training and test data are assumed to share the same distribution. In reality, however, they are often drawn from different domains with different feature distributions, which seriously impairs recognition performance. In this letter, we present a novel Dynamic Graph-Guided Domain-Invariant Feature Representation (DG-DIFR) method to address distribution shifts across domains. First, we learn a robust common subspace that minimizes the distribution discrepancy between domains, facilitating the extraction of invariant feature representations. Meanwhile, retargeted linear regression is employed to enhance the discriminative power of the proposed model. Furthermore, a maximum-entropy-based dynamic graph is introduced to preserve the topological structure of the data in the low-dimensional subspace. Finally, extensive experiments on four benchmark datasets confirm the superiority of the proposed method over state-of-the-art methods.
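To illustrate the general idea of learning a common subspace that reduces the distribution gap between a source and a target domain, the following is a minimal sketch in NumPy. It is not the DG-DIFR algorithm from the letter: it uses only a simple mean-discrepancy (MMD-style) term and a pooled-scatter constraint, solved as a generalized eigenproblem, and all function and variable names are illustrative.

```python
import numpy as np

def domain_invariant_subspace(Xs, Xt, dim=2, reg=1e-3):
    """Toy sketch: find a linear projection W whose columns minimize the
    gap between projected source/target means (an MMD-style criterion)
    while respecting the pooled data scatter. Illustrative only; the
    actual DG-DIFR objective also includes retargeted linear regression
    and a maximum-entropy dynamic graph, which are omitted here."""
    # Rank-1 "domain gap" matrix built from the mean difference
    d = Xs.mean(axis=0) - Xt.mean(axis=0)
    M = np.outer(d, d)
    # Pooled scatter: directions along which the data actually varies
    X = np.vstack([Xs, Xt])
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)
    # Minimize w^T M w s.t. w^T S w = const -> smallest eigenvectors of
    # (S + reg*I)^{-1} M (regularized for numerical stability)
    A = np.linalg.solve(S + reg * np.eye(S.shape[0]), M)
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(vals.real)  # smallest domain-gap directions first
    return vecs[:, order[:dim]].real

# Synthetic source/target domains with a mean shift
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 5))
Xt = rng.normal(0.5, 1.0, (100, 5))
W = domain_invariant_subspace(Xs, Xt, dim=2)
gap_before = np.linalg.norm(Xs.mean(0) - Xt.mean(0))
gap_after = np.linalg.norm((Xs @ W).mean(0) - (Xt @ W).mean(0))
print(gap_after < gap_before)
```

Because the domain-gap matrix here is rank one, the selected directions lie (approximately) in its null space, so the projected means nearly coincide; richer discrepancy measures and graph-based structure terms, as in the letter, would preserve more of the data geometry.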

© 2025 The Institute of Electronics, Information and Communication Engineers