IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508


Multi Feature Fusion Attention Learning for Clothing-Changing Person Re-Identification
Liwei WANG, Yanduo ZHANG, Tao LU, Wenhua FANG, Yu WANG
Advance online publication

Article ID: 2021EAL2097

Abstract

Person re-identification (Re-ID) aims to match images of the same pedestrian identity across different camera views. Over a relatively long period, pedestrians frequently change their clothes, yet many current methods rely heavily on color appearance information or focus only on biometric features, so their performance drops noticeably when applied to the clothing-changing setting. To relieve this dilemma, we propose a novel Multi Feature Fusion Attention Network (MFFAN), which learns fine-grained local features. We then introduce a Clothing Adaptive Attention (CAA) module, which integrates features of multiple granularities to guide the model toward learning the pedestrian's biometric features. Meanwhile, to fully verify the performance of our method on the clothing-changing Re-ID problem, we design a Clothing Generation Network (CGN), which can generate multiple images of the same identity wearing different clothes. Finally, experimental results show that our method exceeds the current best method by over 5% and 6% on the VCcloth and PRCC datasets, respectively.
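The abstract does not give architectural details, but the following minimal PyTorch sketch illustrates one plausible form of a clothing-adaptive, multi-granularity fusion module: branch descriptors are pooled, mapped to per-branch attention weights, and used to re-weight and fuse the feature maps. All class names, layer choices, and sizes here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CAA-style multi-granularity fusion module (not the paper's code).
import torch
import torch.nn as nn


class ClothingAdaptiveAttention(nn.Module):
    def __init__(self, channels: int, num_branches: int = 3):
        super().__init__()
        # Summarize each granularity branch into a channel descriptor.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Small MLP mapping the concatenated descriptors to one
        # softmax-normalized attention weight per branch.
        self.attn = nn.Sequential(
            nn.Linear(channels * num_branches, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, num_branches),
            nn.Softmax(dim=1),
        )

    def forward(self, branches):
        # branches: list of feature maps, each of shape (B, C, H, W)
        descriptors = [self.pool(b).flatten(1) for b in branches]   # each (B, C)
        weights = self.attn(torch.cat(descriptors, dim=1))          # (B, num_branches)
        # Re-weight each granularity and sum into one fused feature map.
        fused = sum(w.view(-1, 1, 1, 1) * b
                    for w, b in zip(weights.unbind(dim=1), branches))
        return fused


if __name__ == "__main__":
    caa = ClothingAdaptiveAttention(channels=256, num_branches=3)
    feats = [torch.randn(2, 256, 24, 8) for _ in range(3)]  # e.g. global + two local granularities
    print(caa(feats).shape)  # torch.Size([2, 256, 24, 8])
```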

© 2022 The Institute of Electronics, Information and Communication Engineers