Pose Attention-Guided Paired-Images Generation for Visible-Infrared Person Re-Identification

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 31, pp. 346-350
Main Authors: Qian, Yongheng; Tang, Su-Kit
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: A key challenge of visible-infrared person re-identification (VI-ReID) comes from the modality difference between visible and infrared images, which in turn causes large intra-person and small inter-person distances. Most existing methods design feature extractors and loss functions to bridge the modality gap. However, unpaired images constrain the VI-ReID model's ability to learn instance-level alignment features. Unlike these methods, in this paper we propose a pose attention-guided paired-images generation network (PAPG) from the standpoint of data augmentation. PAPG generates cross-modality paired images whose shape and appearance are consistent with the real image, enabling instance-level feature alignment by minimizing the distance between every pair of images. Furthermore, our method alleviates data insufficiency and reduces the risk of VI-ReID model overfitting. Comprehensive experiments conducted on two publicly available datasets validate the effectiveness and generalizability of PAPG. In particular, on the SYSU-MM01 dataset, our method achieves gains of 7.76% in Rank-1 and 5.87% in mAP.
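
For illustration only: the instance-level alignment described in the summary can be read as a pairwise feature-distance loss between each real image and its generated cross-modality counterpart. The PyTorch sketch below is a hypothetical rendering of that reading, not the authors' implementation; the squared-Euclidean loss on L2-normalized features, the feature dimension, and all names (PairAlignmentLoss, feat_visible, feat_fake_ir) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PairAlignmentLoss(nn.Module):
    """Instance-level alignment: pull each real image's feature toward the
    feature of its generated cross-modality pair (hypothetical sketch)."""
    def forward(self, feat_real, feat_pair):
        # L2-normalize so the distance depends on feature direction only.
        feat_real = F.normalize(feat_real, dim=1)
        feat_pair = F.normalize(feat_pair, dim=1)
        # Mean squared Euclidean distance over every (real, generated) pair.
        return ((feat_real - feat_pair) ** 2).sum(dim=1).mean()

# Usage with random stand-in features; a real pipeline would extract both
# from a shared ReID backbone, with feat_fake_ir computed on images produced
# by the pose attention-guided generator.
feat_visible = torch.randn(8, 2048)  # features of 8 real visible images
feat_fake_ir = torch.randn(8, 2048)  # features of their generated IR pairs
loss = PairAlignmentLoss()(feat_visible, feat_fake_ir)
print(loss.item())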

ISSN: 1070-9908
EISSN: 1558-2361
DOI: 10.1109/LSP.2024.3354190