Unleashing Potential of Unsupervised Pre-Training with Intra-Identity Regularization for Person Re-Identification


Bibliographic Details
Published in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14278-14287
Main Authors: Yang, Zizheng; Jin, Xin; Zheng, Kecheng; Zhao, Feng
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2022

Summary: Existing person re-identification (ReID) methods typically load pre-trained ImageNet weights for initialization directly. However, as a fine-grained classification task, ReID is more challenging, and there exists a large domain gap between ImageNet classification and ReID. Inspired by the great success of self-supervised representation learning with contrastive objectives, in this paper we design an Unsupervised Pre-training framework for ReID (UP-ReID) based on the contrastive learning (CL) pipeline. During pre-training, we attempt to address two critical issues for learning fine-grained ReID features: (1) the augmentations in the CL pipeline usually distort the discriminative clues in person images, and (2) the fine-grained local features of person images are not fully explored. Therefore, we introduce an intra-identity (I²-) regularization in UP-ReID, which is instantiated as two constraints coming from the global image and local patch aspects, respectively: a global consistency constraint is enforced between augmented and original person images to increase robustness to augmentation, while an intrinsic contrastive constraint among local patches of each image is employed to fully explore the local discriminative clues. Extensive experiments on multiple popular ReID datasets, PersonX, Market-1501, CUHK03, and MSMT17, demonstrate that our UP-ReID pre-trained model can significantly benefit downstream ReID fine-tuning and achieve state-of-the-art performance.
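
The abstract describes the two I²-regularization terms only at a high level. As a rough illustration, and not the authors' actual implementation, the following PyTorch-style sketch shows one plausible instantiation of the two constraints; all names (global_consistency_loss, intra_patch_contrastive_loss, the temperature tau, and the positive-pair choice for patches) are assumptions introduced here for illustration.

    import torch
    import torch.nn.functional as F

    def global_consistency_loss(f_orig, f_aug):
        # Global consistency constraint (sketch): pull the embedding of
        # the augmented image toward that of the original image via
        # cosine-similarity agreement, so augmentation does not destroy
        # identity-discriminative clues.
        f_orig = F.normalize(f_orig, dim=-1)  # (B, D)
        f_aug = F.normalize(f_aug, dim=-1)    # (B, D)
        return (1.0 - (f_orig * f_aug).sum(dim=-1)).mean()

    def intra_patch_contrastive_loss(patch_feats, patch_feats_aug, tau=0.1):
        # Intrinsic contrastive constraint among local patches (sketch):
        # patch_feats / patch_feats_aug are (B, P, D) features of P local
        # patches per image under two views. Here each patch's other view
        # is its positive, and the remaining P-1 patches of the same image
        # serve as negatives (an InfoNCE-style assumption, not the paper's
        # exact formulation).
        B, P, D = patch_feats.shape
        z1 = F.normalize(patch_feats, dim=-1)
        z2 = F.normalize(patch_feats_aug, dim=-1)
        logits = torch.matmul(z1, z2.transpose(1, 2)) / tau  # (B, P, P)
        labels = torch.arange(P, device=logits.device).expand(B, P)
        return F.cross_entropy(logits.reshape(B * P, P), labels.reshape(B * P))

    # Toy usage with random features standing in for encoder outputs.
    b, p, d = 4, 6, 128
    total = (global_consistency_loss(torch.randn(b, d), torch.randn(b, d))
             + intra_patch_contrastive_loss(torch.randn(b, p, d),
                                            torch.randn(b, p, d)))

In this reading, the two terms are simply summed with the base contrastive pre-training objective; the relative weighting between them is a training hyperparameter not specified in the abstract.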
ISSN: 2575-7075
DOI: 10.1109/CVPR52688.2022.01390