ProFD: Prompt-Guided Feature Disentangling for Occluded Person Re-Identification
Format | Journal Article |
---|---|
Language | English |
Published | 30.09.2024 |
Summary: | To address occlusion issues in person Re-Identification (ReID)
tasks, many methods have been proposed to extract part features by introducing
external spatial information. However, due to part appearance information
missing because of occlusion and noisy spatial information from external
models, these purely vision-based approaches fail to correctly learn the
features of human body parts from limited training data and struggle to
accurately locate body parts, ultimately leading to misaligned part features.
To tackle these challenges, we propose a Prompt-guided Feature Disentangling
method (ProFD), which leverages the rich pre-trained knowledge in the textual
modality to help the model generate well-aligned part features. ProFD first
designs part-specific prompts and utilizes noisy segmentation masks to
preliminarily align visual and textual embeddings, giving the textual prompts
spatial awareness. Furthermore, to alleviate the noise from external masks,
ProFD adopts a hybrid-attention decoder, ensuring spatial and semantic
consistency during decoding to minimize the impact of noise. Additionally, to
avoid catastrophic forgetting, we employ a self-distillation strategy that
retains the pre-trained knowledge of CLIP and mitigates over-fitting.
Evaluation results on the Market1501, DukeMTMC-ReID, Occluded-Duke,
Occluded-ReID, and P-DukeMTMC datasets demonstrate that ProFD achieves
state-of-the-art results. Our project is available at:
https://github.com/Cuixxx/ProFD. |
DOI: | 10.48550/arxiv.2409.20081 |
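The core idea in the abstract, part-specific textual prompts attending over visual patch embeddings to pool out per-part features, can be sketched minimally as below. All shapes, the temperature value, and the softmax pooling scheme are illustrative assumptions for intuition, not the paper's actual architecture or its hybrid-attention decoder.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper):
# K part prompts, N image patches, D embedding dim.
rng = np.random.default_rng(0)
K, N, D = 4, 16, 8

prompts = rng.normal(size=(K, D))   # part-specific textual prompt embeddings
patches = rng.normal(size=(N, D))   # visual patch embeddings

def l2norm(x, axis=-1):
    """Normalize rows to unit length so dot products become cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine similarity between each part prompt and each patch, turned into a
# per-part spatial attention map with a temperature-scaled softmax.
sim = l2norm(prompts) @ l2norm(patches).T            # (K, N)
logits = sim * 10.0                                  # temperature is an assumption
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Part features: attention-weighted pooling of the patch embeddings,
# i.e. each prompt "collects" the patches it aligns with.
part_feats = attn @ patches                          # (K, D)
print(part_feats.shape)  # prints (4, 8)
```

A segmentation-mask-guided alignment loss, as the abstract describes, would then push each prompt's attention map `attn[k]` toward the (noisy) mask of body part `k`.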