Cross-Scale Transformer-Based Matching Network for Generalizable Person Re-Identification
Published in | IEEE Access Vol. 13; pp. 47406 - 47417 |
---|---|
Main Authors | , , , , |
Format | Journal Article |
Language | English |
Published | IEEE, 2025 |
Summary: | While the person re-identification (Re-ID) task has made significant progress in the closed-set setting in recent years, its generalizability to unknown domains continues to be limited. To tackle this issue, the domain generalization (DG) Re-ID task has been proposed. The current state-of-the-art approach involves deep feature matching, where key regions of image pairs are matched at the same scale. However, this method does not account for the variability of viewing angles in real image acquisition. To resolve the problem, we propose an innovative deep image matching framework called the Cross-scale Transformer-based Matching Network (CTMN) for the DG Re-ID task. The CTMN model matches two images through cross-scale local correspondence rather than fixed representations. The Transformer is specifically adapted to enable effective local interactions between query and gallery images across different scales. Additionally, deformable convolution is incorporated to better segment the local regions of the person before image pairs are matched. Lastly, a Style Normalization Module (SNM) is added to remove identity-irrelevant features, improving the matching results. Extensive experiments on multiple DG Re-ID tasks demonstrate the advantages of our proposed method over existing state-of-the-art methods. |
---|---|
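The cross-scale local matching idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `cross_scale_match`, the use of cosine similarity, and max-pooling over gallery scales are all illustrative assumptions standing in for CTMN's Transformer cross-attention between query and gallery features.

```python
import numpy as np

def l2norm(x, axis=-1):
    # Normalize feature vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def cross_scale_match(query_feats, gallery_feats):
    """Toy cross-scale local correspondence score.

    query_feats / gallery_feats: lists of arrays, one per scale,
    each of shape (num_regions_at_scale, feat_dim). Every query
    region is matched against gallery regions from ALL scales and
    keeps its best response (a stand-in for cross-scale attention).
    """
    per_scale_scores = []
    for q in query_feats:                 # query regions at one scale, (Nq, D)
        qn = l2norm(q)
        best = np.full(qn.shape[0], -np.inf)
        for g in gallery_feats:           # gallery regions at every scale, (Ng, D)
            gn = l2norm(g)
            sim = qn @ gn.T               # cosine similarities, (Nq, Ng)
            best = np.maximum(best, sim.max(axis=1))
        per_scale_scores.append(best.mean())
    return float(np.mean(per_scale_scores))
```

Matching an image against itself yields a score of 1.0, since every region finds an exact cross-scale counterpart; mismatched identities score lower, which is the property a matching head ranks on.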
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2025.3548321 |