AMC-Net: Attentive modality-consistent network for visible-infrared person re-identification
Published in | Neurocomputing (Amsterdam), Vol. 463, pp. 226–236 |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 06.11.2021 |
Summary | Visible-infrared person re-identification (VI-ReID) aims to match people across images from the RGB and infrared modalities. Existing methods tend to use readily available backbone models to extract features, overlooking the spatial and channel information those features contain. In this paper, we propose an attentive modality-consistent network (AMC-Net) for VI-ReID. First, to keep the network from overfitting to local regions, which damages the discriminability of features, a context-aware attention block (CAB) is designed to mine spatial information over the whole person region by enlarging the perception scope of the convolution layers. Second, an attentive channel aggregation block (ACB) is adopted to mine channel information with richer semantic cues by modeling local cross-channel interactions. Third, we propose a modality-consistent regularizer that narrows the discrepancy between the high-order features of heterogeneous images. Extensive experiments on two datasets show that the proposed method outperforms state-of-the-art methods. |
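The abstract's attentive channel aggregation block weighs feature channels through local cross-channel interaction. As a rough illustration of that general idea (not the paper's actual ACB; this sketch follows the common ECA-style recipe and substitutes a fixed averaging kernel for learned 1-D convolution weights):

```python
import numpy as np

def channel_attention(feat, k=3):
    """Channel attention via local cross-channel interaction.

    Hypothetical sketch: global average pooling gives one descriptor
    per channel, a 1-D convolution of size k mixes each channel with
    its k-1 neighbours, and a sigmoid gate rescales the input.
    """
    C, H, W = feat.shape
    # Global average pooling -> channel descriptor of shape (C,)
    desc = feat.mean(axis=(1, 2))
    # Local cross-channel interaction: 1-D conv over the channel axis
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    kernel = np.full(k, 1.0 / k)  # stand-in for learned conv weights
    mixed = np.convolve(padded, kernel, mode="valid")  # shape (C,)
    # Sigmoid gate in (0, 1), broadcast back over H and W
    gate = 1.0 / (1.0 + np.exp(-mixed))
    return feat * gate[:, None, None]

x = np.random.rand(8, 4, 4)   # toy feature map: 8 channels, 4x4 spatial
y = channel_attention(x)
print(y.shape)  # (8, 4, 4)
```

Because the kernel spans only k channels, the interaction stays local and adds O(C·k) work, in contrast to fully connected squeeze-and-excitation gating, which costs O(C²).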
ISSN | 0925-2312; 1872-8286 |
DOI | 10.1016/j.neucom.2021.08.053 |