DMM: Dual-Modal Model for Person Re-Identification
Published in: 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1–8
Format: Conference Proceeding
Language: English
Published: IEEE, 18.07.2022
Summary: This paper explores how to boost the performance of current person re-identification (ReID) models by incorporating auxiliary information such as contour sketches. Most current ReID methods consider only RGB images as input, paying little attention to the extra yet important information contained in images of other modalities. We propose a dual-modal model (DMM), consisting of a main stream that takes RGB images as input and an auxiliary stream that takes images of another modality, to explore how the auxiliary information helps promote the performance of existing ReID models. To fuse the two streams, a novel dual-modal attention (DMA) mechanism is proposed. Specifically, we apply spatial attention to the auxiliary feature maps to take full advantage of the informative spatial locations contained in this stream. Channel attention is then applied to the spatially refined main feature maps, yielding further refined representations. Moreover, we adopt DMA at multiple scales to exploit different semantics from low to high levels, which finally generates more discriminative feature representations. Comprehensive experiments on the publicly available Market1501, DukeMTMC, MSMT17, and Black ReID datasets show that our proposal achieves state-of-the-art results.
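The abstract describes a two-step fusion: spatial attention derived from the auxiliary stream refines the main feature maps, and channel attention is then applied to the result. Below is a minimal PyTorch sketch of that idea, applied per scale. The module name, layer sizes, and the exact attention formulations (channel-pooled conv for spatial attention, squeeze-and-excitation-style channel attention) are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of the dual-modal attention (DMA) fusion described in
# the abstract. Details beyond "spatial attention on auxiliary maps, then
# channel attention on the refined main maps" are assumed, not from the paper.
import torch
import torch.nn as nn


class DualModalAttention(nn.Module):
    """Fuse an auxiliary-modality stream (e.g. contour sketch) into the main
    RGB stream: spatial attention from the auxiliary feature maps, followed
    by channel attention on the spatially refined main feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Spatial attention over the auxiliary stream: channel-wise mean and
        # max pooling, then a conv yields a per-location weight map.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Channel attention (squeeze-and-excitation style) on the main stream.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, main: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # Step 1: spatial attention computed from the auxiliary feature maps.
        pooled = torch.cat(
            [aux.mean(dim=1, keepdim=True), aux.amax(dim=1, keepdim=True)],
            dim=1,
        )
        spatial_map = self.spatial(pooled)   # (B, 1, H, W)
        refined = main * spatial_map         # spatially refined main maps
        # Step 2: channel attention on the spatially refined main features.
        return refined * self.channel(refined)


# Usage: one DMA block per scale, fusing same-resolution feature maps from
# the two backbone streams (shapes here are illustrative).
main_feat = torch.randn(2, 256, 24, 12)  # RGB-stream features
aux_feat = torch.randn(2, 256, 24, 12)   # contour-sketch-stream features
fused = DualModalAttention(channels=256)(main_feat, aux_feat)
print(fused.shape)  # torch.Size([2, 256, 24, 12])
```

Applying such a block at several backbone stages would match the multi-scale use of DMA the abstract mentions, with each scale contributing semantics at a different level.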
ISSN: 2161-4407
DOI: 10.1109/IJCNN55064.2022.9892837