Deep Group-Shuffling Dual Random Walks with Label Smoothing for Person Reidentification

Bibliographic Details
Published in: IEEE Access, Vol. 8, p. 1
Main Authors: Guo, Ruopei; Lin, Chaoqun; Li, Chun-Guang; Lin, Jiaru
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
Subjects:
Summary: Person reidentification (ReID) is the challenging task of finding a target pedestrian in a gallery set collected from multiple nonoverlapping camera views. Recently, state-of-the-art ReID performance has been achieved with an end-to-end trainable deep neural network framework that integrates convolutional feature extraction, similarity learning, and reranking into a joint optimization framework. In such a framework, the similarity is learned via an embedding network, the reranking is conducted with a random walk, and the whole framework is optimized with a cross-entropy-based verification loss. Unfortunately, the embedding network is difficult to train well because its two-dimensional outputs interfere with each other when the conventional random walk is used. In addition, the supervision information is not fully exploited during the training phase due to the binary nature of the verification loss. In this paper, we propose a novel approach, called group-shuffling dual random walks with label smoothing (GSDRWLS), in which random walks are performed separately on two channels, one for positive verification and one for negative verification, and the binary verification labels are modified with an adaptive label smoothing technique before being fed into the verification loss, in order to train the overall network effectively and to avoid overfitting. Extensive experiments conducted on three large benchmark datasets, CUHK03, Market-1501, and DukeMTMC, confirm the superior performance of our proposal.
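The abstract names two generic ingredients that can be sketched concretely: a random-walk (reranking) step that propagates probe-gallery similarity scores through the gallery-gallery affinity graph, and label smoothing of the hard 0/1 verification targets before the cross-entropy loss. The sketch below is a minimal illustration of those ingredients only, not the authors' GSDRWLS implementation: the group-shuffling step and the adaptive smoothing schedule are omitted, and the function names, the complementary "negative channel" construction, and the values of alpha and epsilon are illustrative assumptions.

import numpy as np

def random_walk_refine(probe_gallery, gallery_gallery, alpha=0.9):
    # One random-walk reranking step: propagate probe-gallery similarity scores
    # through the row-normalized gallery-gallery affinity matrix.
    # alpha blends the original scores with the propagated ones (assumed value).
    transition = gallery_gallery / gallery_gallery.sum(axis=1, keepdims=True)
    return alpha * probe_gallery + (1.0 - alpha) * probe_gallery @ transition

def smooth_labels(binary_labels, epsilon=0.1):
    # Soften hard 0/1 verification labels so the cross-entropy loss is not
    # driven toward saturated, over-confident predictions.
    return binary_labels * (1.0 - epsilon) + 0.5 * epsilon

# Toy example: 2 probes, 4 gallery images, similarity scores in (0, 1).
rng = np.random.default_rng(0)
probe_gallery = rng.random((2, 4))
gallery_gallery = rng.random((4, 4))
np.fill_diagonal(gallery_gallery, 1.0)

# "Dual" channels in the spirit of the abstract: refine a positive channel and a
# separate negative channel (here simply the complementary scores, an assumption).
pos_scores = random_walk_refine(probe_gallery, gallery_gallery)
neg_scores = random_walk_refine(1.0 - probe_gallery, 1.0 - gallery_gallery)

# Smoothed verification targets for one probe against the 4 gallery images.
targets = smooth_labels(np.array([1.0, 0.0, 0.0, 1.0]))
print(pos_scores, neg_scores, targets, sep="\n")  # targets -> [0.95 0.05 0.05 0.95]

In the paper the smoothing amount is adaptive and the two refined channels feed the verification loss jointly; epsilon is fixed here only to keep the sketch self-contained.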
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2976849