Person Re-identification by Multi-hypergraph Fusion

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 28, No. 11, pp. 2763-2774
Main Authors: Le An, Xiaojing Chen, Songfan Yang, Xuelong Li
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2017
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2016.2602082

More Information
Summary: Matching people across nonoverlapping cameras, also known as person re-identification, is an important and challenging research topic. Despite its great demand in many crucial applications such as surveillance, person re-identification is still far from being solved. Due to drastic view changes, even the same person may look quite dissimilar in different cameras. Illumination and pose variations further aggravate this discrepancy. To this end, various feature descriptors have been designed to improve matching accuracy. Since different features encode information from different aspects, in this paper, we propose to effectively leverage multiple off-the-shelf features via multi-hypergraph fusion. A hypergraph captures not only pairwise but also high-order relationships among the subjects being matched. In addition, different from conventional approaches in which matching is achieved by computing the pairwise distance or similarity between a probe and a gallery subject, the similarities between the probe and all gallery subjects are learned jointly via hypergraph optimization. Experiments on popular data sets demonstrate the effectiveness of the proposed method, and superior performance is achieved compared with the most recent state-of-the-art methods.
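The abstract's key idea — ranking a probe against all gallery subjects jointly via hypergraph optimization, rather than by independent pairwise distances — can be illustrated with a minimal sketch. The construction below (one hyperedge per subject linking it to its k nearest neighbors, uniform edge weights, and a closed-form regularized ranking) follows the standard hypergraph-learning recipe; the paper's exact hyperedge construction, multi-feature fusion weights, and optimization details are not specified here, so everything in this sketch is an assumption for illustration only.

```python
import numpy as np

def hypergraph_similarities(features, probe_idx, k=3, alpha=0.9):
    """Jointly rank all subjects against a probe via hypergraph regularization.

    Each subject contributes one hyperedge containing itself and its k
    nearest neighbors, so hyperedges encode high-order (group) relations,
    not only pairwise ones. Illustrative sketch only -- the hyperedge
    construction and parameters are assumptions, not the paper's method.
    """
    n = len(features)
    # Pairwise Euclidean distances between all feature vectors.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    # Incidence matrix H: hyperedge j contains vertex i if i is among
    # the k nearest neighbors of subject j (including j itself).
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d[j])[:k + 1]
        H[nbrs, j] = 1.0
    w = np.ones(n)                # uniform hyperedge weights (assumption)
    Dv = H @ w                    # vertex degrees
    De = H.sum(axis=0)            # hyperedge degrees
    # Normalized hypergraph adjacency: Dv^-1/2 H W De^-1 H^T Dv^-1/2
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    # Query vector: 1 at the probe, 0 at every gallery subject.
    y = np.zeros(n)
    y[probe_idx] = 1.0
    # Closed-form solution of the regularized ranking problem:
    # f = argmin_f  f^T (I - Theta) f + ((1 - alpha) / alpha) * ||f - y||^2
    f = np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)
    return f
```

Because the similarity scores `f` are obtained by propagating the query over shared hyperedges, subjects that sit in the same high-order neighborhood as the probe are promoted together, which is the joint-learning behavior the abstract contrasts with independent pairwise matching.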