Exploring context information for inter-camera multiple target tracking

Bibliographic Details
Published in: IEEE Winter Conference on Applications of Computer Vision, pp. 761-768
Main Authors: Yinghao Cai; Gerard Medioni
Format: Conference Proceeding
Language: English
Published: IEEE, 01.03.2014

Summary: In this paper, we present a new solution to inter-camera multiple target tracking with non-overlapping fields of view. The identities of people are maintained as they move from one camera to another. Instead of matching snapshots of people across cameras, we mainly explore what kinds of context information from videos can be used for inter-camera tracking. We introduce two kinds of context information in this paper: spatio-temporal context and relative appearance context. The spatio-temporal context indicates a way of collecting samples for discriminative appearance learning, where target-specific appearance models are learned to distinguish different people from each other. The relative appearance context models inter-object appearance similarities for people walking in proximity, and helps disambiguate individual appearance matching across cameras. We show improved performance with context information for inter-camera tracking. Our method achieves promising results in two crowded scenes compared with state-of-the-art methods.
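
The summary sketches two ideas: spatio-temporal context tells the tracker which nearby detections to collect as negatives when learning a target-specific discriminative appearance model, and relative appearance context compares how people walking together differ in appearance across cameras. The toy sketch below is only an illustration of that idea under assumptions not stated in this record (hypothetical 32-dimensional appearance features, a linear SVM as the discriminative learner, and a crude pairwise-difference stand-in for the relative appearance model); the paper's actual features and formulation may differ.

# Simplified illustration (not the authors' code): a target-specific discriminative
# appearance model trained with spatio-temporally co-occurring people as negatives,
# plus a stand-in relative-appearance term for people walking together.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
FEAT_DIM = 32  # hypothetical small appearance descriptor (assumption)

def train_target_model(target_samples, context_samples):
    """Learn a target-specific appearance model.

    Positives: detections of the target along its own trajectory.
    Negatives: detections of other people gathered via spatio-temporal
    context (people seen close in time/space to the target).
    """
    X = np.vstack([target_samples, context_samples])
    y = np.hstack([np.ones(len(target_samples)), np.zeros(len(context_samples))])
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf

def appearance_score(clf, candidate_samples):
    """Individual appearance affinity of a candidate track in another camera."""
    return float(np.mean(clf.decision_function(candidate_samples)))

def relative_appearance_score(group_a, group_b):
    """Crude stand-in for relative appearance context: compare the appearance
    differences among co-traveling people in camera A with those in camera B
    (smaller discrepancy -> higher score)."""
    diffs_a = [a1 - a2 for i, a1 in enumerate(group_a) for a2 in group_a[i + 1:]]
    diffs_b = [b1 - b2 for i, b1 in enumerate(group_b) for b2 in group_b[i + 1:]]
    if not diffs_a or not diffs_b:
        return 0.0
    d = np.linalg.norm(np.mean(diffs_a, axis=0) - np.mean(diffs_b, axis=0))
    return float(-d)

# Toy usage with synthetic features.
target = rng.normal(0.0, 1.0, (20, FEAT_DIM))           # target's samples in camera A
context = rng.normal(2.0, 1.0, (60, FEAT_DIM))          # spatio-temporal context samples
clf = train_target_model(target, context)

candidate_same = rng.normal(0.0, 1.0, (15, FEAT_DIM))   # plausibly the same person in camera B
candidate_other = rng.normal(2.0, 1.0, (15, FEAT_DIM))  # a different person
print(f"score(same person)  = {appearance_score(clf, candidate_same):.2f}")
print(f"score(other person) = {appearance_score(clf, candidate_other):.2f}")

# Relative appearance: two people walking together in camera A vs. camera B.
pair_a = [target.mean(axis=0), context[:10].mean(axis=0)]
pair_b = [candidate_same.mean(axis=0), rng.normal(2.0, 1.0, FEAT_DIM)]
print(f"relative-appearance score = {relative_appearance_score(pair_a, pair_b):.2f}")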
ISSN: 1550-5790; 2642-9381
DOI: 10.1109/WACV.2014.6836026