UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation
Format | Journal Article
Language | English
Published | 14.12.2023
Summary: Multi-object tracking (MOT) in video sequences remains a challenging task, especially in scenarios with significant camera movements. This is because targets can drift considerably on the image plane, leading to erroneous tracking outcomes. Addressing such challenges typically requires supplementary appearance cues or Camera Motion Compensation (CMC). While these strategies are effective, they also introduce a considerable computational burden, posing challenges for real-time MOT. In response to this, we introduce UCMCTrack, a novel motion model-based tracker robust to camera movements. Unlike conventional CMC that computes compensation parameters frame-by-frame, UCMCTrack consistently applies the same compensation parameters throughout a video sequence. It employs a Kalman filter on the ground plane and introduces the Mapped Mahalanobis Distance (MMD) as an alternative to the traditional Intersection over Union (IoU) distance measure. By leveraging projected probability distributions on the ground plane, our approach efficiently captures motion patterns and adeptly manages uncertainties introduced by homography projections. Remarkably, UCMCTrack, relying solely on motion cues, achieves state-of-the-art performance across a variety of challenging datasets, including MOT17, MOT20, DanceTrack and KITTI. More details and code are available at https://github.com/corfyi/UCMCTrack
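The abstract's core idea, associating detections to tracks on the ground plane with a Mahalanobis-style distance instead of image-plane IoU, can be sketched as follows. This is a minimal illustration, not the paper's implementation (which is in the linked repository): the homography, track state, and all values here are hypothetical, and the track's uncertainty is modeled as a simple 2D Gaussian.

```python
import numpy as np

def project_to_ground(uv, H):
    """Map an image point (u, v) to ground-plane (x, y) via homography H.

    Hypothetical helper: UCMCTrack maps detections to the ground plane;
    the exact projection it uses may differ from this plain homography.
    """
    u, v = uv
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # dehomogenize

def squared_mahalanobis(z, mean, cov):
    """Squared Mahalanobis distance of measurement z from N(mean, cov).

    With a Kalman filter on the ground plane, `mean` and `cov` would come
    from the predicted state distribution of a track.
    """
    d = z - mean
    return float(d @ np.linalg.solve(cov, d))

# Illustrative numbers: identity homography, a track predicted at (5, 10)
# with anisotropic uncertainty (larger variance along the second axis).
H = np.eye(3)
track_mean = np.array([5.0, 10.0])
track_cov = np.array([[0.5, 0.0],
                      [0.0, 2.0]])

detection = project_to_ground((5.5, 11.0), H)
d2 = squared_mahalanobis(detection, track_mean, track_cov)
# d2 = 0.5**2/0.5 + 1.0**2/2.0 = 1.0
```

The anisotropic covariance is the point of using this distance rather than IoU: an offset along the high-uncertainty axis (roughly, depth after projection) costs less than the same offset along the low-uncertainty axis, so the gating reflects the uncertainty introduced by the homography projection.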
DOI: 10.48550/arxiv.2312.08952