Frame-level temporal calibration of video sequences from unsynchronized cameras
| Published in | *Machine Vision and Applications*, Vol. 19, No. 5–6, pp. 395–409 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Berlin/Heidelberg: Springer-Verlag, 01.10.2008 (Springer Nature B.V.) |
| Summary | This paper describes a method for temporally calibrating video sequences from unsynchronized cameras using image processing operations, and presents two search algorithms for matching and aligning object trajectories across camera views. Existing multi-camera systems assume that the input video sequences are synchronized, either by genlock or by time-stamp information and a centralized server; however, hardware-based synchronization increases installation cost, so image information must be used to align frames from cameras whose clocks are not synchronized. The system built for temporal calibration comprises three modules: an object tracking module, a calibration data extraction module, and a search module. A robust and efficient search algorithm is introduced that recovers the frame offset by matching the trajectories in different views and selecting the most reliable match. Because it draws on information from multiple trajectories, the algorithm is robust to possible errors in background subtraction and location extraction, and it can handle very large frame offsets. A RANSAC (RANdom SAmple Consensus) based version of this search algorithm is also introduced. Results obtained with different video sequences demonstrate the robustness of the algorithms in recovering a wide range of frame offsets for sequences with varying levels of object activity. |
|---|---|
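The core idea in the abstract, recovering a frame offset by sliding one object trajectory against another and keeping the offset with the most reliable match, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes both trajectories have already been mapped into a common coordinate frame, and the names `recover_offset`, `mean_dist`, and `max_offset` are illustrative, not taken from the paper.

```python
import math

def mean_dist(a, b):
    """Mean Euclidean distance between two equal-length lists of (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recover_offset(traj_a, traj_b, max_offset, min_overlap=10):
    """Slide traj_b against traj_a over candidate frame offsets and return
    (best_offset, best_score), where the score is the mean point distance
    over the overlapping frames. Convention: traj_b[i] ~ traj_a[i + d]."""
    best_d, best_score = 0, float("inf")
    for d in range(-max_offset, max_offset + 1):
        # Overlapping index range of traj_b when shifted by d frames
        lo = max(0, -d)
        hi = min(len(traj_b), len(traj_a) - d)
        if hi - lo < min_overlap:  # require enough overlap to be reliable
            continue
        score = mean_dist(traj_a[lo + d:hi + d], traj_b[lo:hi])
        if score < best_score:
            best_d, best_score = d, score
    return best_d, best_score
```

On synthetic data where the second trajectory is the first delayed by 7 frames, `recover_offset(traj_a, traj_b, 20)` returns offset 7 with score 0. The paper's full method additionally exploits multiple trajectories (and a RANSAC variant) for robustness to tracking errors, which this single-trajectory sketch omits.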
| ISSN | 0932-8092; 1432-1769 |
|---|---|
| DOI | 10.1007/s00138-008-0122-6 |