Time-Series Fusion-Based Multicamera Self-Calibration for Free-View Video Generation in Low-Texture Sports Scene

Bibliographic Details
Published in: IEEE Sensors Journal, Vol. 23, No. 3, pp. 2956-2969
Main Authors: Zhou, Feng; Li, Jing; Dai, Yanran; Liu, Lichen; Qin, Haidong; Jiang, Yuqi; Hong, Shikuan; Zhao, Bo; Yang, Tao
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2023

Summary: Multicamera calibration is an important technique for generating free-view videos. By arranging multiple cameras in a scene and applying camera calibration and image processing, a multidimensional viewing experience can be presented to the audience. To address the problem that low-texture scenes, common in sports venues, cannot be robustly self-calibrated when placing artificial markers or calibration towers is impractical, this article proposes a robust multicamera calibration method based on sequence feature matching and fusion. Additionally, to validate the effectiveness of the proposed calibration algorithm, a fast virtual-axis bullet-time synthesis algorithm is proposed for generating free-view video. First, camera self-calibration is performed in low-texture situations by fusing dynamic objects over a time series to enrich the geometric constraints in the scene, without calibration panels or additional artificial markers. Second, a virtual-axis bullet-time video synthesis method based on the calibration result is proposed: in the calibrated multicamera scenario, a fast bullet-time video is generated by constructing a virtual axis. Qualitative and quantitative experiments, in comparison with a state-of-the-art calibration method, demonstrate the validity and robustness of the proposed calibration algorithm for free-view video synthesis tasks.
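For illustration only, the following minimal Python/OpenCV sketch shows the general idea behind the first step described above: pooling feature correspondences from a synchronized frame sequence so that sparse per-frame matches in a low-texture scene jointly supply enough geometric constraints for a pairwise pose estimate. It is not the authors' algorithm; the dynamic-object detection and the sequence feature matching and fusion strategy of the article are omitted, and the function name pooled_relative_pose, the synchronized frame lists, and the shared intrinsic matrix K are assumptions made for the sketch.

import cv2
import numpy as np

def pooled_relative_pose(frames_a, frames_b, K):
    """Estimate the relative pose between two cameras by pooling ORB
    matches over a time series of synchronized frame pairs (sketch)."""
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    pts_a, pts_b = [], []

    for img_a, img_b in zip(frames_a, frames_b):
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            continue  # low-texture frame: no usable features at this time step
        for m in matcher.match(des_a, des_b):
            pts_a.append(kp_a[m.queryIdx].pt)
            pts_b.append(kp_b[m.trainIdx].pt)

    pts_a = np.asarray(pts_a, dtype=np.float64)
    pts_b = np.asarray(pts_b, dtype=np.float64)
    if len(pts_a) < 8:
        raise ValueError("too few pooled matches for pose estimation")

    # Pooling matches over time enriches the geometric constraints that a
    # single low-texture frame cannot provide on its own.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t  # rotation and unit-scale translation of camera B w.r.t. A

In a full pipeline, pairwise poses recovered this way would still need global registration and bundle adjustment before they could drive free-view or bullet-time rendering; that refinement is outside the scope of this sketch.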
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2022.3230792