A Content-Adaptive Resizing Framework for Boosting Computation Speed of Background Modeling Methods

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 52, No. 2, pp. 1192-1204
Main Authors: Huang, Chun-Rong; Huang, Wei-Yun; Liao, Yi-Sheng; Lee, Chien-Cheng; Yeh, Yu-Wei
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2022

Summary: Recently, most background modeling (BM) methods claim to achieve real-time efficiency for low-resolution and standard-definition surveillance videos. With the increasing resolution of surveillance cameras, full high-definition (full HD) surveillance videos have become the main trend, and processing high-resolution videos has therefore become a new issue in intelligent video surveillance. In this article, we propose a novel content-adaptive resizing framework (CARF) to boost the computation speed of BM methods on high-resolution surveillance videos. For each frame, we apply superpixels to separate the content of the frame into homogeneous and boundary sets. Two novel downsampling and upsampling layers based on the homogeneous and boundary sets are proposed. The former downsamples high-resolution frames to low-resolution frames so that efficient foreground segmentation results can be obtained with BM methods. The latter upsamples the low-resolution foreground segmentation results back to the original resolution based on the superpixels. By coupling both layers, experimental results show that the proposed method achieves better quantitative and qualitative results than state-of-the-art methods. Moreover, the computation speed of the proposed method, even without GPU acceleration, is significantly faster than that of the state-of-the-art methods. The source code is available at https://github.com/nchucvml/CARF .
ISSN: 2168-2216, 2168-2232
DOI: 10.1109/TSMC.2020.3018872
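
The summary above outlines a three-step flow: superpixel segmentation of each frame, downsampling followed by foreground segmentation with an off-the-shelf BM method, and superpixel-guided upsampling of the low-resolution mask. The sketch below only illustrates that general flow; it is not the authors' CARF implementation (available at the GitHub link above). It assumes opencv-contrib-python for SLIC superpixels, uses MOG2 as a stand-in BM method, and the scale factor, superpixel region size, and majority-vote threshold are illustrative choices.

```python
# Illustrative sketch of a resize-then-segment pipeline in the spirit of the
# abstract; NOT the authors' CARF code.  Requires opencv-contrib-python.
import cv2
import numpy as np

SCALE = 0.25                                  # illustrative downsampling factor
bm = cv2.createBackgroundSubtractorMOG2()     # stand-in background-modeling method


def segment_frame(frame):
    """Return a full-resolution foreground mask for one BGR frame."""
    # 1. Superpixel segmentation of the full-resolution frame.
    slic = cv2.ximgproc.createSuperpixelSLIC(frame, cv2.ximgproc.SLICO, 20)
    slic.iterate(5)
    labels = slic.getLabels()                 # per-pixel superpixel ids

    # 2. Downsample the frame and run the BM method at low resolution.
    small = cv2.resize(frame, None, fx=SCALE, fy=SCALE,
                       interpolation=cv2.INTER_AREA)
    fg_small = bm.apply(small)                # low-resolution foreground mask

    # 3. Upsample the mask and regularize it with the superpixels:
    #    mark a superpixel as foreground if most of its pixels are foreground.
    fg_up = cv2.resize(fg_small, (frame.shape[1], frame.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
    fg_out = np.zeros_like(fg_up)
    for sp in np.unique(labels):
        region = labels == sp
        if fg_up[region].mean() > 127:        # simple majority vote
            fg_out[region] = 255
    return fg_out


if __name__ == "__main__":
    cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
    ok, frame = cap.read()
    while ok:
        mask = segment_frame(frame)
        ok, frame = cap.read()
```

Note that the uniform cv2.resize calls above are placeholders: in the paper, the downsampling and upsampling steps are content-adaptive layers built from the homogeneous and boundary superpixel sets, which is what allows boundary detail to be preserved while most of the computation runs at low resolution.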