LVE-S2D: Low-Light Video Enhancement From Static to Dynamic

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 12, pp. 8342-8352
Main Authors: Peng, Bo; Zhang, Xuanyu; Lei, Jianjun; Zhang, Zhe; Ling, Nam; Huang, Qingming
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2022

Summary: Recently, deep-learning-based low-light video enhancement methods have drawn wide attention and achieved remarkable performance. However, because dynamic low-light and well-lighted video pairs are difficult to collect in real scenes, it remains challenging to construct video sequences for supervised learning and to design a low-light enhancement network for real dynamic video. In this paper, we propose a simple yet effective low-light video enhancement method (LVE-S2D), which generates dynamic video training pairs from static videos and enhances low-light video by mining dynamic temporal information. To obtain low-light and well-lighted video pairs, a sliding window-based dynamic video generation mechanism is designed to produce pseudo videos with rich dynamic temporal information. Then, a Siamese dynamic low-light video enhancement network is presented, which effectively utilizes the temporal correlation between adjacent frames to enhance the video frames. Extensive experimental results demonstrate that the proposed method not only achieves superior performance on static low-light videos, but also outperforms the state-of-the-art methods on real dynamic low-light videos.
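The summary describes the sliding window-based dynamic video generation mechanism only at a high level. The sketch below is a minimal illustration of the general idea, assuming the pseudo motion comes from translating a crop window across an aligned static frame; the function name, window trajectory, and parameters are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

def sliding_window_pseudo_video(static_frame: np.ndarray,
                                num_frames: int = 8,
                                crop_size: tuple = (256, 256),
                                step: tuple = (4, 4)) -> np.ndarray:
    """Turn one static frame into a pseudo dynamic clip by sliding a crop
    window across it, mimicking camera motion between frames (illustrative
    sketch only; parameters are assumptions).

    static_frame: H x W x C array taken from a static video.
    Returns: array of shape (num_frames, crop_h, crop_w, C).
    """
    h, w = static_frame.shape[:2]
    ch, cw = crop_size
    dy, dx = step
    assert h >= ch and w >= cw, "crop window must fit inside the frame"

    frames = []
    y, x = 0, 0
    for _ in range(num_frames):
        frames.append(static_frame[y:y + ch, x:x + cw])
        # Advance the window along a fixed trajectory; clamp at the border
        # so every crop stays the same size.
        y = min(y + dy, h - ch)
        x = min(x + dx, w - cw)
    return np.stack(frames)
```

Running the same routine with identical arguments on a static low-light frame and its well-lighted counterpart would yield a spatially registered pseudo dynamic pair, which is the kind of paired supervision the Siamese enhancement network described above could be trained on.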
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2022.3190916