LVE-S2D: Low-Light Video Enhancement From Static to Dynamic

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 12, pp. 8342-8352
Main Authors Peng, Bo, Zhang, Xuanyu, Lei, Jianjun, Zhang, Zhe, Ling, Nam, Huang, Qingming
Format Journal Article
Language English
Published New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2022

Abstract Recently, deep-learning-based low-light video enhancement methods have drawn wide attention and achieved remarkable performance. However, because dynamic low-light and well-lighted video pairs are difficult to collect in real scenes, constructing video sequences for supervised learning and designing a low-light enhancement network for real dynamic video remain challenging. In this paper, we propose a simple yet effective low-light video enhancement method (LVE-S2D), which generates dynamic video training pairs from static videos and enhances low-light video by mining dynamic temporal information. To obtain low-light and well-lighted video pairs, a sliding window-based dynamic video generation mechanism is designed to produce pseudo videos with rich dynamic temporal information. Then, a Siamese dynamic low-light video enhancement network is presented, which effectively exploits the temporal correlation between adjacent frames to enhance the video frames. Extensive experimental results demonstrate that the proposed method not only achieves superior performance on static low-light videos but also outperforms state-of-the-art methods on real dynamic low-light videos.
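The abstract only summarizes the pseudo-video mechanism. A minimal illustrative sketch, assuming the sliding window translates a crop across each static frame to simulate camera motion (the function name, parameters, and the crop-translation interpretation are assumptions, not the authors' code):

```python
import numpy as np

def generate_pseudo_video(frame, window=(64, 64), stride=8, num_frames=5):
    """Simulate a dynamic clip from one static frame: slide a crop window
    horizontally in fixed steps; each shifted crop becomes one pseudo frame.
    Illustrative guess at the sliding-window mechanism described above."""
    h, w = frame.shape[:2]
    wh, ww = window
    assert wh <= h and ww + stride * (num_frames - 1) <= w, "frame too small"
    top = (h - wh) // 2  # keep the vertical position fixed
    return [frame[top:top + wh, k * stride:k * stride + ww]
            for k in range(num_frames)]

# Applying the same window trajectory to a static low-light/well-lighted
# pair yields spatially aligned dynamic training pairs.
low = np.random.rand(128, 128, 3) * 0.1   # stand-in low-light frame
ref = np.clip(low * 8.0, 0.0, 1.0)        # stand-in well-lighted frame
low_clip = generate_pseudo_video(low)
ref_clip = generate_pseudo_video(ref)
print(len(low_clip), low_clip[0].shape)   # → 5 (64, 64, 3)
```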
Author_xml – sequence: 1
  givenname: Bo
  orcidid: 0000-0002-6616-453X
  surname: Peng
  fullname: Peng, Bo
  email: bpeng@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 2
  givenname: Xuanyu
  surname: Zhang
  fullname: Zhang, Xuanyu
  email: jstxzxy@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 3
  givenname: Jianjun
  orcidid: 0000-0003-3171-7680
  surname: Lei
  fullname: Lei, Jianjun
  email: jjlei@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 4
  givenname: Zhe
  orcidid: 0000-0002-8772-2107
  surname: Zhang
  fullname: Zhang, Zhe
  email: zz300@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 5
  givenname: Nam
  orcidid: 0000-0002-5741-7937
  surname: Ling
  fullname: Ling, Nam
  email: nling@scu.edu.cn
  organization: Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA, USA
– sequence: 6
  givenname: Qingming
  orcidid: 0000-0001-7542-296X
  surname: Huang
  fullname: Huang, Qingming
  email: qmhuang@ucas.ac.cn
  organization: School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China
CODEN ITCTEM
CitedBy_id crossref_primary_10_1109_TCSVT_2022_3220412
crossref_primary_10_1109_TIP_2022_3203213
crossref_primary_10_1109_TIM_2023_3271762
crossref_primary_10_1109_TMM_2023_3260620
crossref_primary_10_1145_3596445
crossref_primary_10_1109_TCSVT_2023_3294521
crossref_primary_10_1080_2150704X_2022_2132122
crossref_primary_10_1109_TCSVT_2023_3238580
crossref_primary_10_1109_TCSVT_2023_3296583
crossref_primary_10_1007_s11760_024_03439_z
crossref_primary_10_3390_electronics13050982
crossref_primary_10_1109_ACCESS_2023_3318745
crossref_primary_10_1109_TCSVT_2023_3299232
crossref_primary_10_1109_TCSVT_2023_3312213
crossref_primary_10_1109_TII_2022_3210589
crossref_primary_10_1016_j_displa_2023_102614
crossref_primary_10_1145_3587467
crossref_primary_10_1016_j_patcog_2024_111180
crossref_primary_10_1109_TETCI_2023_3272003
crossref_primary_10_3390_math12081228
crossref_primary_10_1007_s11263_024_02292_4
crossref_primary_10_1016_j_eswa_2024_125803
crossref_primary_10_1016_j_neucom_2024_128909
crossref_primary_10_1109_TETCI_2024_3369858
crossref_primary_10_3390_rs17071165
crossref_primary_10_3390_app12157384
crossref_primary_10_1007_s11042_024_19087_x
crossref_primary_10_1109_TCSVT_2024_3465875
crossref_primary_10_1109_TCSVT_2022_3213515
crossref_primary_10_1109_TBC_2024_3484269
crossref_primary_10_1109_TCSII_2023_3259689
crossref_primary_10_3390_app14062271
crossref_primary_10_3390_electronics13224372
crossref_primary_10_1109_TII_2022_3227722
crossref_primary_10_1117_1_JEI_33_4_043009
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TCSVT.2022.3190916
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1558-2205
EndPage 8352
ExternalDocumentID 10_1109_TCSVT_2022_3190916
9829839
Genre orig-research
GrantInformation_xml – fundername: Natural Science Foundation of Tianjin
  grantid: 18JCJQJC45800
  funderid: 10.13039/501100006606
– fundername: National Natural Science Foundation of China
  grantid: 62125110; 62101379; 61931014; U21B2038; 61931008
  funderid: 10.13039/501100001809
– fundername: National Key Research and Development Program of China
  grantid: 2018YFE0203900
  funderid: 10.13039/501100012166
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 12
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-8772-2107
0000-0002-6616-453X
0000-0003-3171-7680
0000-0001-7542-296X
0000-0002-5741-7937
PageCount 11
PublicationCentury 2000
PublicationDate 2022-12-01
PublicationDecade 2020
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref12
ref14
ref11
ref10
li (ref8) 2021
wei (ref19) 2018
ref17
ref16
ref18
simonyan (ref44) 2015
lv (ref36) 2018
ref48
ref42
ref41
ronneberger (ref45) 2015
ref43
ref49
ref7
ref9
ref4
ref3
ref6
ref5
ref40
ref35
ref34
ref37
ref31
ref30
ref33
ref2
ref1
ref39
ref38
ref24
dong (ref32) 2011
ref23
ref26
ref25
ref20
ref22
ref21
ref28
ref27
ref29
ying (ref15) 2017
kingma (ref46) 2015
abadi (ref47) 2016
References_xml – year: 2017
  ident: ref15
  article-title: A bio-inspired multi-exposure fusion framework for low-light image enhancement
  publication-title: arXiv:1711.00591
– ident: ref33
  doi: 10.1109/TCE.2015.7064113
– ident: ref20
  doi: 10.1109/CVPR.2018.00347
– ident: ref5
  doi: 10.1109/TIP.2020.2973499
– ident: ref11
  doi: 10.1109/TIP.2013.2284059
– ident: ref39
  doi: 10.1109/ICCV.2019.00421
– start-page: 1
  year: 2015
  ident: ref44
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: Proc 3rd Int Conf Learn Represent (ICLR)
– ident: ref3
  doi: 10.1109/TCSVT.2017.2787190
– ident: ref24
  doi: 10.1109/CVPR42600.2020.00235
– ident: ref1
  doi: 10.1109/TCSVT.2020.2981652
– ident: ref38
  doi: 10.1109/CVPR.2018.00265
– ident: ref34
  doi: 10.1109/TIP.2012.2199324
– ident: ref29
  doi: 10.1007/978-3-030-58601-0_7
– start-page: 1
  year: 2011
  ident: ref32
  article-title: Fast efficient algorithm for enhancement of low lighting video
  publication-title: Proc ICME
– ident: ref30
  doi: 10.1109/CCDC.2016.7531629
– ident: ref6
  doi: 10.1109/InCIT50588.2020.9310971
– ident: ref49
  doi: 10.1109/TIP.2003.819861
– ident: ref18
  doi: 10.1016/j.patcog.2016.06.008
– ident: ref26
  doi: 10.1109/TCSVT.2020.3009235
– ident: ref31
  doi: 10.1109/SPAC.2014.6982691
– ident: ref17
  doi: 10.1109/TMM.2021.3054509
– ident: ref43
  doi: 10.1109/TCSVT.2020.3037068
– year: 2021
  ident: ref8
  article-title: Low-light image and video enhancement using deep learning: A survey
  publication-title: IEEE Trans Pattern Anal Mach Intell
– ident: ref41
  doi: 10.1109/ICCV48922.2021.00956
– ident: ref4
  doi: 10.1016/j.neucom.2020.05.123
– ident: ref9
  doi: 10.1109/TIP.2009.2021548
– ident: ref12
  doi: 10.1109/ISPACS.2013.6704591
– ident: ref13
  doi: 10.1109/83.597272
– ident: ref42
  doi: 10.1007/978-3-030-01267-0_11
– ident: ref35
  doi: 10.1109/TIE.2017.2682034
– ident: ref37
  doi: 10.1109/CVPR42600.2020.00237
– ident: ref27
  doi: 10.1109/ICCV.2019.00742
– ident: ref22
  doi: 10.1109/TCSVT.2021.3073371
– ident: ref21
  doi: 10.1109/TIP.2021.3051462
– start-page: 250
  year: 2018
  ident: ref19
  article-title: Deep Retinex decomposition for low-light enhancement
  publication-title: Proc BMVC
– ident: ref25
  doi: 10.1109/CVPR42600.2020.00313
– start-page: 265
  year: 2016
  ident: ref47
  article-title: Tensorflow: A system for large-scale machine learning
  publication-title: Proc OSDI
– ident: ref14
  doi: 10.1109/TIP.2016.2639450
– ident: ref7
  doi: 10.1109/ICCVW.2019.00293
– start-page: 234
  year: 2015
  ident: ref45
  article-title: U-Net: Convolutional networks for biomedical image segmentation
  publication-title: Proc. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI)
– ident: ref23
  doi: 10.1109/CVPR42600.2020.00185
– start-page: 1
  year: 2015
  ident: ref46
  article-title: Adam: A method for stochastic optimization
  publication-title: Proc ICLR
– ident: ref48
  doi: 10.1007/s11263-018-01144-2
– ident: ref2
  doi: 10.1109/TCYB.2018.2831447
– ident: ref16
  doi: 10.1109/TCSVT.2020.3027616
– ident: ref28
  doi: 10.1109/ICCV.2019.00328
– ident: ref10
  doi: 10.1038/scientificamerican1277-108
– ident: ref40
  doi: 10.1109/CVPR46437.2021.00493
– start-page: 220
  year: 2018
  ident: ref36
  article-title: MBLLEN: Low-light image/video enhancement using CNNs
  publication-title: Proc BMVC
StartPage 8342
SubjectTerms Correlation
Deep learning
Frames (data processing)
Histograms
Image enhancement
Light
Lighting
Low-light video enhancement
sliding window
Task analysis
temporal correlation
Training
Video
Video sequences
Title LVE-S2D: Low-Light Video Enhancement From Static to Dynamic
URI https://ieeexplore.ieee.org/document/9829839
https://www.proquest.com/docview/2747611497
Volume 32