RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 3, pp. 1076-1088 |
---|---|
Main Authors | Zhao, Zunjin; Xiong, Bangshu; Wang, Lei; Ou, Qiaofeng; Yu, Lei; Kuang, Fa |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2022 |
Abstract | Low-light images suffer from low contrast and unclear details, which not only reduces the available information for humans but also limits the application of computer vision algorithms. Among the existing enhancement techniques, Retinex-based and learning-based methods are under the spotlight today. In this paper, we bridge the gap between the two methods. First, we propose a novel "generative" strategy for Retinex decomposition, by which the decomposition is cast as a generative problem. Second, based on the strategy, a unified deep framework is proposed to estimate the latent components and perform low-light image enhancement. Third, our method can weaken the coupling relationship between the two components while performing Retinex decomposition. Finally, RetinexDIP performs Retinex decomposition without any external images, and the estimated illumination can be easily adjusted and is used to perform enhancement. The proposed method is compared with ten state-of-the-art algorithms on seven public datasets, and the experimental results demonstrate the superiority of our method. Code is available at: https://github.com/zhaozunjin/RetinexDIP. |
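The abstract builds on the Retinex model, in which an observed image S is decomposed into reflectance R and illumination L (S = R ∘ L, element-wise), and the enhanced result is obtained by adjusting the estimated illumination before recomposition. The sketch below illustrates only that adjustment-and-recomposition step under common assumptions (element-wise Retinex model, gamma correction of the illumination); the function name, gamma value, and NumPy implementation are illustrative and are not taken from the RetinexDIP code.

```python
import numpy as np

def retinex_enhance(image, illumination, gamma=0.4, eps=1e-4):
    """Illustrative Retinex-style enhancement (not the RetinexDIP implementation).

    image:        H x W x 3 array in [0, 1], the low-light observation S.
    illumination: H x W array in [0, 1], an estimated illumination map L.
    gamma:        exponent used to brighten the illumination (assumed value).
    """
    L = np.clip(illumination, eps, 1.0)[..., None]  # avoid division by zero
    R = image / L                                   # reflectance estimate: R = S / L
    L_adj = L ** gamma                              # brighten the illumination
    return np.clip(R * L_adj, 0.0, 1.0)             # recompose: S' = R * L^gamma


# Example usage with synthetic stand-ins for a real image and illumination map:
if __name__ == "__main__":
    s = np.random.rand(64, 64, 3) * 0.2             # a dark synthetic "image"
    l = np.random.rand(64, 64) * 0.2 + 0.05         # a dim synthetic illumination map
    out = retinex_enhance(s, l)
    print(out.shape, out.min(), out.max())
```

In the paper itself the illumination and reflectance are produced by the generative decomposition networks rather than supplied directly, so this snippet only mirrors the enhancement stage described in the abstract.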
Author | Zunjin Zhao (ORCID 0000-0003-3224-650X, zhaotitus111@gmail.com); Bangshu Xiong (ORCID 0000-0002-7652-5589, xiongbs@126.com); Lei Wang (ORCID 0000-0001-7266-148X); Qiaofeng Ou; Lei Yu; Fa Kuang. All authors are with the School of Information Engineering, Nanchang Hangkong University, Nanchang, China. |
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3073371 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 1088 |
Genre | orig-research |
GrantInformation | Innovation Fund Designated for Graduate of Nanchang Hangkong University (grant YC2019025, funder ID 10.13039/100009554); NSFC of China (grant 61866027, funder ID 10.13039/501100001809); Key Research and Development Program of Jiangxi Province of China (grant 20171BBE50013, funder ID 10.13039/501100013064) |
ISSN | 1051-8215 |
Issue | 3 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-7266-148X 0000-0003-3224-650X 0000-0002-7652-5589 |
PageCount | 13 |
PublicationDate | 2022-03-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1076 |
SubjectTerms | Algorithms; Cameras; Computer vision; Couplings; Decomposition; Deep prior; Electronics packaging; Histograms; Image contrast; Image enhancement; Lighting; Low-light image enhancement; Machine learning; Retinex decomposition; Task analysis; Zero-reference |
Title | RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement |
URI | https://ieeexplore.ieee.org/document/9405649 https://www.proquest.com/docview/2637440253 |
Volume | 32 |