RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 3, pp. 1076-1088
Main Authors: Zhao, Zunjin; Xiong, Bangshu; Wang, Lei; Ou, Qiaofeng; Yu, Lei; Kuang, Fa
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2022

Summary: Low-light images suffer from low contrast and unclear details, which not only reduces the information available to human observers but also limits the application of computer vision algorithms. Among existing enhancement techniques, Retinex-based and learning-based methods are under the spotlight today. In this paper, we bridge the gap between the two approaches. First, we propose a novel "generative" strategy for Retinex decomposition, by which the decomposition is cast as a generative problem. Second, based on this strategy, a unified deep framework is proposed to estimate the latent components and perform low-light image enhancement. Third, our method weakens the coupling between the two components while performing Retinex decomposition. Finally, RetinexDIP performs Retinex decomposition without any external images, and the estimated illumination can be easily adjusted to perform enhancement. The proposed method is compared with ten state-of-the-art algorithms on seven public datasets, and the experimental results demonstrate the superiority of our method. Code is available at: https://github.com/zhaozunjin/RetinexDIP
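
The summary describes casting Retinex decomposition as a per-image generative optimization and then adjusting the estimated illumination to brighten the image. The sketch below illustrates that general idea with a Deep-Image-Prior-style setup: two small networks map fixed noise to reflectance and illumination, optimized so their product reconstructs the low-light input. The network architecture, loss weights, iteration count, and gamma adjustment here are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of Retinex decomposition as a "generative" per-image problem
# (Deep-Image-Prior style), followed by enhancement via illumination adjustment.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(out_channels):
    # Tiny generator: fixed noise in, component map in [0, 1] out.
    return nn.Sequential(
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_channels, 3, padding=1), nn.Sigmoid(),
    )

def retinex_dip_enhance(low_light, steps=500, gamma=0.4, lr=1e-3):
    """low_light: (1, 3, H, W) tensor in [0, 1]. Returns an enhanced image."""
    _, _, h, w = low_light.shape
    z_r = torch.randn(1, 8, h, w)   # fixed noise inputs; no external training data
    z_l = torch.randn(1, 8, h, w)
    net_r, net_l = small_cnn(3), small_cnn(1)
    opt = torch.optim.Adam(list(net_r.parameters()) + list(net_l.parameters()), lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        r = net_r(z_r)              # reflectance (3 channels)
        l = net_l(z_l)              # illumination (1 channel)
        recon = r * l               # Retinex model: I = R * L
        loss = F.mse_loss(recon, low_light)
        # Illustrative total-variation prior to keep illumination piecewise smooth.
        tv = (l[..., 1:, :] - l[..., :-1, :]).abs().mean() + \
             (l[..., :, 1:] - l[..., :, :-1]).abs().mean()
        (loss + 0.1 * tv).backward()
        opt.step()

    with torch.no_grad():
        r = net_r(z_r)
        l = net_l(z_l)
        # Brighten by gamma-adjusting the estimated illumination, then recompose.
        enhanced = r * l.clamp(min=1e-4).pow(gamma)
    return enhanced.clamp(0, 1)
```

Because the optimization is run per image on noise inputs alone, no external images are needed; enhancement strength can be tuned simply by changing the gamma applied to the estimated illumination.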
ISSN: 1051-8215
eISSN: 1558-2205
DOI: 10.1109/TCSVT.2021.3073371