DICNet: achieve low-light image enhancement with image decomposition, illumination enhancement, and color restoration

Bibliographic Details
Published in: The Visual Computer, Vol. 40, No. 10, pp. 6779–6795
Main Authors: Pan, Heng; Gao, Bingkun; Wang, Xiufang; Jiang, Chunlei; Chen, Peng
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.10.2024

Summary: Low-light image enhancement (LLIE) aims to restore image degradation caused by environmental noise, poor lighting, and other factors. Despite much prior work on combating environmental interference, LLIE still faces several limitations, such as residual noise, unnatural color recovery, and severe loss of detail. To overcome these limitations, we propose DICNet, a network based on Retinex theory. DICNet consists of three components: image decomposition, illumination enhancement, and color restoration. To avoid the influence of noise during enhancement, we use feature maps obtained by denoising the image's high-frequency components to guide image decomposition and suppress noise interference. For illumination enhancement, we propose a feature separation method that accounts for different lighting intensities while preserving details. In addition, to address the insufficient fusion of high- and low-level features in the U-Net used for color restoration, we design a Feature Cross-Fusion Module and propose a feature fusion connection plug-in to ensure natural and realistic color restoration. Extensive experiments on publicly available datasets show that our method outperforms existing state-of-the-art methods in both quantitative performance and visual quality.
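To make the Retinex framing concrete, the following minimal PyTorch sketch shows how a Retinex-style pipeline of the kind the abstract describes could be composed: decompose the image as I = R * L (reflectance times illumination), enhance the illumination map, then recompose. All module names, layer widths, and the omission of the denoising guidance and the color-restoration U-Net are illustrative assumptions, not the authors' implementation.

    # Rough sketch of a Retinex-style decompose -> enhance -> recompose
    # pipeline (hypothetical modules, not the paper's code).
    import torch
    import torch.nn as nn

    class DecompNet(nn.Module):
        # Splits an RGB image into reflectance (3 ch) and illumination
        # (1 ch), following the Retinex model I = R * L.
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 4, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, img):
            out = self.body(img)
            return out[:, :3], out[:, 3:]  # reflectance, illumination

    class EnhanceNet(nn.Module):
        # Brightens the illumination map; stands in for the paper's
        # illumination-enhancement stage.
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, illum):
            return self.body(illum)

    def enhance(img):
        # Recompose per Retinex: enhanced = reflectance * enhanced illumination.
        # (A color-restoration stage would follow here in the full pipeline.)
        decomp, enh = DecompNet(), EnhanceNet()
        r, l = decomp(img)
        return r * enh(l)

    if __name__ == "__main__":
        x = torch.rand(1, 3, 64, 64)   # fake low-light RGB batch
        print(enhance(x).shape)        # torch.Size([1, 3, 64, 64])

The key design point this illustrates is that enhancement operates only on the illumination map while the reflectance (scene content) is carried through unchanged, which is what lets Retinex-based methods brighten an image without amplifying content distortion.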
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-024-03262-0