Guided Filter-Inspired Network for Low-Light RAW Image Enhancement

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 25, No. 9, p. 2637
Main Authors: Liu, Xinyi; Zhao, Qian
Format: Journal Article
Language: English
Published: Switzerland, MDPI AG, 22.04.2025

Summary: Low-light RAW image enhancement (LRIE) has attracted increased attention in recent years due to the demand for practical applications. Various deep learning-based methods have been proposed for dealing with this task, among which the fusion-based ones achieve state-of-the-art performance. However, current fusion-based methods do not sufficiently explore the physical correlations between source images and thus fail to fully exploit the complementary information delivered by different sources. To alleviate this issue, we propose a Guided Filter-inspired Network (GFNet) for the LRIE task. The proposed GFNet is designed to fuse sources in a guided filter (GF)-like manner, with the coefficients inferred by the network, within both the image and feature domains. Inheriting the advantages of GF, the proposed method is able to capture more intrinsic correlations between source images and thus better fuse the contextual and textural information extracted from them, facilitating better detail preservation and noise reduction for LRIE. Experiments on benchmark LRIE datasets demonstrate the superiority of the proposed method. Furthermore, the extended applications of GFNet to guided low-light image enhancement tasks indicate its broad applicability.
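
To make the GF-style fusion idea concrete, below is a minimal PyTorch sketch, not the authors' actual GFNet: the module name GFStyleFusion, the channel sizes, and the coefficient-predictor layout are illustrative assumptions. It shows a fusion block in which a small CNN infers per-pixel linear coefficients A and B from the two sources and combines them as A * guide + B, following the classical guided filter's local linear model.

```python
# Hedged sketch (PyTorch), NOT the paper's exact architecture: a guided
# filter-style fusion layer whose linear coefficients are predicted by a
# small network rather than computed from local statistics.
import torch
import torch.nn as nn


class GFStyleFusion(nn.Module):
    """Hypothetical GF-inspired fusion block with network-inferred coefficients."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Coefficient predictor: consumes the concatenated guide/target
        # features and outputs the per-pixel coefficients A and B.
        self.coef_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, kernel_size=3, padding=1),
        )

    def forward(self, guide: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Infer the linear coefficients from both sources.
        a, b = self.coef_net(torch.cat([guide, target], dim=1)).chunk(2, dim=1)
        # Guided filter-like linear model: output = A * guide + B.
        return a * guide + b


if __name__ == "__main__":
    fusion = GFStyleFusion(channels=32)
    guide = torch.randn(1, 32, 64, 64)   # e.g., contextual-branch features
    target = torch.randn(1, 32, 64, 64)  # e.g., textural/detail-branch features
    fused = fusion(guide, target)
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

The same linear form can be applied in the image domain (on the RAW/RGB tensors themselves) or in the feature domain, which is how the abstract describes GFNet operating; the sketch above only illustrates the feature-domain case.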
ISSN: 1424-8220
DOI: 10.3390/s25092637