Learning an adaptive model for extreme low-light raw image processing

Bibliographic Details
Published in: IET Image Processing, Vol. 14, No. 14, pp. 3433-3443
Main Authors: Fu, Qingxu; Di, Xiaoguang; Zhang, Yu
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology, 01.12.2020
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/iet-ipr.2020.0100

Summary: Low-light images suffer from severe noise and low illumination. In this work, the authors propose an adaptive low-light raw image enhancement network that avoids the parameter handcrafting of current deep learning models and improves image quality. The proposed method consists of two sub-models: brightness prediction and exposure shifting (ES). The former controls the brightness of the resulting image by estimating a guideline exposure time $t_1$. The latter learns to approximate an exposure-shifting operator ES, converting a low-light image with real exposure time $t_0$ into a noise-free image with guideline exposure time $t_1$. Additionally, a structural similarity loss and an image enhancement vector are introduced to promote image quality, and a new campus image dataset (CID) is proposed for training the model, overcoming the limitations of existing datasets. In quantitative tests, the proposed method achieves the lowest noise level estimation score among state-of-the-art low-light algorithms, suggesting superior denoising performance. The tests also show that the method adaptively controls global image brightness according to the content of the scene. Lastly, a potential application to video processing is briefly discussed.
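The abstract outlines a two-stage pipeline: a brightness prediction sub-model estimates a guideline exposure time $t_1$ from the low-light input, and the exposure shifting sub-model maps the raw image captured at $t_0$ to a clean image at the target brightness. The following is a minimal sketch of that structure, assuming PyTorch; the module definitions, layer sizes, and the exposure-ratio conditioning (feeding $t_1/t_0$ as an extra channel) are illustrative assumptions, not the authors' actual architecture.

    # Minimal sketch of the two-stage pipeline described in the abstract.
    # All module names, layer sizes, and the ratio conditioning are
    # illustrative guesses, not taken from the paper.
    import torch
    import torch.nn as nn

    class BrightnessPrediction(nn.Module):
        """Estimates a guideline exposure time t1 from a low-light raw input."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, raw):
            f = self.features(raw).flatten(1)
            # Softplus keeps the predicted exposure time positive.
            return nn.functional.softplus(self.head(f))

    class ExposureShifting(nn.Module):
        """Approximates the ES operator: (raw at t0, ratio t1/t0) -> clean image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 4, 3, padding=1),
            )

        def forward(self, raw, t0, t1):
            # Condition on the exposure ratio by appending it as an extra channel.
            ratio = (t1 / t0).view(-1, 1, 1, 1).expand(-1, 1, *raw.shape[2:])
            return self.net(torch.cat([raw, ratio], dim=1))

    bp, es = BrightnessPrediction(), ExposureShifting()
    raw = torch.rand(1, 4, 256, 256)   # packed Bayer raw frame at exposure t0
    t0 = torch.tensor([0.033])         # real exposure time (seconds)
    t1 = bp(raw)                       # predicted guideline exposure time
    enhanced = es(raw, t0, t1)         # denoised image at the target brightness

Training in the paper additionally involves the structural similarity loss and the image enhancement vector mentioned in the summary; those components are omitted from this sketch.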