Unsupervised rapid lowlight enhancement via deep curve and statistic loss
Published in | Engineering Applications of Artificial Intelligence, Vol. 152, p. 110841
---|---
Main Authors |
Format | Journal Article
Language | English
Published | Elsevier Ltd, 15.07.2025
ISSN | 0952-1976
DOI | 10.1016/j.engappai.2025.110841
Summary:

Lowlight images suffer from poor illumination and noise because of the limited information captured by small sensors such as those in smartphone cameras. Supervised approaches to lowlight image enhancement have shown promise, but they require paired image datasets, which are often expensive and difficult to obtain, limiting their practical applicability. Previous unsupervised approaches have attempted to address these challenges but often fall short in quality or speed.
To overcome these limitations, we present an unsupervised network specifically designed for lowlight image enhancement. Our method employs diverse strategies within its loss functions to guide the model toward normally lit images that look natural to the human eye. The combination of rapid processing, a lightweight model, and good-quality outputs trained on unpaired data makes the approach well suited to real-world applications such as consumer electronics and other engineering settings.
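The abstract does not spell out the architecture or losses, but the title's "deep curve" and "statistic loss" suggest a Zero-DCE-style formulation: a small network predicts per-pixel tone-curve coefficients, and losses constrain the output's statistics rather than comparing against a ground-truth image, which is what permits training on unpaired data. Purely as an illustrative sketch (all names, layer sizes, and the exposure target are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurveEnhancer(nn.Module):
    """Hypothetical curve-estimation network in the spirit of the title's
    "deep curve": a tiny CNN predicts per-pixel coefficients alpha and
    applies the quadratic tone curve LE(x) = x + alpha * x * (1 - x)
    for several iterations (as popularized by Zero-DCE). Layer sizes are
    assumptions chosen to stay near the abstract's ~10k-parameter budget,
    not the paper's actual architecture."""

    def __init__(self, channels=16, iterations=8):
        super().__init__()
        self.iterations = iterations
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * iterations, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # One 3-channel alpha map per iteration; Tanh keeps alpha in [-1, 1],
        # so the curve maps [0, 1] inputs back into [0, 1].
        alphas = self.net(x).chunk(self.iterations, dim=1)
        for a in alphas:
            x = x + a * x * (1 - x)
        return x

def exposure_statistic_loss(enhanced, target_mean=0.6, patch=16):
    """One plausible reading of a "statistic loss": push the mean intensity
    of local patches toward a well-exposed level, which requires no paired
    reference image. target_mean and patch are hypothetical settings."""
    luminance = enhanced.mean(dim=1, keepdim=True)
    pooled = F.avg_pool2d(luminance, patch)
    return ((pooled - target_mean) ** 2).mean()
```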
Furthermore, to address the common issue of noise amplification in enhanced images, we incorporate a denoising model, also trained on unpaired data, that effectively removes noise. Quantitative comparisons show that our approach achieves superior overall scores while keeping the number of trainable parameters low, at around 10k. The model processes a 512 × 512 color image in just 43 ms, highlighting its efficiency. On the LOLv2-real (LOw-Light real-world version 2) dataset, it achieves a PSNR (Peak Signal-to-Noise Ratio) of 20.23 dB, 1.27 dB higher than the second-best method, an LPIPS (Learned Perceptual Image Patch Similarity) of 0.168, and an SSIM (Structural SIMilarity) of 0.77, demonstrating its effectiveness.
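For context on the reported numbers, PSNR is a direct function of the mean squared error against the reference image; a minimal sketch follows (the function name is hypothetical). SSIM and LPIPS are usually taken from off-the-shelf implementations such as the torchmetrics or lpips packages.

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]:
    PSNR = 10 * log10(max_val**2 / MSE). Higher is better; the reported
    20.23 dB on LOLv2-real corresponds to an MSE of roughly 0.0095."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```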