LLNet: A deep autoencoder approach to natural low-light image enhancement
Published in: Pattern Recognition, Vol. 61, pp. 650–662
Main Authors: Kin Gwn Lore, Adedotun Akintayo, Soumik Sarkar
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2017
Summary: In surveillance, monitoring and tactical reconnaissance, gathering visual information from a dynamic environment and accurately processing such data are essential to making informed decisions and ensuring the success of a mission. Camera sensors are often cost-limited, so they cannot capture clear images or videos in poorly lit environments. Many applications aim to enhance brightness and contrast and to reduce noise in such images in an on-board, real-time manner. We propose a deep autoencoder-based approach that identifies signal features in low-light images and adaptively brightens them without over-amplifying or saturating the lighter parts of images with a high dynamic range. We show that a variant of the stacked sparse denoising autoencoder can learn from synthetically darkened and noise-added training examples to adaptively enhance images taken in natural low-light environments and/or degraded by hardware. Results demonstrate the credibility of the approach both visually and through quantitative comparison with various techniques.
Highlights:
• Novel application of a stacked sparse denoising autoencoder to enhance low-light images.
• Simultaneous learning of contrast enhancement and denoising (LLNet).
• Sequential learning of contrast enhancement and denoising (staged LLNet).
• Synthetically trained model evaluated on natural low-light images.
• Learned features visualized to gain insights about the model.
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2016.06.008
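The summary describes the core training idea: corrupt clean image patches with synthetic darkening and additive noise, then train a stacked sparse denoising autoencoder to map the corrupted patches back to their clean versions. The sketch below illustrates only that corrupt-then-reconstruct idea; it is not the authors' implementation, and the patch size, gamma range, noise level, and layer widths are assumptions rather than values taken from the paper.

```python
# Minimal sketch (not the authors' code) of the training idea described in the
# summary: synthetically darken and noise-corrupt image patches, then train a
# denoising autoencoder to reconstruct the clean patches.
# Patch size, gamma range, noise sigma, and layer widths are assumptions.
import numpy as np
import torch
import torch.nn as nn

PATCH = 17                      # 17x17 grayscale patches (assumed size)
DIM = PATCH * PATCH

def corrupt(clean, rng):
    """Gamma-darken and add Gaussian noise, mimicking low-light degradation."""
    gamma = rng.uniform(2.0, 5.0)                       # gamma > 1 darkens (assumed range)
    dark = np.clip(clean, 0.0, 1.0) ** gamma
    noisy = dark + rng.normal(0.0, 0.05, dark.shape)    # assumed noise level
    return np.clip(noisy, 0.0, 1.0)

class DenoisingAE(nn.Module):
    """Small fully connected autoencoder standing in for the stacked
    sparse denoising autoencoder (SSDA) variant used in the paper."""
    def __init__(self, dim=DIM, hidden=400):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, hidden // 2), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden // 2, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_demo(steps=200, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    model = DenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        # Stand-in "clean" patches; in practice these come from real images.
        clean = rng.random((batch, DIM)).astype(np.float32)
        corrupted = np.stack([corrupt(p, rng) for p in clean]).astype(np.float32)
        x, y = torch.from_numpy(corrupted), torch.from_numpy(clean)
        opt.zero_grad()
        loss = loss_fn(model(x), y)     # reconstruct clean patch from degraded input
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    train_demo()
    print("trained a toy low-light denoising autoencoder")
```

Per the highlights, the LLNet variant learns contrast enhancement and denoising simultaneously (with a staged variant learning them sequentially), using a deeper stacked architecture with a sparsity penalty on hidden activations; the toy model above keeps only the corrupt-then-reconstruct structure.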