Deep Perceptual Image Enhancement Network for Exposure Restoration
Published in: IEEE Transactions on Cybernetics, Vol. 53, No. 7, pp. 4718-4731
Main Authors:
Format: Journal Article
Language: English
Published: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), United States, 01.07.2023
Summary: Image restoration techniques process degraded images to highlight obscured details and enhance the scene with good contrast and vivid color for the best possible visibility. Poor illumination conditions cause issues such as high-level noise, unnatural color or texture distortions, nonuniform exposure, halo artifacts, and lack of sharpness. This article presents a novel end-to-end trainable deep convolutional neural network, the deep perceptual image enhancement network (DPIENet), to address these challenges. The novel contributions of the proposed work are: 1) a framework that synthesizes multiple exposures from a single image and uses the exposure variation to restore the image and 2) a loss function based on an approximation of the logarithmic response of the human eye. Extensive computer simulations on the benchmark MIT-Adobe FiveK dataset and user studies performed on the Google high dynamic range, DIV2K, and low-light image datasets show that DPIENet has clear advantages over state-of-the-art techniques. It has the potential to be useful for many everyday applications, such as modernizing traditional camera technologies that currently capture images/videos with under- or overexposed regions due to sensor limitations, helping users capture appealing images in consumer photography, and supporting a variety of intelligent systems, including automated driving and video surveillance.
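The record describes the two contributions only at a high level; the paper's exact exposure-synthesis procedure and loss definition are not reproduced here. As a rough, non-authoritative sketch, the PyTorch snippet below shows one common way to derive several exposures from a single image by exposure-value (EV) scaling, and one way a loss built on a logarithmic (eye-like) response could look. The EV list, the epsilon constant, and the L1 comparison are assumptions for illustration, not the authors' formulations.

```python
# Illustrative sketches only, not the DPIENet authors' formulations.
# (1) synthesize_exposures: derive exposure variants from one image by
#     exposure-value (EV) scaling.
# (2) LogResponseLoss: compare images after a logarithmic mapping that
#     loosely mimics the eye's roughly logarithmic response to luminance.
import torch
import torch.nn as nn


def synthesize_exposures(img: torch.Tensor, evs=(-2.0, -1.0, 1.0, 2.0)):
    """Return exposure-shifted copies of `img` (pixel values assumed in [0, 1])."""
    # Scaling by 2**ev brightens (ev > 0) or darkens (ev < 0) the image;
    # clamping emulates sensor saturation. The EV values are assumed.
    return [torch.clamp(img * (2.0 ** ev), 0.0, 1.0) for ev in evs]


class LogResponseLoss(nn.Module):
    def __init__(self, eps: float = 1.0 / 255.0):
        super().__init__()
        self.eps = eps  # avoids log(0) for fully dark pixels (assumed value)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # The log mapping compresses bright regions and expands dark ones,
        # so errors in shadow regions contribute more to the loss.
        log_pred = torch.log(pred.clamp(min=0.0) + self.eps)
        log_target = torch.log(target.clamp(min=0.0) + self.eps)
        return torch.mean(torch.abs(log_pred - log_target))


if __name__ == "__main__":
    image = torch.rand(1, 3, 64, 64)      # stand-in input image
    exposures = synthesize_exposures(image)
    print(len(exposures), "synthetic exposures")
    loss_fn = LogResponseLoss()
    reference = torch.rand(1, 3, 64, 64)  # stand-in ground truth
    print("log-response loss:", loss_fn(image, reference).item())
```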
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2021.3140202