Underwater Image Enhancement With a Deep Residual Framework

Bibliographic Details
Published in: IEEE Access, Vol. 7, pp. 94614-94629
Main Authors: Liu, Peng; Wang, Guoyu; Qi, Hao; Zhang, Chufeng; Zheng, Haiyong; Yu, Zhibin
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019
Summary: Owing to the refraction, absorption, and scattering of light by suspended particles in water, raw underwater images exhibit low contrast, blurred details, and color distortion. These characteristics can significantly interfere with visual tasks such as segmentation and tracking. This paper proposes an underwater image enhancement solution built on a deep residual framework. First, a cycle-consistent adversarial network (CycleGAN) is employed to generate synthetic underwater images as training data for convolutional neural network models. Second, the very-deep super-resolution reconstruction model (VDSR) is introduced to underwater image resolution applications; on this basis, the Underwater Resnet model, a residual learning model for underwater image enhancement tasks, is proposed. Furthermore, the loss function and training mode are improved: a multi-term loss function is formed from a mean squared error loss and a proposed edge difference loss, and an asynchronous training mode is proposed to improve the performance of the multi-term loss function. Finally, the impact of batch normalization is discussed. According to the underwater image enhancement experiments and a comparative analysis, the color correction and detail enhancement performance of the proposed methods is superior to that of previous deep learning models and traditional methods.
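The summary describes the multi-term loss only in words. As a rough, non-authoritative illustration, the PyTorch sketch below combines a pixel-wise mean squared error term with an edge difference term computed from Sobel gradient magnitudes; the Sobel approximation, the grayscale conversion, and the edge_weight parameter are assumptions made here for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Sobel kernels used here only to approximate an "edge map"; the paper's
# exact edge difference loss may be defined differently (assumption).
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_map(img):
    """Gradient-magnitude edge map of an N x 3 x H x W image batch."""
    gray = img.mean(dim=1, keepdim=True)               # simple grayscale conversion
    gx = F.conv2d(gray, _SOBEL_X.to(gray), padding=1)  # horizontal gradients
    gy = F.conv2d(gray, _SOBEL_Y.to(gray), padding=1)  # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def multi_term_loss(enhanced, reference, edge_weight=0.1):
    """MSE on pixels plus a weighted MSE on edge maps (weight is a guess)."""
    mse_term = F.mse_loss(enhanced, reference)
    edge_term = F.mse_loss(edge_map(enhanced), edge_map(reference))
    return mse_term + edge_weight * edge_term
```

How the asynchronous training mode schedules the two terms is not specified in this record; the sketch simply sums them on every step.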
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2928976