Unpaired Underwater Image Enhancement Based on CycleGAN


Bibliographic Details
Published in: Information (Basel), Vol. 13, No. 1, p. 1
Main Authors: Du, Rong; Li, Weiwei; Chen, Shudong; Li, Congying; Zhang, Yong
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.01.2022
Summary: Underwater image enhancement recovers degraded underwater images to produce corresponding clear images. Deep-learning-based enhancement methods usually require paired training data, but such pairs, i.e., a degraded image and its corresponding clear image, are difficult to capture simultaneously in the underwater environment. A further challenge is preserving fine detail in the enhanced image. To address these issues, we propose a novel unpaired underwater image enhancement method based on a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. The proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is adopted in the CycleGAN generator, which constrains the detailed information in a degraded image to be retained in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; and (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Experiments on two unpaired underwater image datasets show that the proposed model compares favorably with state-of-the-art image enhancement methods, demonstrating its effectiveness.
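The three modules in the abstract can be read as terms of the generator's training objective: an adversarial term from the discriminator, a cycle-consistency term, and the content loss regularizer that ties the generated clear image back to the degraded input. The sketch below is an illustrative NumPy formulation only, not the authors' released code; the non-saturating adversarial form, the L1 distances, and the weights `lambda_cyc` and `lambda_content` are all assumptions for exposition.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, used here for both the cycle-consistency
    # and the content regularization terms.
    return np.mean(np.abs(a - b))

def generator_objective(x, G_x, F_G_x, d_score,
                        lambda_cyc=10.0, lambda_content=1.0):
    """Illustrative UW-CycleGAN-style generator objective (assumed form).

    x       : degraded underwater image, array of shape (H, W, C)
    G_x     : generated clear image G(x)
    F_G_x   : reconstruction F(G(x)) used for cycle consistency
    d_score : discriminator output D(G(x)) in (0, 1)
    """
    adv = -np.log(d_score + 1e-12)          # non-saturating adversarial loss
    cyc = lambda_cyc * l1(x, F_G_x)         # cycle consistency: F(G(x)) ~ x
    content = lambda_content * l1(x, G_x)   # content regularizer: keep details of x
    return adv + cyc + content
```

In this reading, the content term directly penalizes the generator for discarding structure present in the degraded input, which is how the paper's first module keeps detail in the enhanced output; the blur-promoting discriminator loss would modify the discriminator's objective rather than this generator term.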
ISSN: 2078-2489
DOI: 10.3390/info13010001