Double-Channel Guided Generative Adversarial Network for Image Colorization

Bibliographic Details
Published in: IEEE Access, Vol. 9, pp. 21604-21617
Main Authors: Du, Kangning; Liu, Changtong; Cao, Lin; Guo, Yanan; Zhang, Fan; Wang, Tao
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021

Summary: Image colorization has seen widespread application in video and image restoration over the past few years. Recently, automatic colorization methods based on deep learning have shown impressive performance. However, these methods map a grayscale input image directly to a multi-channel output. In the process, detailed information is often lost during feature extraction, resulting in abnormal colors in local areas of the colorized image. To overcome these abnormal colors and improve colorization quality, we propose a novel Double-Channel Guided Generative Adversarial Network (DCGGAN). It comprises two modules: a reference component matching module and a double-channel guided colorization module. The reference component matching module selects suitable reference color components as auxiliary information for the input. The double-channel guided colorization module learns the mapping from the grayscale image to each color channel with the assistance of the reference color components. Experimental results show that the proposed DCGGAN outperforms existing methods on different quality metrics and achieves state-of-the-art performance.
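The double-channel idea described above — predicting each chrominance channel with its own branch, each guided by a matched reference color component — can be illustrated structurally as follows. This is a minimal sketch, not the authors' network: the function names, the Lab-space layout, and the toy linear stand-in for a learned branch are all assumptions for illustration.

```python
import numpy as np

def double_channel_colorize(gray, ref_a, ref_b, branch_a, branch_b):
    """Structural sketch of double-channel guided colorization in Lab space:
    each chrominance channel (a, b) is predicted by its OWN mapping, guided
    by its reference component. `branch_a`/`branch_b` stand in for the two
    learned generator branches of the colorization module."""
    a = branch_a(gray, ref_a)  # predict channel a from (grayscale, reference a)
    b = branch_b(gray, ref_b)  # predict channel b from (grayscale, reference b)
    return np.stack([gray, a, b], axis=-1)  # assemble the L, a, b planes

def toy_branch(gray, ref):
    # Toy stand-in for a learned branch: blend color statistics from the
    # reference with structure from the grayscale (a real branch is a CNN).
    return 0.7 * ref + 0.3 * (gray - gray.mean())

gray = np.random.rand(64, 64)           # L channel, normalized
ref_a = np.random.rand(64, 64) - 0.5    # reference a component
ref_b = np.random.rand(64, 64) - 0.5    # reference b component
lab = double_channel_colorize(gray, ref_a, ref_b, toy_branch, toy_branch)
print(lab.shape)  # (64, 64, 3)
```

Keeping the two chrominance branches separate is the point of the sketch: each branch only sees the reference component for its own channel, so guidance for one channel cannot corrupt the other.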
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3055575