Double-Channel Guided Generative Adversarial Network for Image Colorization
Published in | IEEE Access, Vol. 9, pp. 21604-21617 |
Main Authors | , , , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021 |
Subjects | |
Summary: | Image colorization has found widespread application in video and image restoration in recent years. Recently, automatic colorization methods based on deep learning have shown impressive performance. However, these methods map a grayscale input directly to a multi-channel output; in the process, detailed information is usually lost during feature extraction, resulting in abnormal colors in local areas of the colorized image. To overcome abnormal colors and improve colorization quality, we propose a novel Double-Channel Guided Generative Adversarial Network (DCGGAN). It consists of two modules: a reference component matching module and a double-channel guided colorization module. The reference component matching module selects suitable reference color components as auxiliary information for the input. The double-channel guided colorization module learns the mapping from the grayscale image to each color channel with the assistance of the reference color components. Experimental results show that the proposed DCGGAN outperforms existing methods on different quality metrics and achieves state-of-the-art performance. |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2021.3055575 |
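To make the double-channel guided idea from the summary concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: two parallel generator branches, one per color channel, each conditioned on the grayscale input together with a reference color component. All class names, layer sizes, and the assumption of two chrominance channels (e.g. a and b in Lab space) are illustrative assumptions.

```python
# Minimal sketch of a double-channel guided generator (illustrative only; the
# architecture, names, and layer sizes are assumptions, not taken from the paper).
import torch
import torch.nn as nn


class ChannelGenerator(nn.Module):
    """Maps (grayscale image, reference color component) -> one color channel."""

    def __init__(self, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 3, padding=1), nn.Tanh(),  # one predicted channel
        )

    def forward(self, gray, ref_channel):
        # Concatenate the grayscale input with its reference color component
        # so the branch is guided by the auxiliary information.
        return self.net(torch.cat([gray, ref_channel], dim=1))


class DoubleChannelGenerator(nn.Module):
    """Two guided branches, one per color channel (e.g. a and b in Lab space)."""

    def __init__(self):
        super().__init__()
        self.branch_a = ChannelGenerator()
        self.branch_b = ChannelGenerator()

    def forward(self, gray, ref_a, ref_b):
        a = self.branch_a(gray, ref_a)
        b = self.branch_b(gray, ref_b)
        return torch.cat([a, b], dim=1)  # predicted color components


if __name__ == "__main__":
    g = DoubleChannelGenerator()
    gray = torch.rand(1, 1, 64, 64)    # grayscale input
    ref_a = torch.rand(1, 1, 64, 64)   # reference component for channel a
    ref_b = torch.rand(1, 1, 64, 64)   # reference component for channel b
    print(g(gray, ref_a, ref_b).shape)  # torch.Size([1, 2, 64, 64])
```

In a GAN setting such as the one the paper describes, a generator of this kind would be trained adversarially against a discriminator and combined with the luminance channel to form the final colorized image; those training details are outside the scope of this sketch.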