Highlight Removal from a Single Grayscale Image Using Attentive GAN

Bibliographic Details
Published in: Applied Artificial Intelligence, Vol. 36, No. 1
Main Authors: Xu, Haitao; Li, Qiang; Chen, Jing
Format: Journal Article
Language: English
Published: Philadelphia: Taylor & Francis, 31.12.2022

Summary: The presence of specular highlights hinders high-level computer vision algorithms. In this paper, we propose a novel approach to remove specular highlights from a single grayscale image by regarding the problem as an image-to-image translation task between the highlight domain and the diffuse domain. We solve this problem with the generative adversarial network framework, where a generator removes highlights and a discriminator judges whether the generator's outputs are clear and highlight-free. Specular highlight removal is challenging because highlights must be removed while preserving as much detail as possible. Exploiting the similarity between the highlight image and the diffuse image, we adopt an attention-based submodule that generates a mask image, which we call the highlight intensity mask, to locate the pixels that contain specular highlights and to help a skip-connected autoencoder remove them. A pixel discriminator and a Structural Similarity (SSIM) loss are employed so that more detail is retained in the output images. For training and testing, we build a dataset of more than a thousand sets of grayscale highlight images with ground truth. Finally, quantitative and qualitative evaluations demonstrate that our method is more effective than the comparison generative adversarial network methods.
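
Since the abstract leans on a Structural Similarity (SSIM) loss to preserve detail, a minimal sketch of that term may help. The snippet below is an illustrative Python/NumPy implementation assuming whole-image statistics and the standard stabilizing constants; it is not the authors' code, and practical SSIM implementations usually average local scores over a sliding Gaussian window.

    import numpy as np

    def ssim(x, y, data_range=1.0):
        # Single-window SSIM over two whole grayscale images in [0, data_range].
        # c1 and c2 are the standard stabilizing constants from Wang et al. (2004).
        c1 = (0.01 * data_range) ** 2
        c2 = (0.03 * data_range) ** 2
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov_xy = ((x - mu_x) * (y - mu_y)).mean()
        return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
        )

    def ssim_loss(generated, ground_truth):
        # SSIM equals 1 for identical images, so 1 - SSIM serves as a loss
        # term that rewards the generator for preserving structural detail.
        return 1.0 - ssim(generated, ground_truth)

Minimizing such a term alongside the adversarial and pixel-discriminator losses pushes the generator toward outputs that stay structurally close to the highlight-free ground truth.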
ISSN: 0883-9514, 1087-6545
DOI: 10.1080/08839514.2021.1988441