Camera-independent color constancy by scene semantics

Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 171, pp. 106–115
Main Authors: Xie, Mengda; Sun, Peng; Lang, Yubo; Fang, Meie
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.07.2023

Summary:
•Existing learning-based color constancy methods typically generalize poorly to images taken by different cameras.
•Gray-based assumptions are naturally camera-independent.
•Gray-based assumptions are less accurate across varied scenes because they are fixed.
•Scene semantics can help infer and adapt fixed gray-based assumptions.

Current learning-based color constancy methods are typically employed to learn camera-specific illuminant mappings; consequently, they generalize poorly to images captured by different cameras. In this paper, we present a camera-independent learning method based on Scene Semantics, which we call CISS. Inspired by the camera independence of gray-based methods, CISS does not train a model to estimate the camera-specific illuminant directly, as most learning methods do. Instead, the model's output is transformed into camera-independent scene statistics related to gray-based assumptions, so that it is unaffected by camera variations; from these estimated scene statistics, the illuminant is then computed indirectly. To estimate the scene statistics accurately, CISS designs illuminant-invariant scene-semantics features as model input, and the model estimates the scene statistics of each input image from its scene semantics with exemplar-based learning. Experiments on several public datasets show that CISS outperforms existing methods for multi-camera color constancy and generalizes well to unseen cameras without fine-tuning on additional images.
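The gray-based assumptions the abstract builds on can be illustrated with the classic gray-world method: assume the scene's average reflectance is achromatic, read the illuminant off the mean image color, and divide it out with a diagonal (von Kries) correction. The sketch below is a minimal, hedged illustration of that baseline only — it is not the CISS method, and the function names are illustrative.

```python
import numpy as np

def gray_world_illuminant(image):
    """Estimate the illuminant under the gray-world assumption:
    the mean RGB of the image approximates the illuminant color
    (up to scale). `image` is an H x W x 3 float array."""
    ill = image.reshape(-1, 3).mean(axis=0)
    return ill / np.linalg.norm(ill)  # unit-norm illuminant estimate

def correct_image(image, illuminant):
    """Remove the color cast with a diagonal (von Kries) correction,
    scaled so overall brightness is roughly preserved."""
    gains = illuminant.mean() / illuminant
    return np.clip(image * gains, 0.0, 1.0)

# A uniformly gray scene under a reddish illuminant: the estimate
# recovers the cast, and correction makes all channels equal again.
scene = np.full((4, 4, 3), 0.5) * np.array([1.0, 0.8, 0.6])
est = gray_world_illuminant(scene)
corrected = correct_image(scene, est)
```

The fixity the abstract criticizes is visible here: the achromatic-mean assumption is hard-coded, so the estimate degrades in scenes whose true average reflectance is not gray — the gap that scene-semantics-driven adaptation is meant to close.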
ISSN: 0167-8655
eISSN: 1872-7344
DOI: 10.1016/j.patrec.2023.03.027