Very Deep Learning-Based Illumination Estimation Approach With Cascading Residual Network Architecture (CRNA)
| Published in | IEEE Access, Vol. 9, pp. 133552–133560 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE, 2021 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects | |
Summary: For the imaging signal processing (ISP) pipeline of digital imaging devices, it is highly important to remove undesirable illuminant effects and achieve color invariance, commonly known as 'computational color constancy'. Computational color constancy comprises two phases: illumination estimation, the primary focus of this work, and chromatic adaptation based on human visual perception. In the first phase, illumination estimation predicts RGB triplets, the numeric representations of incident illuminant colors, from the values of the image pixels. The accuracy of this estimation is key to realizing computational color constancy. With recent advances in deep learning (DL), many DL-based approaches have been proposed, bringing higher accuracy to computer vision applications, but obstacles such as learning instability remain. To address this ill-posed problem in illumination estimation, this article presents a novel deep learning-based approach, the Cascading Residual Network Architecture (CRNA), which incorporates ResNet-style residual connections and a cascading mechanism into a deep convolutional neural network (DCNN). The cascading mechanism keeps the network from varying suddenly in size, mitigates learning instability, and thereby reduces quality degradation; this is attributed to the way the cascading mechanism fine-tunes the pre-trained DCNN. Extensive datasets and comparative experiments show that the proposed approach delivers more stable and robust results and suggest its potential to generalize across deep learning applications.
| ISSN | 2169-3536 |
|---|---|
| DOI | 10.1109/ACCESS.2021.3115942 |
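As the summary notes, illumination estimation predicts an RGB triplet for the incident illuminant from image pixel values. The sketch below is a minimal statistics-based baseline (the classic gray-world estimator) together with the recovery angular error commonly used to evaluate color-constancy methods; it is not the paper's CRNA, and the function names are illustrative.

```python
import numpy as np

def gray_world_estimate(image):
    """Gray-world baseline: estimate the illuminant as the per-channel
    mean of the image, returned as a unit-norm RGB triplet.

    image: H x W x 3 array of linear RGB values.
    """
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def angular_error_degrees(est, gt):
    """Recovery angular error (in degrees) between an estimated and a
    ground-truth illuminant triplet; the standard color-constancy metric."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Synthetic scene: a uniform gray surface lit by a warm illuminant, so the
# per-channel mean recovers the illuminant color exactly.
illuminant = np.array([1.0, 0.8, 0.6])
image = np.ones((4, 4, 3)) * illuminant
est = gray_world_estimate(image)
print(angular_error_degrees(est, illuminant))
```

On real scenes the gray-world assumption (that the average reflectance is achromatic) often fails, which is exactly the gap learning-based estimators such as CRNA aim to close.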