Divide and conquer: Ill-light image enhancement via hybrid deep network

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 182, p. 115034
Main Authors: Khan, Rizwan; Yang, You; Liu, Qiong; Qaisar, Zahid Hussain
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd (Elsevier BV), 15.11.2021
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.115034

Summary:
Highlights:
• Low- and ill-light image enhancement.
• Low-light image enhancement without paired training data supervision.
• Image enhancement with a few shots of training data.
• Deep hybrid learning, independent of the type of training and test data.
• First large-scale dataset for ill-lighting conditions.

Intelligent system applications in computer vision suffer from detection and identification problems in ill-lighting conditions (i.e., non-uniform illumination), where under-exposed and over-exposed regions coexist in the captured images. Processing these images results in over- and under-enhancement with colour and contrast distortions. Traditional methods design handcrafted constraints and rely on image pairs and priors, whereas existing deep learning-based methods rely on large-scale and even paired training data; moreover, the capacity of these methods is limited to specific scenes (i.e., lighting conditions). In this paper, we present a deep-hybrid ill-light image enhancement method and propose a contrast enhancement strategy based on the decomposition of the input images into reflection J and illumination T. A Divide to Glitter network (D2G-Net) is designed to learn from a few shots of training samples and does not require paired or large-quantity training data. D2G-Net comprises a multilayer Division-Net for image division and a Glitter-Net to amplify the illumination map. We propose to regularize learning using a correlation consistency of the decomposition extracted from the input data itself. Extensive experiments are organized under ill-lighting conditions, where a new test dataset with robust lighting variation is also proposed to evaluate the performance of the proposed method.
Experimental results prove that our method has superior performance for preserving structural and texture details compared to state-of-the-art approaches, which suggests that our method is more practical in interactive computer vision and intelligent expert system applications.
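The decomposition described above follows the classical Retinex assumption that an observed image I is the product of a reflectance map J and an illumination map T (I = J · T), with enhancement achieved by amplifying T. The sketch below is not the paper's learned D2G-Net; it is a minimal hand-crafted illustration of that idea for a single RGB pixel, assuming a LIME-style max-RGB illumination estimate and a simple gamma curve as the amplifier:

```python
# Minimal Retinex-style sketch of "decompose, then amplify illumination".
# This is NOT D2G-Net: the illumination T is estimated per pixel as the
# max RGB channel, reflectance J = I / T, and T is lifted by a gamma curve.
EPS = 1e-6  # avoid division by zero on pure-black pixels

def decompose(pixel):
    """Split an RGB pixel (floats in [0, 1]) into (reflectance J, illumination T)."""
    t = max(pixel)                                 # max-RGB illumination estimate
    j = tuple(c / max(t, EPS) for c in pixel)      # reflectance keeps the colour ratios
    return j, t

def enhance(pixel, gamma=0.5):
    """Brighten a dark pixel by amplifying its illumination component."""
    j, t = decompose(pixel)
    t_amp = t ** gamma                             # gamma < 1 lifts dark illumination
    return tuple(min(1.0, c * t_amp) for c in j)   # recompose I' = J * T'

dark = (0.04, 0.08, 0.06)                          # an under-exposed greenish pixel
print(enhance(dark))                               # brighter, same colour ratios
```

Because only T is modified, the channel ratios encoded in J are preserved, which is why decomposition-based enhancement avoids the colour distortions that direct histogram or curve adjustments can introduce; the paper's contribution is learning this division and amplification from a few unpaired samples rather than hand-crafting them.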