AEMS: an attention enhancement network of modules stacking for lowlight image enhancement

Bibliographic Details
Published in: The Visual Computer, Vol. 38, No. 12, pp. 4203-4219
Main Authors: Li, Miao; Zhao, Li; Zhou, Dongming; Nie, Rencan; Liu, Yanyu; Wei, Yixue
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.12.2022
ISSN: 0178-2789
eISSN: 1432-2315
DOI: 10.1007/s00371-021-02289-x

More Information
Summary: Images captured in lowlight environments often show low contrast, low brightness and artifacts, which makes their details difficult for people to distinguish, and such images also hinder tasks like image fusion and target tracking. In this paper, we propose an end-to-end lowlight image enhancement network built on module stacking and attention modules. First, module stacking is applied to extract different features of the image; these features are then fused along the channel dimension, and the final image is reconstructed by a series of convolutions. In particular, our loss function consists of two parts: the first part combines the L1 loss, the L2 loss and a gradient loss, while the second part is computed with a VGG network. Furthermore, we verify the effectiveness of the model through a large number of comparative experiments and present the comparisons both quantitatively and qualitatively. We additionally show the performance of our network on lowlight video enhancement, where it also achieves better results than the other methods.
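The two-part loss described in the summary can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: the loss weights, the VGG-16 layer cutoff, and the finite-difference gradient operator are assumptions made for the example.

```python
# Minimal sketch of a composite loss: L1 + L2 + gradient loss (part one)
# plus a VGG-based perceptual loss (part two). Weights and layer choice
# are assumptions, not the published settings.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


def image_gradient(img):
    """Horizontal and vertical finite differences of a batch (N, C, H, W)."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy


class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG-16 feature maps (layer index assumed)."""
    def __init__(self, layer_index=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, enhanced, reference):
        return F.l1_loss(self.features(enhanced), self.features(reference))


class CompositeLoss(nn.Module):
    """Part one: L1 + L2 + gradient loss; part two: VGG perceptual loss."""
    def __init__(self, w_l1=1.0, w_l2=1.0, w_grad=1.0, w_vgg=0.1):
        super().__init__()
        self.perceptual = PerceptualLoss()
        self.w_l1, self.w_l2, self.w_grad, self.w_vgg = w_l1, w_l2, w_grad, w_vgg

    def forward(self, enhanced, reference):
        l1 = F.l1_loss(enhanced, reference)
        l2 = F.mse_loss(enhanced, reference)
        dx_e, dy_e = image_gradient(enhanced)
        dx_r, dy_r = image_gradient(reference)
        grad = F.l1_loss(dx_e, dx_r) + F.l1_loss(dy_e, dy_r)
        vgg = self.perceptual(enhanced, reference)
        return (self.w_l1 * l1 + self.w_l2 * l2
                + self.w_grad * grad + self.w_vgg * vgg)
```

In practice, the enhanced output and the reference image would both be normalized RGB tensors of shape (N, 3, H, W); the relative weights shown here are placeholders and would need tuning.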