Layer Decomposition Learning Based on Discriminative Feature Group Split With Bottom-Up Intergroup Feature Fusion for Single Image Deraining


Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 78024-78039
Main Authors: Jang, Yunseon; Le, Duc-Tai; Son, Chang-Hwan; Choo, Hyunseung
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

More Information
Summary: Rain streaks impede image feature extraction, hindering the performance of computer vision algorithms such as pedestrian and lane detection in adverse weather conditions. Image deraining is therefore crucial for enhancing the reliability of such algorithms. However, detail and texture information of objects in background areas is often lost during the deraining process due to its structural similarity with rain streaks. To remove rain streaks effectively while preserving image details, we propose a novel layer decomposition learning network (LDLNet) that separates rain streaks and object details in rainy images. LDLNet consists of two parts: the discriminative group feature split (DGFS) and the group feature merging (GFM). DGFS utilizes sparse residual attention modules (SRAM) to capture spatial contextual features of rainy images, enhancing the network's ability to model the complex relationships between rain streaks and object details. In addition, DGFS employs the bottom-up intergroup feature fusion (BIFF) approach to aggregate multi-scale context information from consecutive SRAMs, facilitating the decomposition of rainy images into discriminative feature groups. Subsequently, GFM integrates these feature groups by concatenation, preserving the interdependent characteristics of the clean background and rain layers. Experimental results show that the proposed approach achieves superior rain removal and detail preservation on both synthetic datasets and real-world rainy images compared to state-of-the-art rain removal models.
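The layer decomposition described in the summary rests on the common additive rain model, in which a rainy observation O is treated as the sum of a clean background layer B and a rain-streak layer R. The following is a minimal NumPy sketch of that decompose-and-merge idea only; it is not the authors' LDLNet implementation, and the function names and toy values are hypothetical.

```python
import numpy as np

def decompose(rainy, rain_estimate):
    """Split a rainy image O into an estimated rain layer R and the
    residual background B = O - R (additive rain model)."""
    # A predicted rain layer cannot be negative or exceed the observed
    # intensity, so clamp it into [0, O] elementwise.
    rain_layer = np.clip(rain_estimate, 0.0, rainy)
    background = rainy - rain_layer
    return background, rain_layer

def merge(background, rain_layer):
    """Recombine the two layers; an exact decomposition reconstructs
    the original rainy observation."""
    return background + rain_layer

# Toy example: a 2x2 grayscale "image" with intensities in [0, 1].
rainy = np.array([[0.8, 0.5],
                  [0.6, 0.9]])
rain_est = np.array([[0.3, 0.0],
                     [0.2, 0.4]])  # stand-in for a network's rain prediction
bg, rain = decompose(rainy, rain_est)
assert np.allclose(merge(bg, rain), rainy)  # O = B + R holds exactly
```

In LDLNet the two layers are not subtracted directly like this; instead, intermediate features are split into discriminative groups (via DGFS) and merged by concatenation (via GFM), but the underlying objective, recovering B and R whose composition explains O, is the same.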
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3407750