A novel low light object detection method based on the YOLOv5 fusion feature enhancement


Bibliographic Details
Published in: Scientific Reports, Vol. 14, no. 1, p. 4486
Main Authors: Peng, Daxin; Ding, Wei; Zhen, Tong
Format: Journal Article
Language: English
Published: London, Nature Publishing Group UK, 23.02.2024

Summary: Low-light object detection is an important research area in computer vision, but it remains a difficult problem. This research presents a low-light target detection network, NLE-YOLO, based on YOLOv5, to address the insufficient illumination and noise interference encountered by target detection tasks in low-light environments. The network first preprocesses the input image with an enhancement technique, then suppresses high-frequency noise and strengthens essential information with C2fLEFEM, a novel feature extraction module. We also designed a multi-scale feature extraction module, AMC2fLEFEM, and an attention-mechanism receptive field module, AMRFB, which are used to extract features at multiple scales and enlarge the receptive field. The C2fLEFEM module, in particular, merges the LEF and FEM modules on top of the C2f module: the LEF module employs a low-frequency filter to remove high-frequency noise; the FEM module uses dual inputs to fuse low-frequency-enhanced and original features; and the C2f module employs a gradient-retention method to minimize information loss. The AMC2fLEFEM module incorporates the SimAM attention mechanism and uses pixel relationships to obtain features from different receptive fields, adapt to brightness changes, capture the difference between target and background, improve the network's feature extraction capability, and effectively reduce the impact of noise. The AMRFB module employs atrous convolution to enlarge the receptive field, retain global information, and adapt to targets of various scales. Finally, for low-light settings, we replaced the original YOLOv5 detection head with a decoupled head. Experiments on the ExDark dataset show that our method outperforms previous methods in detection accuracy and performance.
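Two of the mechanisms named in the abstract can be sketched concretely. The LEF module's low-frequency filtering can be approximated by a simple box blur, and SimAM is a published parameter-free attention that weights each pixel by its energy relative to the channel statistics. The sketch below is illustrative only: it follows the standard SimAM formulation and a generic box filter, not the paper's exact C2fLEFEM implementation, which is not given in this record.

```python
import numpy as np

def low_pass(x, k=3):
    """Box-blur low-pass filter over a (C, H, W) feature map.

    A stand-in for LEF-style low-frequency filtering: averaging over a
    k x k window attenuates high-frequency noise while keeping the
    low-frequency structure of the feature map.
    """
    c, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += xp[:, i:i + h, j:j + w]
    return out / (k * k)

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Pixels far from their channel mean (high inverse energy) are
    considered distinctive and receive larger sigmoid gates.
    """
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    var = d.sum(axis=(1, 2), keepdims=True) / n
    # Inverse minimal energy per pixel: d / (4*(var + lam)) + 0.5
    e_inv = d / (4 * (var + lam)) + 0.5
    return x * (1.0 / (1.0 + np.exp(-e_inv)))  # sigmoid gating
```

Because the sigmoid gate lies strictly between 0 and 1, SimAM rescales features without adding learnable parameters, which is consistent with the abstract's claim that the attention adapts to brightness changes using only pixel relationships.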
ISSN: 2045-2322
DOI: 10.1038/s41598-024-54428-8