Local Adaptive Illumination-Driven Input-Level Fusion for Infrared and Visible Object Detection

Bibliographic Details
Published in: Remote Sensing (Basel, Switzerland), Vol. 15, No. 3, p. 660
Main Authors: Wu, Jiawen; Shen, Tao; Wang, Qingwang; Tao, Zhimin; Zeng, Kai; Song, Jian
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.02.2023

Summary: Remote sensing object detection based on the combination of infrared and visible images can effectively adapt to around-the-clock operation and changeable illumination conditions. However, most existing infrared and visible object detection networks require two backbone networks to extract features from the two modalities separately. Compared with a single-modality detection network, this greatly increases the computational cost, which limits real-time processing on vehicle and unmanned aerial vehicle (UAV) platforms. Therefore, this paper proposes a local adaptive illumination-driven input-level fusion module (LAIIFusion). Previous illumination-perception methods focus only on global illumination and ignore local differences. To address this, we design a new illumination perception submodule and redefine the illumination value. With more accurate area selection and label design, the module can perceive the scene illumination condition more effectively. In addition, to handle the incomplete alignment between infrared and visible images, a submodule is designed to rapidly estimate slight shifts. The experimental results show that a single-modality detection algorithm equipped with LAIIFusion achieves a large improvement in accuracy with only a small loss of speed. On the DroneVehicle dataset, our module combined with YOLOv5L achieved the best performance.
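The summary describes an input-level fusion scheme in which a locally estimated illumination value gates how the visible and infrared inputs are combined before a single-backbone detector. The following PyTorch sketch is only a minimal, hypothetical illustration of that idea, not the paper's LAIIFusion implementation: the grid size, layer widths, sigmoid gating, and the names LocalIlluminationWeight and fuse_inputs are assumptions made here for clarity.

# Minimal sketch (assumed, not the authors' code) of input-level fusion
# driven by a coarse, local illumination weight map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalIlluminationWeight(nn.Module):
    """Predicts a per-region illumination weight in [0, 1] from the visible image."""
    def __init__(self, grid=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(grid),   # one score per local region (grid x grid)
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, visible):
        w = torch.sigmoid(self.net(visible))              # (B, 1, grid, grid)
        # upsample the coarse local map back to image resolution
        return F.interpolate(w, size=visible.shape[-2:],
                             mode="bilinear", align_corners=False)

def fuse_inputs(visible, infrared, weight):
    """Pixel-wise convex combination: well-lit regions favor visible, dark regions favor infrared."""
    infrared3 = infrared.expand(-1, 3, -1, -1) if infrared.shape[1] == 1 else infrared
    return weight * visible + (1.0 - weight) * infrared3

# Usage: the fused 3-channel image feeds a single-backbone detector (e.g. YOLOv5).
vis = torch.rand(1, 3, 640, 640)
ir = torch.rand(1, 1, 640, 640)
w = LocalIlluminationWeight()(vis)
fused = fuse_inputs(vis, ir, w)

In this form, dark regions lean on the infrared channel while well-lit regions favor the visible image, which is the qualitative behavior the summary attributes to illumination-driven fusion; the paper's actual submodules (including the shift-estimation step) are not reproduced here.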
ISSN: 2072-4292
DOI: 10.3390/rs15030660