LTFormer: A light-weight transformer-based self-supervised matching network for heterogeneous remote sensing images

Bibliographic Details
Published in: Information Fusion, Vol. 109, p. 102425
Main Authors: Zhang, Wang; Li, Tingting; Zhang, Yuntian; Pei, Gensheng; Jiang, Xiruo; Yao, Yazhou
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2024

Summary: Matching visible and near-infrared (NIR) images is a major challenge in remote sensing image fusion due to nonlinear radiometric differences. Deep learning has shown promise in computer vision, but most methods rely on supervised learning, and annotated data are scarce in remote sensing. To address this, we propose a novel keypoint descriptor approach that obtains robust feature descriptors via a self-supervised matching network. Our light-weight transformer network, LTFormer, generates deep-level feature descriptors. Furthermore, we implement an innovative triplet loss function, LT Loss, to further enhance matching performance. Our approach outperforms conventional hand-crafted local feature descriptors and remains competitive with state-of-the-art deep learning-based methods, even amidst the shortage of annotated data. Code and pre-trained model are available at https://github.com/NUST-Machine-Intelligence-Laboratory/LTFormer.
• Propose a data construction strategy to facilitate the matching process.
• Develop a pyramid-based transformer network to generate deep feature descriptors.
• Instantiate the LT Loss function grounded on Triplet Loss.
ISSN: 1566-2535, 1872-6305
DOI: 10.1016/j.inffus.2024.102425
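
For readers unfamiliar with triplet-based descriptor learning, the sketch below illustrates the general idea behind a margin-based triplet loss over keypoint descriptors: matching visible/NIR descriptor pairs are pulled together while non-matching pairs are pushed apart. This is a minimal, hypothetical PyTorch example; the function name, margin, and descriptor dimension are illustrative assumptions and do not reproduce the actual LT Loss or the LTFormer architecture described in the paper and repository.

# Hypothetical sketch of a margin-based triplet loss for descriptor matching.
# Not the paper's LT Loss; names and values are illustrative only.
import torch
import torch.nn.functional as F

def triplet_descriptor_loss(anchor, positive, negative, margin=1.0):
    """anchor/positive/negative: (N, D) batches of keypoint descriptors."""
    # L2-normalize descriptors so Euclidean distances are comparable.
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)
    # Distance to the matching (positive) and non-matching (negative) descriptor.
    d_pos = (anchor - positive).pow(2).sum(dim=1).sqrt()
    d_neg = (anchor - negative).pow(2).sum(dim=1).sqrt()
    # Penalize triplets where the positive is not closer than the negative by the margin.
    return F.relu(d_pos - d_neg + margin).mean()

# Example usage: 8 triplets of 128-dimensional descriptors.
a, p, n = (torch.randn(8, 128) for _ in range(3))
loss = triplet_descriptor_loss(a, p, n)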