Deep learning in multimodal remote sensing data fusion: A comprehensive review

Bibliographic Details
Published in: International Journal of Applied Earth Observation and Geoinformation, Vol. 112, p. 102926
Main Authors: Li, Jiaxin; Hong, Danfeng; Gao, Lianru; Yao, Jing; Zheng, Ke; Zhang, Bing; Chanussot, Jocelyn
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2022

Summary: With the rapid advances in remote sensing (RS) technology, vast quantities of Earth observation (EO) data with considerable and complicated heterogeneity are now readily available, offering researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years, yet traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyze and interpret strongly heterogeneous data. This limitation creates a strong demand for an alternative tool with more powerful processing capability. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive capacity for data representation and reconstruction. Naturally, it has been successfully applied to multimodal RS data fusion, yielding substantial improvements over traditional methods. This survey presents a systematic overview of DL-based multimodal RS data fusion. More specifically, essential background knowledge on the topic is first given. Subsequently, a literature survey is conducted to analyze the trends of this field. Prevalent sub-fields of multimodal RS data fusion are then reviewed according to the data modalities to be fused, i.e., spatiospectral, spatiotemporal, light detection and ranging (LiDAR)-optical, synthetic aperture radar (SAR)-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize valuable resources to support the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.
• A systematic review of deep learning-based multimodal remote sensing data fusion.
• Statistical analysis of relevant literature is conducted.
• Seven prevalent sub-fields in multimodal remote sensing data fusion are detailed.
• Some available resources, including tutorials, datasets, and codes, are provided.
• Deep learning yields great achievements in multimodal remote sensing data fusion.
ISSN: 1569-8432, 1872-826X
DOI: 10.1016/j.jag.2022.102926