Self-attention for raw optical Satellite Time Series Classification

Bibliographic Details
Published in: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 169, pp. 421-435
Main Authors: Rußwurm, Marc; Körner, Marco
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.11.2020
Summary: The amount of available Earth observation data has increased dramatically in recent years. Efficiently making use of the entire body of information is a current challenge in remote sensing; it demands lightweight, problem-agnostic models that do not require region- or problem-specific expert knowledge. End-to-end trained deep learning models can make use of raw sensory data by learning feature extraction and classification in one step, solely from data. Still, many methods proposed in remote sensing research require implicit feature extraction through data preprocessing or explicit design of features. In this work, we compare recent deep learning models on crop type classification on raw and preprocessed Sentinel-2 data. We concentrate on the common neural network architectures for time series, i.e., 1D-convolutions, recurrence, and the novel self-attention architecture. Our central findings are that data preprocessing still increased the overall classification performance for all models, while the choice of model was less crucial. Self-attention and recurrent neural networks, by their architecture, outperformed convolutional neural networks on raw satellite time series. We explore this with a feature importance analysis based on gradient backpropagation that exploits the differentiable nature of deep learning models. Further, we qualitatively show how self-attention scores focus selectively on a few classification-relevant observations.
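To make the summary's two technical ingredients concrete, the following is a minimal, purely illustrative PyTorch sketch, not the authors' published implementation: the band count, number of classes, model dimensions, and the omission of a positional encoding over acquisition dates are assumptions made here for brevity. It shows a self-attention (Transformer-encoder) classifier over a raw satellite reflectance time series, plus the kind of gradient-backpropagation feature importance the summary describes, obtained by backpropagating the predicted-class score to the input series and inspecting which observation dates receive large gradient magnitudes.

import torch
import torch.nn as nn

class AttentionTimeSeriesClassifier(nn.Module):
    """Illustrative self-attention classifier for a per-parcel satellite
    time series of shape (batch, T dates, D spectral bands)."""

    def __init__(self, n_bands=13, n_classes=12, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(n_bands, d_model)      # embed raw reflectances
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)          # crop-type logits

    def forward(self, x):                                  # x: (batch, T, n_bands)
        h = self.encoder(self.input_proj(x))               # self-attention over dates
        return self.head(h.mean(dim=1))                    # pool over time, classify


# Gradient-based feature importance in the spirit of the summary: backpropagate
# the predicted-class score to the raw input series and read off which dates
# carry large gradient magnitudes.
model = AttentionTimeSeriesClassifier()
x = torch.randn(1, 45, 13, requires_grad=True)   # dummy series: 45 dates, 13 bands
logits = model(x)
logits[0, logits[0].argmax()].backward()
importance_per_date = x.grad.abs().sum(dim=-1)   # (1, 45) saliency over acquisition dates

The mean-pooling over time and the plain linear head are arbitrary choices in this sketch; the relevant point is only that, because the whole model is differentiable, input gradients are available without any change to the architecture.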
ISSN: 0924-2716, 1872-8235
DOI: 10.1016/j.isprsjprs.2020.06.006