A Lightweight 1-D Convolution Augmented Transformer with Metric Learning for Hyperspectral Image Classification

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 21, No. 5, p. 1751
Main Authors: Hu, Xiang; Yang, Wenjing; Wen, Hao; Liu, Yu; Peng, Yuanxi
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 03.03.2021
Summary: Hyperspectral image (HSI) classification is the subject of intense research in remote sensing. The tremendous success of deep learning in computer vision has recently sparked interest in applying it to hyperspectral image classification. However, most deep learning methods for HSI classification are based on convolutional neural networks (CNNs), which demand substantial GPU memory and long running times. Recently, another deep learning model, the transformer, has been applied to image recognition, and the results demonstrate its great potential for computer vision tasks. In this paper, we propose a transformer-based model, a architecture widely used in natural language processing, for hyperspectral image classification. To the best of our knowledge, this is the first work to combine metric learning with the transformer model for hyperspectral image classification. Moreover, to improve classification performance when the available training samples are limited, we use 1-D convolution and the Mish activation function. Experimental results on three widely used hyperspectral image data sets demonstrate the proposed model’s advantages in accuracy, GPU memory cost, and running time.
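
The abstract names the model's main ingredients (1-D convolution, the Mish activation, and a transformer encoder) but does not specify the architecture. The PyTorch sketch below is only a minimal illustration of how those pieces could fit together for per-pixel spectral classification: the class name SpectralConvTransformer, all layer sizes, and the mean-pooling step are illustrative assumptions, not the paper's actual design, and the metric-learning objective is addressed only in the note that follows.

    import torch
    import torch.nn as nn

    class SpectralConvTransformer(nn.Module):
        """Sketch: 1-D convolution with Mish tokenizes the spectral axis,
        and a small transformer encoder classifies the resulting tokens.
        All hyperparameters here are illustrative guesses."""

        def __init__(self, num_bands: int, num_classes: int, embed_dim: int = 64):
            super().__init__()
            # 1-D convolution + Mish extracts local spectral features.
            self.tokenizer = nn.Sequential(
                nn.Conv1d(1, embed_dim, kernel_size=7, padding=3),
                nn.Mish(),
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=4, dim_feedforward=128,
                batch_first=True,
            )
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_bands) -- one spectral vector per pixel.
            tokens = self.tokenizer(x.unsqueeze(1))     # (batch, embed_dim, num_bands)
            tokens = tokens.transpose(1, 2)             # (batch, num_bands, embed_dim)
            encoded = self.encoder(tokens).mean(dim=1)  # average-pool the tokens
            return self.head(encoded)                   # class logits

    # Example: classify 200-band pixels (e.g., Indian Pines) into 16 classes.
    model = SpectralConvTransformer(num_bands=200, num_classes=16)
    logits = model(torch.randn(8, 200))                 # shape (8, 16)

For the metric-learning component, a distance-based loss on the pooled embedding (e.g., torch.nn.TripletMarginLoss) could be trained jointly with the cross-entropy objective; the abstract does not state which metric-learning loss the authors actually use.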
ISSN: 1424-8220
DOI: 10.3390/s21051751