QTN: Quaternion Transformer Network for Hyperspectral Image Classification


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 12, p. 1
Main Authors: Yang, Xiaofei; Cao, Weijia; Lu, Yao; Zhou, Yicong
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2023

Summary: Numerous state-of-the-art transformer-based techniques with self-attention mechanisms have recently been shown to be highly effective for hyperspectral image (HSI) classification. However, traditional transformer-based methods suffer from the following problems when processing three-dimensional HSIs: (1) processing HSIs as 1D sequences discards the 3D structural information; (2) their large number of parameters makes them too expensive for hyperspectral image classification tasks; (3) they capture only spatial information while neglecting spectral information. To solve these problems, we propose a novel Quaternion Transformer Network (QTN) for recovering self-adaptive and long-range correlations in HSIs. Specifically, we first develop a band adaptive selection module (BASM) to produce quaternion data from HSIs. We then propose a novel quaternion self-attention (QSA) mechanism to capture both local and global representations. Finally, we propose a novel transformer method, i.e., the QTN, built by stacking a series of QSA modules for hyperspectral classification. The proposed QTN exploits computation with quaternion algebra in hypercomplex spaces. Extensive experiments on three public datasets demonstrate that the QTN outperforms state-of-the-art vision transformers and convolutional neural networks.
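The record does not include the paper's implementation, but the "computation using quaternion algebra in hypercomplex spaces" it mentions is built on the Hamilton product, the quaternion analogue of scalar multiplication used inside quaternion linear layers. A minimal sketch (generic quaternion arithmetic, not the authors' QSA code; function names are illustrative):

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of two quaternions a + b*i + c*j + d*k,
    each given as an array-like [a, b, c, d].

    Note: non-commutative, e.g. i*j = k but j*i = -k."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k component
    ])

# Basis identities: i * j = k
i, j, k = [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
print(hamilton_product(i, j))  # → [0. 0. 0. 1.]
```

In quaternion networks, the four components typically hold related feature channels (e.g. several spectral bands of an HSI pixel), so one Hamilton product mixes all four at once with a quarter of the parameters a real-valued layer of the same width would need.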
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3283289