TARDB-Net: triple-attention guided residual dense and BiLSTM networks for hyperspectral image classification

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 80, No. 7, pp. 11291-11312
Main Authors: Cai, Weiwei; Liu, Botao; Wei, Zhanguo; Li, Meilin; Kan, Jiangming
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.03.2021

Summary: Each sample in a hyperspectral remote sensing image has high-dimensional features and contains rich spatial and spectral information, which greatly increases the difficulty of feature selection and mining. In view of these difficulties, we propose a novel Triple-attention Guided Residual Dense and BiLSTM network (TARDB-Net) that reduces redundant features while increasing feature-fusion capability, ultimately improving hyperspectral image classification. First, a novel triple-attention mechanism is proposed to assign a different weight to each feature. Then, a residual network performs residual operations on the features, and the initial features of the multiple residual blocks are densely fused with the generated deep residual features, retaining a large number of prior features. A bidirectional long short-term memory network then integrates the contextual semantics of the deeply fused features. Finally, the classification task is completed by a softmax classifier. Experiments on three hyperspectral datasets (Indian Pines, University of Pavia, and Salinas) show that, with 10% of the samples used for training, the overall accuracy of our method is 87%, 96%, and 96%, respectively, which is superior to several well-known methods.
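
The abstract describes a four-stage pipeline: attention-based feature weighting, residual blocks whose outputs are densely fused with the initial features, a BiLSTM that integrates contextual semantics, and a softmax classifier. Since the paper itself is not reproduced here, the following PyTorch code is only a minimal sketch of that pipeline under stated assumptions: the module names (ChannelAttention, ResidualBlock, TARDBNetSketch), the single-branch channel attention standing in for the authors' triple-attention mechanism, and all layer sizes are hypothetical choices for illustration, not the published architecture.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# attention weighting -> residual blocks with dense fusion -> BiLSTM -> softmax.
# Module names, layer sizes, and the simplified single-branch attention are
# assumptions for illustration, not the authors' exact TARDB-Net architecture.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Assigns a learned weight to each spectral channel (one attention branch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # re-weight features channel-wise


class ResidualBlock(nn.Module):
    """3x3 convolutional block with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class TARDBNetSketch(nn.Module):
    def __init__(self, bands, n_classes, width=64, n_blocks=3, hidden=128):
        super().__init__()
        self.attn = ChannelAttention(bands)
        self.stem = nn.Conv2d(bands, width, 3, padding=1)
        self.blocks = nn.ModuleList(ResidualBlock(width) for _ in range(n_blocks))
        # Dense fusion: concatenate the stem output with every residual block's output.
        self.fuse = nn.Conv2d(width * (n_blocks + 1), width, 1)
        self.bilstm = nn.LSTM(width, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)  # softmax applied via the loss

    def forward(self, x):            # x: (B, bands, H, W) patch around a pixel
        x = self.stem(self.attn(x))  # attention-weighted spectral input
        feats = [x]
        for blk in self.blocks:      # keep every residual stage for dense fusion
            x = blk(x)
            feats.append(x)
        fused = self.fuse(torch.cat(feats, dim=1))   # (B, width, H, W)
        seq = fused.flatten(2).transpose(1, 2)       # (B, H*W, width) as a sequence
        out, _ = self.bilstm(seq)                    # contextual integration
        return self.classifier(out.mean(dim=1))      # class logits


if __name__ == "__main__":
    # e.g. Indian Pines: roughly 200 usable bands and 16 land-cover classes
    model = TARDBNetSketch(bands=200, n_classes=16)
    logits = model(torch.randn(4, 200, 9, 9))        # four 9x9 spatial patches
    print(logits.shape)                              # torch.Size([4, 16])
```

Running the script prints a (4, 16) logit tensor; training with nn.CrossEntropyLoss supplies the softmax stage implicitly, and the dense-fusion 1x1 convolution is one plausible way to merge the retained prior features before the BiLSTM.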
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-020-10188-x