Hyperspectral image classification using feature fusion fuzzy graph broad network


Bibliographic Details
Published in: Information Sciences, Vol. 689, p. 121504
Main Authors: Chu, Yonghe; Cao, Jun; Ding, Weiping; Huang, Jiashuang; Ju, Hengrong; Cao, Heling; Liu, Guangen
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.01.2025

Summary: In recent years, graph convolutional networks (GCNs) have shown strong performance in hyperspectral image (HSI) classification. However, traditional GCN methods often use superpixel-based nodes to reduce computational complexity, which fails to capture pixel-level spectral-spatial features. Additionally, these methods typically focus on matching predicted labels with ground truth, neglecting the relationships between inter-class and intra-class distances, leading to less discriminative features. To address these issues, we propose a feature fusion fuzzy graph broad network (F3GBN) for HSI classification. Our method extracts pixel-level attribute contour features using attribute filters and fuses them with superpixel features through canonical correlation analysis. We employ a broad learning system (BLS) as the classifier, which fully utilizes spectral-spatial information via nonlinear transformations. Furthermore, we construct intra-class and inter-class graphs based on fuzzy set and manifold learning theories to ensure better clustering of samples within the same class and separation between different classes. A novel loss function is introduced in BLS to minimize intra-class distances and maximize inter-class distances, enhancing feature discriminability. The proposed F3GBN model achieved impressive overall accuracy on public datasets: 96.73% on Indian Pines, 98.29% on Pavia University, 98.69% on Salinas, and 99.43% on Kennedy Space Center, outperforming several classical and state-of-the-art methods, demonstrating its effectiveness and feasibility.
• Pixel and superpixel feature fusion using canonical correlation analysis enhances representation.
• A broad learning system classifier fully utilizes spectral-spatial information.
• A new loss function improves accuracy by optimizing intra-class and inter-class feature distances.
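The intra-/inter-class distance idea in the summary can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: the function name, the `lam` trade-off weight, and the plain Euclidean formulation are invented here, whereas F3GBN defines its loss over fuzzy intra-class and inter-class graphs inside the BLS objective.

```python
import numpy as np

def intra_inter_loss(features, labels, lam=0.1):
    """Toy loss: pull samples toward their own class mean (intra term),
    push class means apart (inter term). Illustrative sketch only."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # intra-class term: squared distance of each sample to its class mean
    intra = sum(((features[labels == c] - means[i]) ** 2).sum()
                for i, c in enumerate(classes))
    # inter-class term: squared distances between all pairs of class means
    diff = means[:, None, :] - means[None, :, :]
    inter = (diff ** 2).sum() / 2.0
    # minimizing this trades off compact classes against separated means
    return intra - lam * inter

# two tight, well-separated toy classes -> small intra, large inter
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 3)), rng.normal(3.0, 0.1, (5, 3))])
y = np.array([0] * 5 + [1] * 5)
loss = intra_inter_loss(X, y)
```

With well-separated classes the inter term dominates, so the loss is negative; a discriminative feature extractor trained against such an objective is driven toward exactly that regime.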
ISSN: 0020-0255
DOI: 10.1016/j.ins.2024.121504