MGFormer: A lightweight multi-granular transformer for subject-independent Alzheimer’s classification


Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 110, p. 108230
Main Authors: Rahman, Raiyan; Azad, Abul Kalam al; Momen, Sifat
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2025
Summary: Transformer-based models have shown great potential for detecting Alzheimer’s Disease (AD) from EEG signals. However, these models often struggle due to their high computational demands and limited ability to capture EEG features across multiple scales. Most existing methods use shallow feature extraction techniques, failing to fully consider the important spatial relationships and spectral information essential for identifying subtle disease-related changes. We propose MGFormer, a lightweight Convolutional Neural Network (CNN)-Transformer hybrid architecture designed for Electroencephalogram (EEG)-based AD classification. MGFormer features a Multi-Granular Token Encoder (MgTE) that extracts spatial–temporal features at multiple granularities, and a Hybrid Feature Fusion (HFF) module that combines long-range temporal modeling (via self-attention), local feature extraction (via 1D CNN), and Fast Fourier Transform (FFT)-based spectral information from the embeddings. We perform a subject-independent evaluation across 113 subjects, which ensures realistic generalizability in the medical domain. MGFormer achieves 70.48% accuracy on the AUT-AD dataset and 97.85% on the FSU-AD dataset, outperforming nine state-of-the-art models across six metrics. Ablation studies confirm the critical contributions of the FFT features, the 1D CNN, and HFF depth, while a model complexity analysis shows superior efficiency in parameters and FLOPs compared to baselines. MGFormer effectively addresses key limitations in EEG-based AD classification, offering strong accuracy, generalizability, and computational efficiency.

Highlights:
•We propose a lightweight CNN-Transformer for subject-independent AD EEG detection.
•Our model captures spatial-temporal EEG features at multiple granularities.
•It unifies self-attention, 1D convolution, and FFT for EEG feature extraction.
•The model outperforms others with fewer parameters and lower computation cost.
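The abstract describes the HFF module as fusing three feature branches: self-attention for long-range temporal context, a 1D CNN for local patterns, and FFT-derived spectral information. A minimal NumPy sketch of that fusion idea is below; the single-head attention with identity projections, the depthwise convolution kernel, and the fusion-by-concatenation are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a hybrid feature fusion over token embeddings:
# (a) self-attention branch, (b) 1D-convolution branch, (c) FFT-magnitude
# branch, concatenated along the feature dimension. Shapes and fusion rule
# are assumptions for demonstration only.
import numpy as np

def self_attention(x):
    # x: (tokens, dim). Single-head scaled dot-product attention with
    # identity Q/K/V projections (a simplifying assumption).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def conv1d(x, kernel):
    # Depthwise 1D convolution along the token axis with "same" padding.
    return np.stack(
        [np.convolve(x[:, c], kernel, mode="same") for c in range(x.shape[1])],
        axis=1,
    )

def fft_features(x):
    # Magnitude spectrum along the token axis (real FFT), zero-padded back
    # to the token length so the branches can be concatenated.
    mag = np.abs(np.fft.rfft(x, axis=0))
    out = np.zeros_like(x)
    n = min(mag.shape[0], x.shape[0])
    out[:n] = mag[:n]
    return out

def hybrid_feature_fusion(x):
    # Fuse the three branches along the feature dimension.
    return np.concatenate(
        [self_attention(x), conv1d(x, np.array([0.25, 0.5, 0.25])), fft_features(x)],
        axis=1,
    )

tokens = np.random.default_rng(0).standard_normal((16, 8))  # (tokens, dim)
fused = hybrid_feature_fusion(tokens)
print(fused.shape)  # (16, 24): three 8-dim branches concatenated
```

In the actual model the branch outputs would feed further HFF layers and a classifier head; this sketch only shows how the three feature types can coexist on one token sequence.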
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2025.108230