A hybrid approach for EEG motor imagery classification using adaptive margin disparity and knowledge transfer in convolutional neural networks

Bibliographic Details
Published in: Computers in Biology and Medicine, Vol. 195, p. 110675
Main Authors: Vadivelan D., Senthil; Sethuramalingam, Prabhu
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.09.2025

Summary: Motor Imagery (MI) using Electroencephalography (EEG) is essential in Brain-Computer Interface (BCI) technology, enabling interaction with external devices by interpreting brain signals. Recent advancements in Convolutional Neural Networks (CNNs) have significantly improved EEG classification tasks; however, traditional CNN-based methods rely on fixed convolution modes and kernel sizes, limiting their ability to capture diverse temporal and spatial features from one-dimensional EEG-MI signals. This paper introduces the Adaptive Margin Disparity with Knowledge Transfer 2D Model (AMD-KT2D), a novel framework designed to enhance EEG-MI classification. The process begins by transforming EEG-MI signals into 2D time-frequency representations using the Optimized Short-Time Fourier Transform (OptSTFT), which optimizes windowing functions and time-frequency resolution to preserve dynamic temporal and spatial features. The AMD-KT2D framework integrates a guide-learner architecture in which Improved ResNet50 (IResNet50), pre-trained on a large-scale dataset, extracts high-level spatial-temporal features, while a Customized 2D Convolutional Neural Network (C2DCNN) captures multi-scale features. To ensure feature alignment and knowledge transfer, the Adaptive Margin Disparity Discrepancy (AMDD) loss function minimizes domain disparity, facilitating multi-scale feature learning in the C2DCNN. The optimized learner model then classifies EEG-MI images into left- and right-hand motor imagery classes. Experimental results on a real-world EEG-MI dataset collected with the Emotiv Epoc Flex system showed that AMD-KT2D achieved a classification accuracy of 96.75% in the subject-dependent setting and 92.17% in the subject-independent setting, demonstrating its effectiveness in leveraging domain adaptation, knowledge transfer, and multi-scale feature learning for advanced EEG-based BCI applications.

Highlights:
• EEG-MI signals are transformed into 2D images via OptSTFT, preserving spatial-temporal features for better classification.
• AMD-KT2D uses a guide-learner setup with IResNet50 for feature extraction and C2DCNN for multi-scale pattern detection.
• AMDD loss boosts feature alignment and knowledge transfer, reducing data disparities and enhancing cross-subject generalization.
• Optimized C2DCNN accurately classifies EEG images into left/right-hand movements, enhancing motor imagery task performance.
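
The summary describes converting one-dimensional EEG-MI trials into 2D time-frequency images before they are passed to the CNNs. The record does not include the OptSTFT algorithm itself; the snippet below is only a minimal sketch of a plain STFT-based conversion using SciPy, where the sampling rate, window, and segment length are illustrative assumptions (the paper's OptSTFT optimizes such parameters).

```python
# Minimal sketch: one-channel EEG trial -> 2D time-frequency magnitude image.
# Assumptions (not from the record): 128 Hz sampling, Hann window, 64-sample
# segments with 48-sample overlap. The paper's OptSTFT tunes these settings;
# this is a plain STFT for illustration only.
import numpy as np
from scipy.signal import stft

def eeg_trial_to_tf_image(trial, fs=128, nperseg=64, noverlap=48):
    """Return a normalized magnitude spectrogram (freq_bins x time_frames)."""
    _, _, Zxx = stft(trial, fs=fs, window="hann",
                     nperseg=nperseg, noverlap=noverlap)
    spec = np.abs(Zxx)                                             # magnitude spectrum
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-8)  # scale to [0, 1]
    return spec

# Example: a 4-second synthetic trial at the assumed 128 Hz rate
image = eeg_trial_to_tf_image(np.random.randn(4 * 128))
print(image.shape)  # (freq_bins, time_frames)
```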
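
The summary also mentions the Adaptive Margin Disparity Discrepancy (AMDD) loss used to align the learner's features with the guide's. The exact AMDD formulation is not given in this record; the PyTorch snippet below is a loosely related, margin-style disparity term between guide and learner predictions, with the margin value an illustrative assumption rather than the paper's setting.

```python
# Loose sketch of a margin-style disparity term between guide (IResNet50) and
# learner (C2DCNN) outputs; NOT the paper's AMDD formulation. `margin` is an
# illustrative hyperparameter.
import torch
import torch.nn.functional as F

def margin_disparity(guide_logits, learner_logits, margin=0.5):
    """Penalize learner/guide prediction disagreement that exceeds a margin."""
    guide_prob = F.softmax(guide_logits, dim=1)              # soft targets from the guide
    learner_logprob = F.log_softmax(learner_logits, dim=1)   # learner log-probabilities
    ce = -(guide_prob * learner_logprob).sum(dim=1)          # per-sample cross-entropy
    return F.relu(ce - margin).mean()                        # only disparity beyond margin

# Example with random logits for the two MI classes (left vs. right hand)
print(margin_disparity(torch.randn(8, 2), torch.randn(8, 2)))
```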
ISSN: 0010-4825, 1879-0534
DOI: 10.1016/j.compbiomed.2025.110675