Decoding Two-Class Motor Imagery EEG with Capsule Networks

Bibliographic Details
Published in: 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), pp. 1 - 4
Main Authors: Ha, Kwon-Woo; Jeong, Jin-Woo
Format: Conference Proceeding
Language: English
Published: IEEE, 01.02.2019

More Information
Summary: Recently, deep learning approaches such as convolutional neural networks (CNNs) have been widely applied to improve the classification performance of motor imagery-based brain-computer interfaces (BCIs). However, the classification performance of a CNN is known to degrade when the target data are distorted, and this is a particular concern for electroencephalography (EEG), where signals measured from the same user are not consistent across sessions. To address this issue, we propose to apply capsule networks (CapsNet), which implicitly learn a variety of features and thereby achieve more robust and reliable performance than traditional CNN approaches. This paper presents a novel CapsNet-based method for classifying two-class motor imagery signals. The motor imagery EEG signals are transformed into time-frequency images using the Short-Time Fourier Transform (STFT) and then used to train and test the capsule network. Experimental results on the BCI Competition IV 2b dataset show that the proposed CapsNet-based architecture outperforms previous CNN-based approaches.
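The summary describes a preprocessing step in which raw EEG trials are converted into time-frequency images via the STFT before being fed to the capsule network. Below is a minimal sketch of that transformation; the 250 Hz sampling rate matches the BCI Competition IV 2b recordings, but the window length, overlap, and per-channel magnitude stacking are illustrative assumptions, not the authors' exact configuration.

import numpy as np
from scipy.signal import stft

FS = 250          # BCI Competition IV 2b EEG is sampled at 250 Hz
WINDOW_LEN = 64   # assumed STFT window length in samples (not from the paper)
OVERLAP = 48      # assumed overlap between consecutive windows (not from the paper)

def eeg_trial_to_image(trial: np.ndarray) -> np.ndarray:
    """Convert one EEG trial of shape (channels, samples) into a stacked
    time-frequency image suitable as network input."""
    channel_images = []
    for channel in trial:
        # Short-Time Fourier Transform of a single channel.
        f, t, Zxx = stft(channel, fs=FS, nperseg=WINDOW_LEN, noverlap=OVERLAP)
        # Keep the magnitude spectrogram as one 2-D "image" per channel.
        channel_images.append(np.abs(Zxx))
    # Stack channels along a leading axis, analogous to image color channels.
    return np.stack(channel_images, axis=0)

# Example: a synthetic 3-channel, 4-second trial (2b uses three EEG channels).
trial = np.random.randn(3, 4 * FS)
image = eeg_trial_to_image(trial)
print(image.shape)  # (3, frequency_bins, time_frames)

In a pipeline like the one summarized above, each such image (or a batch of them) would then be passed to the capsule network for two-class motor imagery classification.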
ISSN: 2375-9356
DOI: 10.1109/BIGCOMP.2019.8678917