MAE-MI: masked autoencoder for self-supervised learning on motor imagery EEG data

Bibliographic Details
Main Authors: Zhang, Yifan; Hu, Xinyu; Feng, Huijie; Wu, Anqi; Li, Hao; Yu, Yang
Format: Conference Proceeding
Language: English
Published: SPIE, 04.09.2024

Summary: In this study, we propose a self-supervised method for learning universal neurocognitive representations from motor imagery electroencephalography (EEG) segments. Our model, MAE-MI, is a masked autoencoder that learns robust, generic embeddings. The aim is to capture essential features and recover the masked EEG signals during pretraining, rather than simply interpolating; to this end, we design a connectivity-guided masking strategy. We further optimize the encoder-decoder structure to suit the information density of EEG signals, and we introduce two finetuning modes for downstream tasks, task-specific and subject-specific, which correspond to cross-subject and single-subject evaluations, respectively. We assess the generalization performance of MAE-MI on a public motor imagery EEG dataset. The experimental results indicate that MAE-MI consistently outperforms state-of-the-art methods, with an average accuracy increase of 9.4% in single-subject prediction experiments, and obtains competitive results in cross-subject experiments. We illustrate the feature extraction capability of MAE-MI by visualizing its reconstructions of corrupted EEG signals.
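The masked-autoencoder objective the abstract describes, corrupting a subset of patches of an EEG segment and scoring reconstruction only on the masked positions, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it uses plain random masking of time patches (the paper uses a connectivity-guided masking strategy), and all function and parameter names (`mask_patches`, `masked_mse`, `patch_len`, `mask_ratio`) are hypothetical.

```python
import numpy as np

def mask_patches(segment, patch_len, mask_ratio, rng):
    """Split a (channels, time) EEG segment into non-overlapping time
    patches and zero out a random subset, MAE-style.

    Returns the corrupted segment and a boolean mask over patches.
    Illustrative only: the paper guides this choice with channel
    connectivity rather than uniform random sampling.
    """
    n_ch, n_t = segment.shape
    n_patches = n_t // patch_len
    n_masked = int(round(mask_ratio * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_idx] = True
    corrupted = segment.copy()
    for p in np.flatnonzero(mask):
        corrupted[:, p * patch_len:(p + 1) * patch_len] = 0.0
    return corrupted, mask

def masked_mse(recon, target, mask, patch_len):
    """Pretraining loss: mean squared error over masked patches only,
    so the model is rewarded for recovering hidden signal, not for
    copying the visible (unmasked) input."""
    errs = []
    for p in np.flatnonzero(mask):
        sl = slice(p * patch_len, (p + 1) * patch_len)
        errs.append(np.mean((recon[:, sl] - target[:, sl]) ** 2))
    return float(np.mean(errs))
```

In a full pipeline, an encoder would embed the visible patches and a lightweight decoder would predict the masked ones; `masked_mse` would then be minimized between the decoder output and the original segment.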
Bibliography: Conference Date: 2024-06-07 to 2024-06-09
Conference Location: Yinchuan, China
ISBN: 1510682597; 9781510682597
ISSN: 0277-786X
DOI: 10.1117/12.3039558