Cross-Subject MEG Decoding Using 3D Convolutional Neural Networks


Bibliographic Details
Published in: 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), pp. 354 - 359
Main Authors: Huang, Zebin; Yu, Tianyou
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2019

Summary: Traditional MEG-based brain decoding approaches require manually designing and extracting various features from raw MEG data, and they often ignore the subtle spatial information contained in the MEG signal. Motivated by this, we present a 3D-CNN method to tackle these obstacles. In this approach, a 3-Dimensional Convolutional Neural Network (3D-CNN) is applied to classify magnetoencephalography states by effectively learning spatial-temporal representations of raw MEG data. The 3D data representation used as input to the proposed 3D-CNN model is converted from the multi-channel MEG signal so as to retain the spatial correlations between physically neighbouring MEG sensors. An improved self-training phase is developed to enhance the cross-subject performance of the proposed 3D-CNN approach. Experiments on an MEG dataset for a face vs. scramble decoding task demonstrate that the proposed method achieves promising performance.
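The conversion step the summary describes — arranging multi-channel MEG signals so that physically neighbouring sensors stay adjacent, then stacking time samples into a 3D input tensor — can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the grid shape, the channel-to-grid mapping (`channel_positions`), and the function name `meg_to_3d` are all hypothetical, assuming a simple 2D sensor grid that approximates the helmet layout.

```python
import numpy as np

def meg_to_3d(signal, channel_positions, grid_shape):
    """Convert a multi-channel MEG recording into a 3D volume.

    signal            : array of shape (n_channels, n_samples)
    channel_positions : one (row, col) grid coordinate per channel,
                        chosen so that physically neighbouring sensors
                        land in neighbouring grid cells (assumed mapping)
    grid_shape        : (height, width) of the 2D sensor grid

    Returns an array of shape (height, width, n_samples), i.e. a stack
    of 2D "sensor images" over time, suitable as 3D-CNN input.
    """
    n_channels, n_samples = signal.shape
    volume = np.zeros((*grid_shape, n_samples), dtype=signal.dtype)
    for ch, (r, c) in enumerate(channel_positions):
        # Place each channel's time series at its grid location so
        # spatial correlations between neighbours are preserved.
        volume[r, c, :] = signal[ch]
    return volume

# Toy example: 4 channels on a 2x2 grid, 10 time samples.
rng = np.random.default_rng(0)
sig = rng.standard_normal((4, 10))
pos = [(0, 0), (0, 1), (1, 0), (1, 1)]
vol = meg_to_3d(sig, pos, (2, 2))
print(vol.shape)  # (2, 2, 10)
```

A 3D convolution applied to such a volume then sees local spatial neighbourhoods and short time windows jointly, which is the spatial-temporal learning the summary refers to.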
DOI: 10.1109/WRC-SARA.2019.8931958