Interpretable deep neural networks for single-trial EEG classification

Bibliographic Details
Published in: Journal of Neuroscience Methods, Vol. 274, pp. 141-145
Main Authors: Sturm, Irene; Lapuschkin, Sebastian; Samek, Wojciech; Müller, Klaus-Robert
Format: Journal Article
Language: English
Published: Netherlands, 01.12.2016
Summary: In cognitive neuroscience, the potential of deep neural networks (DNNs) for solving complex classification tasks is yet to be fully exploited. The most limiting factor is that DNNs, as notorious 'black boxes', do not provide insight into the neurophysiological phenomena underlying a decision. Layer-wise relevance propagation (LRP) has been introduced as a novel method for explaining individual network decisions. We propose the application of DNNs with LRP, for the first time, to EEG data analysis. Through LRP, single-trial DNN decisions are transformed into heatmaps indicating each data point's relevance for the outcome of the decision. We compare the classification performance of DNNs to that of linear CSP-LDA on two data sets related to motor-imagery BCI. The DNN achieves classification accuracies comparable to those of CSP-LDA, and in subjects with low performance, subject-to-subject transfer of trained DNNs can improve the results. The single-trial LRP heatmaps reveal neurophysiologically plausible patterns that resemble CSP-derived scalp maps. Critically, while CSP patterns represent class-wise aggregated information, LRP heatmaps pinpoint neural patterns to single time points in single trials. We demonstrate that DNNs are a powerful non-linear tool for EEG analysis and that, with LRP, a new quality of high-resolution assessment of neural activity can be reached. LRP is a potential remedy for the lack of interpretability that has limited the utility of DNNs in neuroscientific applications. The extreme specificity of the LRP-derived heatmaps opens up new avenues for investigating the neural activity underlying complex perception- or decision-related processes.
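
To make the LRP idea in the summary concrete, here is a minimal sketch, not the authors' implementation: a hypothetical NumPy example that applies the epsilon-variant LRP rule to a small fully connected ReLU network and redistributes the score of one output class back to a flattened single-trial input. The network size (64 channels x 100 time samples, 128 hidden units, 2 classes), the epsilon rule, and all names are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Hypothetical illustration of layer-wise relevance propagation (LRP)
    # with the epsilon rule for a toy fully connected ReLU network.
    # The input is a flattened single-trial EEG segment (64 channels x 100 samples).

    rng = np.random.default_rng(0)

    layer_shapes = [(6400, 128), (128, 2)]          # input -> hidden -> 2 classes
    weights = [rng.normal(scale=0.01, size=s) for s in layer_shapes]
    biases = [np.zeros(s[1]) for s in layer_shapes]

    def forward(x):
        """Forward pass; keep every layer's activations for the LRP backward pass."""
        activations = [x]
        for i, (W, b) in enumerate(zip(weights, biases)):
            z = activations[-1] @ W + b
            if i < len(weights) - 1:                # ReLU on hidden layers, linear output
                z = np.maximum(z, 0.0)
            activations.append(z)
        return activations

    def lrp_epsilon(activations, target_class, eps=1e-6):
        """Redistribute the target-class score back to the input (epsilon rule)."""
        relevance = np.zeros_like(activations[-1])
        relevance[target_class] = activations[-1][target_class]
        for i in reversed(range(len(weights))):
            a, W, b = activations[i], weights[i], biases[i]
            z = a @ W + b
            z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by zero
            s = relevance / z                           # relevance per unit of pre-activation
            relevance = a * (W @ s)                     # redistribute to the lower layer
        return relevance                                # same shape as the input

    # Single-trial "EEG" input and its relevance heatmap
    x = rng.normal(size=6400)
    acts = forward(x)
    heatmap = lrp_epsilon(acts, target_class=int(np.argmax(acts[-1])))
    print(heatmap.reshape(64, 100).shape)               # per channel / time point relevance

The resulting relevance vector has the same shape as the input, so it can be reshaped to channels x time and inspected as the kind of single-trial heatmap the summary describes; the paper's actual architecture and propagation rules may differ.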
ISSN: 1872-678X
DOI: 10.1016/j.jneumeth.2016.10.008