Learning Subject-Invariant Representations from Speech-Evoked EEG Using Variational Autoencoders

Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1256-1260
Main Authors: Bollens, Lies; Francart, Tom; Van Hamme, Hugo
Format: Conference Proceeding
Language: English
Published: IEEE, 23.05.2022

Summary: The electroencephalogram (EEG) is a powerful method for understanding how the brain processes speech. For this purpose, linear models have recently been replaced by deep neural networks, which yield promising results. In related EEG classification tasks, explicitly modeling subject-invariant features has been shown to improve generalization across subjects and to benefit classification accuracy. In this work, we adapt factorized hierarchical variational autoencoders to exploit parallel EEG recordings of the same stimuli, modeling the EEG in two disentangled latent spaces: a subject space and a content space. Subject classification accuracy reaches 98.96% on the subject latent space and 1.60% on the content latent space, whereas binary content classification experiments reach accuracies of 51.51% and 62.91% on the subject and content latent spaces, respectively.
ISSN: 2379-190X
DOI: 10.1109/ICASSP43922.2022.9747297
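
To make the two-latent-space setup described in the summary concrete, the sketch below shows a minimal PyTorch VAE whose latent code is split into a subject space z_s and a content space z_c, each with its own Gaussian posterior and KL term. This is not the paper's implementation: all names (TwoSpaceVAE, elbo_loss), the layer sizes, and the flattened 64-channel × 128-sample input window are illustrative assumptions, and the sketch omits the hierarchical, sequence-level prior that a factorized hierarchical VAE uses to tie subject factors across segments of the same recording.

```python
import torch
import torch.nn as nn

class TwoSpaceVAE(nn.Module):
    """Minimal sketch of a VAE with a split latent code: a subject
    space z_s and a content space z_c. Layer sizes and the flattened
    EEG-window input format are illustrative assumptions."""

    def __init__(self, n_in=64 * 128, d_subject=32, d_content=32, d_hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, d_hidden), nn.ReLU())
        # Separate heads parameterize the two Gaussian posteriors.
        self.mu_s = nn.Linear(d_hidden, d_subject)
        self.logvar_s = nn.Linear(d_hidden, d_subject)
        self.mu_c = nn.Linear(d_hidden, d_content)
        self.logvar_c = nn.Linear(d_hidden, d_content)
        # The decoder reconstructs the window from both codes jointly.
        self.dec = nn.Sequential(
            nn.Linear(d_subject + d_content, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_in),
        )

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, x):
        h = self.enc(x)
        mu_s, lv_s = self.mu_s(h), self.logvar_s(h)
        mu_c, lv_c = self.mu_c(h), self.logvar_c(h)
        z_s = self.reparameterize(mu_s, lv_s)
        z_c = self.reparameterize(mu_c, lv_c)
        x_hat = self.dec(torch.cat([z_s, z_c], dim=-1))
        return x_hat, (mu_s, lv_s), (mu_c, lv_c)


def elbo_loss(x, x_hat, stats_s, stats_c):
    """Reconstruction term plus one KL term per latent space."""
    recon = nn.functional.mse_loss(x_hat, x, reduction="mean")
    kl = 0.0
    for mu, lv in (stats_s, stats_c):
        # KL( N(mu, exp(lv)) || N(0, I) ), summed over latent dims
        kl = kl + (-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(dim=-1)).mean()
    return recon + kl


# Usage sketch: one flattened EEG window per row.
x = torch.randn(8, 64 * 128)
model = TwoSpaceVAE()
x_hat, stats_s, stats_c = model(x)
loss = elbo_loss(x, x_hat, stats_s, stats_c)
loss.backward()
```

The split latent code is what makes the classification probes in the summary meaningful: a classifier trained on z_s should recover subject identity while one trained on z_c should not, and vice versa for stimulus content. The FHVAE adapted in the paper additionally encourages this disentanglement through its hierarchical prior, which this sketch leaves out.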