Latent variable method for automatic adaptation to background states in motor imagery BCI

Bibliographic Details
Published in: Journal of Neural Engineering, Vol. 15, No. 1, pp. 16004-16017
Main Authors: Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei
Format: Journal Article
Language: English
Published: England: IOP Publishing, 01.02.2018

Summary: Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage, so there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method based on a probabilistic model with a discrete latent variable. To estimate the model's parameters, we suggest using the expectation-maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, our method was also capable of recognizing background states (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then, at the prediction stage, by weighting decisions on target states by the posterior probabilities of background states.
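The abstract describes the approach only at a high level: a discrete latent variable stands in for the unlabeled background state, its parameters are fitted with expectation-maximization, and decisions on target states are weighted by the posterior probabilities of the background states. The Python sketch below illustrates one plausible reading of that scheme on toy data; the model choices (a two-component GaussianMixture for the latent background state, per-state LogisticRegression classifiers) and all variable names are assumptions made for illustration, not the authors' implementation.

# Minimal sketch of the general idea (assumed model, not the paper's exact one):
# a discrete latent background state z is inferred by EM from unlabeled
# "background" features, and target-state prediction mixes per-state
# classifiers weighted by the posterior p(z | x).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400

# Toy features: X_bg is sensitive to the background state (e.g. occipital
# alpha power for eyes open/closed), X_task carries motor-imagery features,
# y_task holds the target-state labels used for supervised training.
X_bg = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(3, 1, (n // 2, 2))])
X_task = rng.normal(0, 1, (n, 4))
y_task = rng.integers(0, 2, n)

# EM (inside GaussianMixture) infers two latent background states without any
# background-state labels; resp[i, k] approximates p(z = k | x_i).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_bg)
resp = gmm.predict_proba(X_bg)

# One target-state classifier per latent background state, trained with the
# responsibilities as soft sample weights.
clfs = []
for k in range(2):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_task, y_task, sample_weight=resp[:, k])
    clfs.append(clf)

def predict_proba(x_bg_new, x_task_new):
    """Mix per-state class probabilities, weighted by p(z | x) on new data."""
    w = gmm.predict_proba(x_bg_new)                                    # (m, K)
    p = np.stack([c.predict_proba(x_task_new) for c in clfs], axis=0)  # (K, m, C)
    return np.einsum('mk,kmc->mc', w, p)

print(predict_proba(X_bg[:3], X_task[:3]).round(3))

Dropping the latent state, i.e. training a single classifier on all trials regardless of background, loosely corresponds to the kind of baseline against which the abstract reports improvements.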
ISSN: 1741-2560, 1741-2552
DOI: 10.1088/1741-2552/aa8065