A Wearable Wireless Brain-Computer Interface Using Steady-State Visual Evoked Potentials

Bibliographic Details
Published in: 2018 3rd International Conference on Control, Robotics and Cybernetics (CRC), pp. 78-82
Main Authors: Lim, Alfred; Chia, Wai Chong
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2018
DOI: 10.1109/CRC.2018.00024

Summary: The objective of this study is to investigate the feasibility of a single-electrode electroencephalogram (EEG)-based brain-computer interface (BCI) in differentiating two conditions. This approach has the potential to serve as a computer input device through which users express binary choices (e.g., left and right, yes and no). The attentional allocation of participants among boxes that each flicker at a different frequency (e.g., 8.6 Hz and 12 Hz) can be distinguished from the EEG alone. Traditionally, steady-state visual evoked potentials (SSVEPs) are studied using multi-channel EEG systems, which greatly hinder the user's mobility. Although SSVEPs are typically examined in the frequency domain using signals from the occipital region of the brain, we instead tested five classifiers on 44 features extracted from EEG recorded with a single electrode at the frontopolar site (FP1). Apart from frequency-domain features, such as fast Fourier transform (FFT) coefficients and power spectral density (PSD) estimates, we also included time-domain features from the pre-frontal region and achieved an average classification accuracy of 74.58% using a random forest (RF) classifier.
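The pipeline described in the summary, extracting FFT-coefficient, PSD, and time-domain features from a single EEG channel and classifying the two flicker conditions with a random forest, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the sampling rate, epoch length, and exact choice of 44 features are assumptions made for the example.

```python
# Hypothetical sketch of single-channel SSVEP classification:
# frequency-domain (FFT coefficients, PSD) plus time-domain features,
# fed to a random forest. All signal parameters are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 256           # assumed sampling rate (Hz)
SEGMENT_S = 2      # assumed epoch length (seconds)

def synth_ssvep(freq, n_samples, fs=FS, noise=1.0, rng=None):
    """Simulate a noisy single-channel EEG epoch containing an SSVEP at `freq` Hz."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.arange(n_samples) / fs
    return np.sin(2 * np.pi * freq * t) + noise * rng.standard_normal(n_samples)

def extract_features(epoch, fs=FS):
    """44 features: 20 FFT magnitudes + 20 PSD bins + 4 time-domain statistics."""
    fft_mag = np.abs(np.fft.rfft(epoch))[:20]           # low-frequency FFT coefficients
    _, psd = welch(epoch, fs=fs, nperseg=len(epoch))    # power spectral density (Welch)
    psd = psd[:20]
    time_feats = [epoch.mean(), epoch.std(), np.ptp(epoch),
                  np.mean(np.abs(np.diff(epoch)))]      # simple time-domain summaries
    return np.concatenate([fft_mag, psd, time_feats])

rng = np.random.default_rng(0)
n = FS * SEGMENT_S
# Two flicker conditions, as in the paper's example frequencies: 8.6 Hz vs 12 Hz.
X = np.array([extract_features(synth_ssvep(f, n, rng=rng))
              for f in [8.6] * 100 + [12.0] * 100])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"accuracy: {accuracy:.2f}")
```

On clean synthetic data the two spectral peaks are easily separable; the paper's 74.58% reflects the much harder case of real EEG recorded at FP1 rather than over the visual cortex.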