Towards Ultrasound Tongue Image prediction from EEG during speech production

Bibliographic Details
Published in: arXiv.org
Main Authors: Tamás Gábor Csapó, Frigyes Viktor Arthur, Péter Nagy, Ádám Boncz
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.10.2023
Summary: Initial research has already proposed speech-based BCIs using brain signals (e.g. non-invasive EEG and invasive sEEG / ECoG), but there is a lack of combined methods that investigate non-invasive brain, articulation, and speech signals together, and that analyze the cognitive processes in the brain, the kinematics of the articulatory movement, and the resulting speech signal. In this paper, we describe our multimodal (electroencephalography, ultrasound tongue imaging, and speech) analysis and synthesis experiments as a feasibility study. We extend the analysis of brain signals recorded during speech production with ultrasound-based articulation data. From the brain signal measured with EEG, we predict ultrasound images of the tongue with a fully connected deep neural network. The results show that there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e. the network can differentiate articulated speech from the neutral tongue position.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2306.05374
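The summary describes a fully connected deep neural network mapping EEG windows to ultrasound tongue images. A minimal sketch of such a mapping is shown below; the channel counts, window length, image resolution, and hidden size are illustrative assumptions, not the paper's actual configuration, and the weights here are random rather than trained:

```python
import numpy as np

# Hypothetical dimensions -- the record does not state the network
# configuration, so all of these values are illustrative assumptions.
EEG_CHANNELS = 64              # assumed EEG montage size
EEG_SAMPLES = 100              # assumed samples per analysis window
US_HEIGHT, US_WIDTH = 64, 128  # assumed (downsampled) ultrasound image size

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden_dim, out_dim):
    """Randomly initialise a one-hidden-layer fully connected network."""
    return {
        "W1": rng.standard_normal((in_dim, hidden_dim)) * np.sqrt(2.0 / in_dim),
        "b1": np.zeros(hidden_dim),
        "W2": rng.standard_normal((hidden_dim, out_dim)) * np.sqrt(2.0 / hidden_dim),
        "b2": np.zeros(out_dim),
    }

def forward(params, eeg_window):
    """Map one flattened EEG window to a flattened ultrasound image."""
    x = eeg_window.reshape(-1)                            # (channels * samples,)
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])  # ReLU hidden layer
    return h @ params["W2"] + params["b2"]                # linear pixel outputs

params = init_mlp(EEG_CHANNELS * EEG_SAMPLES, 512, US_HEIGHT * US_WIDTH)
eeg = rng.standard_normal((EEG_CHANNELS, EEG_SAMPLES))
pred_image = forward(params, eeg).reshape(US_HEIGHT, US_WIDTH)
print(pred_image.shape)  # (64, 128)
```

In practice such a regression network would be trained with a pixel-wise loss (e.g. MSE) on time-aligned EEG/ultrasound pairs; this sketch only illustrates the input/output shapes of the mapping.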