CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition


Bibliographic Details
Published in Sensors (Basel, Switzerland), Vol. 22, no. 13, p. 4679
Main Authors Rusnac, Ana-Luiza, Grigore, Ovidiu
Format Journal Article
Language English
Published Basel: MDPI AG, 21.06.2022
ISSN 1424-8220
DOI 10.3390/s22134679

Abstract Speech is a complex mechanism allowing us to communicate our needs, desires and thoughts. In some cases of neural dysfunctions, this ability is highly affected, which makes everyday life activities that require communication a challenge. This paper studies different parameters of an intelligent imaginary speech recognition system to obtain the best performance according to the developed method that can be applied to a low-cost system with limited resources. In developing the system, we used signals from the Kara One database containing recordings acquired for seven phonemes and four words. We used in the feature extraction stage a method based on covariance in the frequency domain that performed better compared to the other time-domain methods. Further, we observed the system performance when using different window lengths for the input signal (0.25 s, 0.5 s and 1 s) to highlight the importance of the short-term analysis of the signals for imaginary speech. The final goal being the development of a low-cost system, we studied several architectures of convolutional neural networks (CNN) and showed that a more complex architecture does not necessarily lead to better results. Our study was conducted on eight different subjects, and it is meant to be a subject’s shared system. The best performance reported in this paper is up to 37% accuracy for all 11 different phonemes and words when using cross-covariance computed over the signal spectrum of a 0.25 s window and a CNN containing two convolutional layers with 64 and 128 filters connected to a dense layer with 64 neurons. The final system qualifies as a low-cost system using limited resources for decision-making and having a running time of 1.8 ms tested on an AMD Ryzen 7 4800HS CPU.
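The feature-extraction step the abstract describes — splitting the multichannel EEG into short windows, moving each window to the frequency domain, and computing covariance across the channel spectra — can be sketched as below. This is an illustrative reconstruction, not the authors' code; the channel count, sampling rate, and FFT details are assumptions (the abstract fixes only the window lengths, e.g. 0.25 s).

```python
import numpy as np

def spectral_cross_covariance(eeg, fs=1000.0, win_s=0.25):
    """Split a (channels, samples) EEG array into non-overlapping windows of
    win_s seconds and return one (channels x channels) cross-covariance
    matrix per window, computed over the per-channel magnitude spectra."""
    n_ch, n_samp = eeg.shape
    win = int(fs * win_s)
    feats = []
    for start in range(0, n_samp - win + 1, win):
        seg = eeg[:, start:start + win]
        spec = np.abs(np.fft.rfft(seg, axis=1))  # per-channel magnitude spectrum
        feats.append(np.cov(spec))               # cross-covariance across channels
    return np.stack(feats)

# Example: 8 channels, 1 s of synthetic data at 1 kHz -> four 0.25 s windows,
# each summarized by an 8x8 symmetric matrix that a CNN can treat as an image.
rng = np.random.default_rng(0)
features = spectral_cross_covariance(rng.standard_normal((8, 1000)))
print(features.shape)  # (4, 8, 8)
```

Each window thus yields a fixed-size channel-by-channel matrix regardless of window length, which is what makes the short 0.25 s analysis directly comparable to the 0.5 s and 1 s variants.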
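The best network reported in the abstract is deliberately small: two convolutional layers (64 and 128 filters) feeding a 64-neuron dense layer and an 11-way output. A back-of-the-envelope parameter count in plain Python illustrates why it qualifies as low-cost; the 8×8 single-channel input, 3×3 kernels, 'valid' padding, and absence of pooling are hypothetical assumptions, since the abstract fixes only the filter counts, the dense width, and the class count.

```python
# Rough parameter count for the architecture reported in the abstract:
# conv(64) -> conv(128) -> dense(64) -> softmax(11).
# Input dimensions and kernel size below are illustrative assumptions.

def conv2d_out(h, w, k, stride=1, pad=0):
    """Spatial output size of a square-kernel 2-D convolution."""
    return ((h + 2 * pad - k) // stride + 1,
            (w + 2 * pad - k) // stride + 1)

h, w, c = 8, 8, 1           # assumed input: one 8x8 covariance matrix
params = 0
for filters in (64, 128):   # the two convolutional layers
    k = 3
    h, w = conv2d_out(h, w, k)
    params += (k * k * c + 1) * filters  # weights + biases per layer
    c = filters

flat = h * w * c            # flatten before the dense layer
params += (flat + 1) * 64   # dense layer with 64 neurons
params += (64 + 1) * 11     # output layer: 11 phonemes/words

print(h, w, c, flat, params)
```

Under these assumptions the whole model stays in the low hundreds of thousands of parameters, consistent with the abstract's claim of a 1.8 ms decision time on a laptop-class CPU.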
AuthorAffiliation Department of Applied Electronics and Information Engineering, Faculty of Electronics, Telecommunications and Information Technology, Polytechnic University of Bucharest, 060042 Bucharest, Romania
CitedBy_id crossref_primary_10_1109_TCDS_2024_3431224
crossref_primary_10_3389_fnhum_2024_1398065
crossref_primary_10_3390_s23125575
crossref_primary_10_1088_1741_2552_acc976
crossref_primary_10_1007_s11356_023_25509_4
crossref_primary_10_1016_j_bspc_2023_105154
crossref_primary_10_3390_app122211873
crossref_primary_10_3390_s23104853
crossref_primary_10_3390_s24030877
crossref_primary_10_1016_j_smhl_2024_100477
crossref_primary_10_1007_s12021_024_09698_y
crossref_primary_10_1007_s10462_023_10662_6
crossref_primary_10_1088_2399_6528_ad0197
crossref_primary_10_3390_s22218122
ContentType Journal Article
Copyright 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Discipline Engineering
EISSN 1424-8220
ExternalDocumentID oai_doaj_org_article_1256c6516d764dfaaad5fd61a160e24a
PMC9268757
10_3390_s22134679
ISSN 1424-8220
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 13
Language English
License https://creativecommons.org/licenses/by/4.0
Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
ORCID 0000-0002-6381-5296
0000-0002-5966-2944
PMID 35808173
PQID 2686189061
PQPubID 2032333
PublicationCentury 2000
PublicationDate 2022-06-21
PublicationDecade 2020
PublicationPlace Basel
PublicationTitle Sensors (Basel, Switzerland)
PublicationYear 2022
Publisher MDPI AG
SourceID doaj
pubmedcentral
proquest
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Enrichment Source
Index Database
StartPage 4679
SubjectTerms Accuracy
Amyotrophic lateral sclerosis
Classification
Communication
convolutional neural network
Electrodes
Electroencephalography
imaginary speech
Kara One database
Methods
Neural networks
Researchers
signal processing
Speaking
Speech
Voice recognition
Wavelet transforms
URI https://www.proquest.com/docview/2686189061
https://www.proquest.com/docview/2687716955
https://pubmed.ncbi.nlm.nih.gov/PMC9268757
https://doaj.org/article/1256c6516d764dfaaad5fd61a160e24a
Volume 22