Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks

Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 10, pp. 3839-3852
Main Authors Kwon, O-Yeon; Lee, Min-Ho; Guan, Cuntai; Lee, Seong-Whan
Format Journal Article
Language English
Published United States, IEEE, 01.10.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text

Abstract For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral-spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. From discriminative frequency bands, spectral-spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].
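The abstract's band-selection and input-generation steps can be made concrete with a short sketch. The following is a minimal illustration, not the authors' released code: it assumes epoched EEG of shape (trials, channels, samples) with binary motor-imagery labels, uses scikit-learn's mutual_info_classif as a stand-in for the paper's information-theoretic band score, and converts each selected band into a trace-normalized spatial covariance matrix of the kind the abstract describes as the CNN input.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import mutual_info_classif

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter along the last (time) axis.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def band_scores(X, y, bands, fs):
    # Score each candidate band by the mutual information between its
    # log band-power features and the class labels (illustrative stand-in
    # for the paper's information-theoretic observation model).
    scores = []
    for lo, hi in bands:
        power = np.log(bandpass(X, lo, hi, fs).var(axis=-1) + 1e-12)
        scores.append(mutual_info_classif(power, y, random_state=0).sum())
    return np.asarray(scores)

def covariance_inputs(X, bands, fs):
    # Band-pass each trial in every selected band and convert it to a
    # trace-normalized spatial covariance matrix (the CNN input).
    mats = []
    for lo, hi in bands:
        Xf = bandpass(X, lo, hi, fs)
        C = Xf @ Xf.transpose(0, 2, 1)                  # (trials, ch, ch)
        C /= np.trace(C, axis1=1, axis2=2)[:, None, None]
        mats.append(C)
    return np.stack(mats, axis=1)                       # (trials, bands, ch, ch)

# Example with placeholder data: pick the four highest-scoring 4-Hz-wide
# bands between 4 and 42 Hz (the band grid and count are assumptions).
fs = 250
X = np.random.randn(100, 62, 1000)                      # placeholder EEG epochs
y = np.random.randint(0, 2, 100)                        # placeholder labels
bands = [(f, f + 4) for f in range(4, 40, 2)]
top = [bands[i] for i in np.argsort(band_scores(X, y, bands, fs))[-4:]]
inputs = covariance_inputs(X, top, fs)                  # (100, 4, 62, 62)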
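For the fusion stage, a minimal PyTorch sketch of the concatenation idea follows: one small convolutional branch per spectral-spatial covariance input, with branch features concatenated before a shared linear classifier. Branch depth, channel counts, and kernel sizes are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    # One branch trained on a single spectral-spatial covariance input.
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                    # -> (32, 4, 4)
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )

    def forward(self, x):                               # x: (B, 1, ch, ch)
        return self.net(x)

class FusionCNN(nn.Module):
    # Concatenation fusion: per-band branch features are joined before
    # the classifier, mirroring the fusion step described in the abstract.
    def __init__(self, n_bands, n_classes=2, feat_dim=64):
        super().__init__()
        self.branches = nn.ModuleList(
            [BranchCNN(feat_dim) for _ in range(n_bands)])
        self.classifier = nn.Linear(n_bands * feat_dim, n_classes)

    def forward(self, x):                               # x: (B, n_bands, ch, ch)
        feats = [b(x[:, i:i + 1]) for i, b in enumerate(self.branches)]
        return self.classifier(torch.cat(feats, dim=1))

model = FusionCNN(n_bands=4)
logits = model(torch.randn(8, 4, 62, 62))               # -> (8, 2) class scores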
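For context on the subject-dependent baselines, the CSP method named at the end of the abstract reduces to a generalized eigendecomposition of the two class covariance matrices. A standard textbook implementation might look like the sketch below (three filter pairs is an illustrative choice; FBCSP simply repeats this per frequency band).

import numpy as np
from scipy.linalg import eigh

def csp_filters(X, y, n_pairs=3):
    # X: (trials, channels, samples); y: binary labels.
    # Returns (2 * n_pairs, channels) spatial filters.
    covs = []
    for c in (0, 1):
        trial_covs = [x @ x.T / np.trace(x @ x.T) for x in X[y == c]]
        covs.append(np.mean(trial_covs, axis=0))
    # Solve C0 w = lambda (C0 + C1) w; the extreme eigenvalues give the
    # filters with the most discriminative variance ratio between classes.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    idx = np.argsort(vals)
    keep = np.concatenate([idx[:n_pairs], idx[-n_pairs:]])
    return vecs[:, keep].T

def csp_features(X, W):
    # Log-variance of the spatially filtered trials, the usual CSP feature.
    Z = np.einsum("fc,ncs->nfs", W, X)
    return np.log(Z.var(axis=-1) + 1e-12)

# e.g. W = csp_filters(X, y); F = csp_features(X, W)    # (trials, 6)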
Author_xml – sequence: 1
  givenname: O-Yeon
  orcidid: 0000-0001-5498-0540
  surname: Kwon
  fullname: Kwon, O-Yeon
  email: data0311@gmail.com
  organization: Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
– sequence: 2
  givenname: Min-Ho
  orcidid: 0000-0002-5730-1715
  surname: Lee
  fullname: Lee, Min-Ho
  email: minho.lee@nu.edu.kz
  organization: Department of Computer Science, Nazarbayev University, Astana, Kazakhstan
– sequence: 3
  givenname: Cuntai
  orcidid: 0000-0002-0872-3276
  surname: Guan
  fullname: Guan, Cuntai
  email: ctguan@ntu.edu.sg
  organization: School of Computer Science and Engineering, Nanyang Technological University, Singapore
– sequence: 4
  givenname: Seong-Whan
  orcidid: 0000-0002-6249-4996
  surname: Lee
  fullname: Lee, Seong-Whan
  email: sw.lee@korea.ac.kr
  organization: Department of Artificial Intelligence, Korea University, Seoul, South Korea
BackLink https://www.ncbi.nlm.nih.gov/pubmed/31725394 (View this record in MEDLINE/PubMed)
CODEN ITNNAL
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TNNLS.2019.2946869
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE Xplore Open Access Journals
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
Discipline Computer Science
EISSN 2162-2388
EndPage 3852
ExternalDocumentID 31725394
10.1109/TNNLS.2019.2946869
8897723
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
GrantInformation_xml – fundername: Institute for Information and Communications Technology Planning and Evaluation (IITP) Grant
– fundername: Korea Government (MSIT) (Development of Intelligent Pattern Recognition Softwares for Ambulatory Brain-Computer Interface) and (Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning)
  grantid: 2015-0-00185; 2017-0-00451
– fundername: Samsung Research Funding Center of Samsung Electronics
  grantid: SRFC-TC1603-02
  funderid: 10.13039/100004358
ISSN 2162-237X
2162-2388
Issue 10
Language English
License https://creativecommons.org/licenses/by/4.0/legalcode
ORCID 0000-0002-5730-1715
0000-0001-5498-0540
0000-0002-6249-4996
0000-0002-0872-3276
OpenAccessLink https://ieeexplore.ieee.org/document/8897723
PMID 31725394
PQID 2449310498
PQPubID 85436
PageCount 14
PublicationCentury 2000
PublicationDate 2020-10-01
PublicationDateYYYYMMDD 2020-10-01
PublicationDate_xml – month: 10
  year: 2020
  text: 2020-10-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 3839
SubjectTerms Artificial neural networks
Band theory
Bayesian analysis
Brain
Brain modeling
Brain–computer interface (BCI)
Calibration
convolutional neural networks (CNNs)
Covariance matrix
deep learning (DL)
EEG
Electrodes
Electroencephalography
electroencephalography (EEG)
Embedding
Feature extraction
Filter banks
Frequencies
Frequency dependence
Human-computer interface
Imagery
Information theory
Interfaces
Mathematical models
Mental task performance
Model accuracy
motor imagery (MI)
Neural networks
Optimization
Representations
Spectra
subject-independent
Task analysis
Title Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks
URI https://ieeexplore.ieee.org/document/8897723
https://www.ncbi.nlm.nih.gov/pubmed/31725394
https://www.proquest.com/docview/2449310498
https://www.proquest.com/docview/2315092944
Volume 31