Thinker invariance: enabling deep neural networks for BCI across more people
Published in | Journal of Neural Engineering Vol. 17; no. 5; pp. 56008 - 56029 |
---|---|
Main Authors | Kostas, Demetres; Rudzicz, Frank |
Format | Journal Article |
Language | English |
Published | England: IOP Publishing, 13.10.2020 |
Subjects | BCI; brain computer interface; brain machine interface; Brain-Computer Interfaces; deep neural networks; domain generalization; Electroencephalography; fine-tuning; Humans; Machine Learning; Neural Networks, Computer; transfer learning |
Abstract | Objective. Most deep neural networks (DNNs) used as brain computer interfaces (BCI) classifiers are rarely viable for more than one person and are relatively shallow compared to the state-of-the-art in the wider machine learning literature. The goal of this work is to frame these as a unified challenge and reconsider how transfer learning is used to overcome these difficulties. Approach. We present two variations of a holistic approach to transfer learning with DNNs for BCI that rely on a deeper network called TIDNet. Our approaches use multiple subjects for training in the interest of creating a more universal classifier that is applicable for new (unseen) subjects. The first approach is purely subject-invariant and the second targets specific subjects, without loss of generality. We use five publicly accessible datasets covering a range of tasks and compare our approaches to state-of-the-art alternatives in detail. Main results. We observe that TIDNet in conjunction with our training augmentations is more consistent when compared to shallower baselines, and in some cases exhibits large and significant improvements, for instance motor imagery classification improvements of over 8%. Furthermore, we show that our suggested multi-domain learning (MDL) strategy strongly outperforms simply fine-tuned general models when targeting specific subjects, while remaining more generalizable to still unseen subjects. Significance. TIDNet in combination with a data alignment-based training augmentation proves to be a consistent classification approach of single raw trials and can be trained even with the inclusion of corrupted trials. Our MDL strategy calls into question the intuition to fine-tune trained classifiers to new subjects, as it proves simpler and more accurate while remaining general. Furthermore, we show evidence that augmented TIDNet training makes better use of additional subjects, showing continued and greater performance improvement over shallower alternatives, indicating promise for a new subject-invariant paradigm rather than a subject-specific one. |
---|---|
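The abstract describes two training ingredients: a per-subject data alignment augmentation applied to raw trials, and training across many subjects so that a new (unseen) subject can be classified without subject-specific calibration. The sketch below illustrates that setup. It is not the authors' implementation: the Euclidean-alignment-style whitening, the array shapes, and the `subjects` dictionary are illustrative assumptions, and TIDNet itself is omitted (any classifier could consume the resulting splits).

```python
# A minimal sketch (not the authors' code) of two ideas from the abstract:
# per-subject alignment of raw EEG trials and a leave-one-subject-out split
# for subject-invariant ("thinker-invariant") training. The alignment variant,
# array shapes, and the `subjects` dict below are illustrative assumptions.
import numpy as np


def _inv_sqrt(mat: np.ndarray) -> np.ndarray:
    """Symmetric inverse square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * (1.0 / np.sqrt(vals))) @ vecs.T


def align_subject(trials: np.ndarray) -> np.ndarray:
    """Whiten one subject's trials by the inverse square root of their mean
    spatial covariance, so channel statistics are comparable across subjects.

    trials: (n_trials, n_channels, n_samples) raw EEG.
    """
    covs = np.stack([x @ x.T / x.shape[-1] for x in trials])
    r_inv_sqrt = _inv_sqrt(covs.mean(axis=0))
    return np.stack([r_inv_sqrt @ x for x in trials])


def leave_one_subject_out(subjects: dict):
    """Yield (train, test) splits in which the test subject is never seen
    during training; each subject is aligned independently."""
    for held_out in subjects:
        train_x = np.concatenate(
            [align_subject(x) for s, (x, _) in subjects.items() if s != held_out])
        train_y = np.concatenate(
            [y for s, (_, y) in subjects.items() if s != held_out])
        test_x, test_y = subjects[held_out]
        yield (train_x, train_y), (align_subject(test_x), test_y)


if __name__ == "__main__":
    # Toy data: 3 subjects, 20 trials each, 22 channels, 512 samples, 4 classes.
    rng = np.random.default_rng(0)
    subjects = {f"S{i}": (rng.standard_normal((20, 22, 512)),
                          rng.integers(0, 4, size=20)) for i in range(3)}
    for (xtr, ytr), (xte, yte) in leave_one_subject_out(subjects):
        print(xtr.shape, ytr.shape, xte.shape, yte.shape)
```

In the targeted multi-domain learning (MDL) variant the abstract contrasts with fine-tuning, the target subject would, roughly speaking, be added as one more training domain alongside the others rather than used to fine-tune a copy of the general model.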
Author | Kostas, Demetres; Rudzicz, Frank
Author_xml | – Kostas, Demetres (ORCID 0000-0001-9516-8145): University of Toronto; Vector Institute for Artificial Intelligence, Toronto, Canada – Rudzicz, Frank: University of Toronto; Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Canada
CODEN | JNEIEZ |
ContentType | Journal Article |
Copyright | 2020 The Author(s). Published by IOP Publishing Ltd |
DOI | 10.1088/1741-2552/abb7a7 |
Discipline | Anatomy & Physiology |
EISSN | 1741-2552 |
Genre | Research Support, Non-U.S. Gov't Journal Article |
GrantInformation | Electronics and Telecommunications Research Institute (funder ID: http://dx.doi.org/10.13039/501100003696); Natural Sciences and Engineering Research Council of Canada (funder ID: http://dx.doi.org/10.13039/501100000038)
ISSN | 1741-2560 (print); 1741-2552 (electronic) |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
Language | English |
License | Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. |
ORCID | 0000-0001-9516-8145 |
OpenAccessLink | https://iopscience.iop.org/article/10.1088/1741-2552/abb7a7 |
PMID | 32916675 |
PageCount | 22 |
PublicationDate | 2020-10-13 |
PublicationPlace | England |
PublicationTitle | Journal of neural engineering |
PublicationTitleAbbrev | JNE |
PublicationTitleAlternate | J. Neural Eng |
PublicationYear | 2020 |
Publisher | IOP Publishing |
StartPage | 56008 |
SubjectTerms | BCI; brain computer interface; brain machine interface; Brain-Computer Interfaces; deep neural networks; domain generalization; Electroencephalography; fine-tuning; Humans; Machine Learning; Neural Networks, Computer; transfer learning |
Title | Thinker invariance: enabling deep neural networks for BCI across more people |
URI | https://iopscience.iop.org/article/10.1088/1741-2552/abb7a7 https://www.ncbi.nlm.nih.gov/pubmed/32916675 https://www.proquest.com/docview/2442211682 |
Volume | 17 |