Efficient Multi-View Graph Convolutional Network with Self-Attention for Multi-Class Motor Imagery Decoding
| Published in | Bioengineering (Basel), Vol. 11, No. 9, p. 926 |
|---|---|
| Main Authors | Tan, Xiyue; Wang, Dan; Xu, Meng; Chen, Jiaming; Wu, Shuhan |
| Format | Journal Article |
| Language | English |
| Published | Switzerland: MDPI AG, 01.09.2024 |
| ISSN | 2306-5354 |
| EISSN | 2306-5354 |
| DOI | 10.3390/bioengineering11090926 |
Abstract | Research on electroencephalogram-based motor imagery (MI-EEG) aims to identify the limbs whose movement a subject imagines by decoding EEG signals, an important problem in the field of brain–computer interfaces (BCI). Existing deep-learning-based classification methods have not fully exploited the topological information among brain regions, so classification performance needs further improvement. In this paper, we propose a multi-view graph convolutional attention network (MGCANet) with a residual learning structure for multi-class MI decoding. Specifically, we design a multi-view graph convolution spatial feature extraction method based on the topological relationships of brain regions to achieve more comprehensive information aggregation. We also build an adaptive weight fusion (Awf) module that adaptively merges features from different brain views to improve classification accuracy. In addition, a self-attention mechanism is introduced for feature selection, expanding the receptive field over the EEG signals to capture global dependencies and enhance the expression of important features. The proposed model is experimentally evaluated on two public MI datasets and achieves mean accuracies of 78.26% (BCIC IV 2a dataset) and 73.68% (OpenBMI dataset), significantly outperforming representative comparative methods in classification accuracy. Comprehensive experimental results verify the effectiveness of the proposed method, which can provide novel perspectives for MI decoding. |
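The adaptive weight fusion (Awf) step described in the abstract can be read as a softmax-weighted sum of per-view feature vectors. The sketch below is a minimal illustration of that reading, not the authors' implementation; the function names and the choice of one learnable scalar per view are assumptions.

```python
import math

def softmax(weights):
    """Numerically stable softmax over a list of scalar weights."""
    m = max(weights)
    exps = [math.exp(w - m) for w in weights]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_weight_fusion(view_features, view_weights):
    """Fuse feature vectors from several brain views into one vector,
    scaling each view by its softmax-normalized (learnable) weight."""
    alphas = softmax(view_weights)
    dim = len(view_features[0])
    fused = [0.0] * dim
    for alpha, feat in zip(alphas, view_features):
        for i, value in enumerate(feat):
            fused[i] += alpha * value
    return fused
```

With equal weights the module reduces to a plain average of the views; training would shift the weights toward the more informative views.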
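The graph convolution over brain-region topology that the abstract builds on amounts to propagating electrode features through a normalized adjacency matrix. The sketch below shows one propagation step, D^{-1/2}(A + I)D^{-1/2} · X, under the common GCN normalization; it is an illustrative reading of the technique, not the paper's code, and the adjacency would come from a chosen brain-region view.

```python
import math

def normalize_adjacency(adj):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    n = len(adj)
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    return [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

def graph_conv_step(adj, feats):
    """Aggregate each node's features with its neighbors': A_hat @ X."""
    a_hat = normalize_adjacency(adj)
    n, d = len(feats), len(feats[0])
    return [[sum(a_hat[i][k] * feats[k][j] for k in range(n))
             for j in range(d)]
            for i in range(n)]
```

Isolated nodes keep their own features (the self-loop in A + I), while connected nodes blend with their neighbors, which is how topological information among brain regions enters the spatial features.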
Audience | Academic |
Author | Tan, Xiyue; Wang, Dan; Chen, Jiaming; Wu, Shuhan; Xu, Meng |
AuthorAffiliation | College of Computer Science, Beijing University of Technology, Beijing 100124, China; tanxy@emails.bjut.edu.cn (X.T.) |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/39329668 (View this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | COPYRIGHT 2024 MDPI AG 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. 2024 by the authors. 2024 |
DOI | 10.3390/bioengineering11090926 |
Discipline | Engineering |
EISSN | 2306-5354 |
ExternalDocumentID | oai_doaj_org_article_542989f2ce42481b84202d6b1f785d76 PMC11428916 A810992345 39329668 10_3390_bioengineering11090926 |
Genre | Journal Article |
GrantInformation | – National Natural Science Foundation of China, grant 12275295 – Postdoctoral Fellowship Program of the China Postdoctoral Science Foundation, grant GZC20230189 – Project of Construction and Support for High-Level Teaching Teams of Beijing Municipal Institutions |
ISSN | 2306-5354 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 9 |
Keywords | deep learning brain–computer interface motor imagery self-attention graph convolutional networks |
Language | English |
License | https://creativecommons.org/licenses/by/4.0 Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
ORCID | 0000-0001-5594-4410 |
OpenAccessLink | https://www.proquest.com/docview/3110366607 |
PMID | 39329668 |
PQID | 3110366607 |
PQPubID | 2055440 |
PublicationCentury | 2000 |
PublicationDate | 2024-09-01 |
PublicationDecade | 2020 |
PublicationPlace | Switzerland |
PublicationPlace_xml | – name: Switzerland – name: Basel |
PublicationTitle | Bioengineering (Basel) |
PublicationTitleAlternate | Bioengineering (Basel) |
PublicationYear | 2024 |
Publisher | MDPI AG MDPI |
SourceID | doaj pubmedcentral proquest gale pubmed crossref |
SourceType | Open Website Open Access Repository Aggregation Database Index Database |
StartPage | 926 |
SubjectTerms | Accuracy Analysis Artificial neural networks Attention Biochips Brain Brain research brain–computer interface Classification Computer applications Datasets Deep learning EEG Electrodes Electroencephalography Euclidean space Feature extraction Feature selection graph convolutional networks Graph representations Human-computer interface Imagery Implants Machine learning Medical research Mental task performance motor imagery Neural networks Receptive field self-attention Signal classification Topology |
Title | Efficient Multi-View Graph Convolutional Network with Self-Attention for Multi-Class Motor Imagery Decoding |
URI | https://www.ncbi.nlm.nih.gov/pubmed/39329668 https://www.proquest.com/docview/3110366607 https://www.proquest.com/docview/3110407094 https://pubmed.ncbi.nlm.nih.gov/PMC11428916 https://doaj.org/article/542989f2ce42481b84202d6b1f785d76 |
Volume | 11 |