DDNet: a hybrid network based on deep adaptive multi-head attention and dynamic graph convolution for EEG emotion recognition
Published in | Signal, image and video processing Vol. 19; no. 4 |
---|---|
Main Authors | Bingyue Xu, Xin Zhang, Xiu Zhang, Baiwei Sun, Yujie Wang |
Format | Journal Article |
Language | English |
Published | London: Springer London; Springer Nature B.V., 01.04.2025 |
Subjects | Emotion recognition; EEG; Deep learning; Graph neural network; Multi-head attention |
ISSN | 1863-1703 1863-1711 |
DOI | 10.1007/s11760-025-03876-4 |
Abstract | Emotion recognition plays a crucial role in cognitive science and human-computer interaction. Existing techniques tend to ignore the significant differences between subjects, resulting in limited accuracy and generalization ability. In addition, existing methods have difficulty capturing the complex relationships among the channels of electroencephalography (EEG) signals. A hybrid network is proposed to overcome these limitations. The proposed network comprises a deep adaptive multi-head attention (DAM) branch and a dynamic graph convolution (DGC) branch. The DAM branch uses residual convolution and an adaptive multi-head attention mechanism, allowing it to focus on multi-dimensional information from different representational subspaces at different locations. The DGC branch uses a dynamic graph convolutional neural network that learns topological features among the channels. The synergistic effect of the two branches enhances the model’s adaptability to subject differences; the extraction of local features and the understanding of global patterns are also optimized. Subject-independent experiments were conducted on the SEED and SEED-IV datasets. On SEED, the average accuracy was 92.63% and the average F1-score was 92.43%; on SEED-IV, the average accuracy was 85.03% and the average F1-score was 85.01%. The results show that the proposed network has significant advantages in cross-subject emotion recognition and can improve accuracy and generalization in emotion recognition tasks. |
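The two-branch design described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the head count, feature dimension, weight initialization, similarity-based adjacency, and mean-pool fusion below are all illustrative assumptions; only the overall shape (a multi-head self-attention branch over EEG channels plus a graph-convolution branch with a data-dependent adjacency, fused for classification) follows the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads):
    """Self-attention over EEG channels. X: (channels, d_model)."""
    C, d = X.shape
    assert d % num_heads == 0
    d_k = d // num_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_k, (h + 1) * d_k)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_k)   # (C, C) channel-to-channel
        heads.append(softmax(scores) @ V[:, s])
    return np.concatenate(heads, axis=-1)             # (C, d)

def graph_conv(X, A, W):
    """One GCN-style layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy setup: 62 EEG channels (as in SEED), 16-dim channel features, 3 classes.
C, d, n_classes = 62, 16, 3
X = rng.standard_normal((C, d))

# DAM-like branch: adaptive multi-head attention over channels.
dam_out = multi_head_attention(X, num_heads=4)

# DGC-like branch: a "dynamic" adjacency; feature similarity is a stand-in
# for the learned adjacency the paper describes.
A = softmax(X @ X.T)
W_g = rng.standard_normal((d, d)) * 0.1
dgc_out = graph_conv(X, A, W_g)

# Fuse both branches, mean-pool over channels, linear classification head.
fused = np.concatenate([dam_out, dgc_out], axis=-1).mean(axis=0)  # (2d,)
W_c = rng.standard_normal((2 * d, n_classes)) * 0.1
probs = softmax(fused @ W_c)                                      # (3,)
```

In a trained model the weights and adjacency would be learned jointly so that the attention branch captures subspace-specific channel interactions while the graph branch captures channel topology, which is the complementarity the abstract attributes to the two branches.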
---|---|
ArticleNumber | 293 |
Author | Zhang, Xin Sun, Baiwei Zhang, Xiu Wang, Yujie Xu, Bingyue |
Author_xml | – Bingyue Xu, Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University – Xin Zhang (ecemark@tjnu.edu.cn), College of Artificial Intelligence, Tianjin Normal University – Xiu Zhang, College of Electronic and Communication Engineering, Tianjin Normal University – Baiwei Sun, Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University – Yujie Wang, Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University |
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025 Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. Copyright Springer Nature B.V. 2025 |
DOI | 10.1007/s11760-025-03876-4 |
Discipline | Engineering Computer Science |
EISSN | 1863-1711 |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 62371341; 62371341 funderid: http://dx.doi.org/10.13039/501100001809 |
ISSN | 1863-1703 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Keywords | Deep learning Emotion recognition EEG Graph neural network Multi-head attention |
Language | English |
LinkModel | DirectLink |
PublicationDate | 2025-04-01 |
PublicationPlace | London |
PublicationTitle | Signal, image and video processing |
PublicationTitleAbbrev | SIViP |
PublicationYear | 2025 |
Publisher | Springer London Springer Nature B.V |
SubjectTerms | Accuracy Artificial neural networks Attention Channels Computer Imaging Computer Science Emotion recognition Emotions Graph neural networks Image Processing and Computer Vision Multidimensional methods Multimedia Information Systems Original Paper Pattern Recognition and Graphics Signal,Image and Speech Processing Subspaces Synergistic effect Vision |
Title | DDNet: a hybrid network based on deep adaptive multi-head attention and dynamic graph convolution for EEG emotion recognition |
URI | https://link.springer.com/article/10.1007/s11760-025-03876-4 https://www.proquest.com/docview/3167110403 |
Volume | 19 |