Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-F2CA model
Published in | PloS one Vol. 20; no. 5; p. e0322583 |
Main Authors | Xiong, Fan; Fan, Mengzhao; Yang, Xu; Wang, Chenxiao; Zhou, Jinli |
Format | Journal Article |
Language | English |
Published | San Francisco: Public Library of Science, 27.05.2025 |
Abstract | Emotion recognition plays a significant role in artificial intelligence and human-computer interaction. Electroencephalography (EEG) signals, because they directly reflect brain activity, have become an essential tool in emotion recognition research. However, the low dimensionality of sparse EEG channel data presents a key challenge in extracting effective features. This paper proposes a sparse-channel EEG-based emotion recognition method using the CNN-KAN-F2CA network to address the challenges of limited feature extraction and cross-subject variability in emotion recognition. Through a feature mapping strategy, the method maps features such as Differential Entropy (DE), Power Spectral Density (PSD), and Emotion Valence Index (EVI)-Asymmetry Index (ASI) to pseudo-RGB images, effectively integrating both frequency-domain and spatial information from sparse channels and providing multi-dimensional input for CNN feature extraction. By combining the KAN module with a fast-Fourier-transform-based F2CA attention mechanism, the model can effectively fuse frequency-domain and spatial features for accurate classification of complex emotional signals. Experimental results show that the CNN-KAN-F2CA model performs comparably to multi-channel models while using only four EEG channels. Through training on short-time segments, the model effectively reduces the impact of individual differences, significantly improving generalization in cross-subject emotion recognition tasks. Extensive experiments on the SEED and DEAP datasets demonstrate the proposed method's superior performance in emotion classification. In the merged-dataset experiments, accuracy reached 97.985% on the SEED three-class task and 91.718% on the DEAP four-class task. In the subject-dependent experiments, average accuracy was 97.45% on the SEED three-class task and 89.16% on the DEAP four-class task. |
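The feature-mapping step described in the abstract (DE, PSD, and asymmetry features arranged as a pseudo-RGB image) can be illustrated with a small sketch. Everything below is an illustration under stated assumptions, not the paper's published code: the sampling rate, band edges, left-right channel pairing for the asymmetry index, and the omission of the EVI term are all choices made here for brevity.

```python
# Minimal sketch: band-wise DE/PSD features from four EEG channels,
# stacked into a 3-channel "pseudo-RGB" map for a CNN.
# Assumptions (not from the paper): fs = 200 Hz, four bands, and a
# simple left-right DE difference standing in for the EVI-ASI channel.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 200  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_de(x, fs, lo, hi):
    """Differential entropy of a band-limited signal; for a Gaussian
    source, DE = 0.5 * ln(2 * pi * e * variance)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xb = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xb))

def band_psd(x, fs, lo, hi):
    """Mean Welch PSD inside the band."""
    f, p = welch(x, fs=fs, nperseg=min(len(x), fs))
    return p[(f >= lo) & (f < hi)].mean()

def pseudo_rgb(segment):
    """segment: (4, T) array, one short window from four channels.
    Returns a (3, 4, n_bands) map: R = DE, G = PSD, B = asymmetry.
    The (0,1) and (2,3) left-right pairings are hypothetical."""
    de = np.array([[band_de(c, FS, lo, hi) for lo, hi in BANDS.values()]
                   for c in segment])
    psd = np.array([[band_psd(c, FS, lo, hi) for lo, hi in BANDS.values()]
                    for c in segment])
    asi = np.zeros_like(de)
    for l, r in [(0, 1), (2, 3)]:          # hypothetical channel pairs
        asi[l], asi[r] = de[l] - de[r], de[r] - de[l]
    return np.stack([de, psd, asi])        # (3, 4, n_bands)
```

Under these assumptions a one-second window is a (4, 200) array and yields a (3, 4, 4) map that a small CNN can consume directly. Likewise, the F2CA module is only named in the abstract, so the block below is a generic FFT-weighted channel-attention layer in the same spirit, not the authors' architecture; the class name and reduction ratio are invented for illustration.

```python
# Hedged sketch of an FFT-based channel attention block (PyTorch).
# This is NOT the paper's F2CA module, only a generic stand-in that
# weights CNN feature channels by their spectral energy.
import torch
import torch.nn as nn

class FFTChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (B, C, H, W) feature map
        spec = torch.fft.fft2(x, norm="ortho")    # 2-D FFT per channel
        energy = spec.abs().mean(dim=(-2, -1))    # (B, C) spectral energy
        w = self.fc(energy).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # reweighted features
```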
Audience | Academic |
Author | Xiong, Fan; Fan, Mengzhao; Yang, Xu; Wang, Chenxiao; Zhou, Jinli |
AuthorAffiliation | 1 Zhongyuan University of Technology, Zhengzhou, China; Nanyang Technological University, Singapore; 2 Shengda Economics Trade and Management College of Zhengzhou, Zhengzhou, China |
Copyright | COPYRIGHT 2025 Public Library of Science. 2025 Xiong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1371/journal.pone.0322583 |
Discipline | Sciences (General) |
DocumentTitleAlternate | Sparse EEG emotion recognition via CNN-KAN-F2CA model |
EISSN | 1932-6203 |
GeographicLocations | China |
ISSN | 1932-6203 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
License | This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Creative Commons Attribution License |
Notes | Competing Interests: The authors have declared that no competing interests exist. |
ORCID | 0000-0003-3802-7068 |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.1371/journal.pone.0322583 |
PublicationDate | 2025-05-27 |
PublicationPlace | San Francisco |
PublicationTitle | PloS one |
PublicationYear | 2025 |
Publisher | Public Library of Science |
StartPage | e0322583 |
SubjectTerms | Accuracy; Artificial intelligence; Artificial neural networks; Biology and Life Sciences; Brain research; Channels; Classification; Color imagery; Computer and Information Sciences; Datasets; Deep learning; EEG; Electroencephalography; Emotion recognition; Emotions; Engineering and Technology; Fast Fourier transformations; Feature extraction; Fourier transforms; Frequency dependence; Frequency domain analysis; Machine learning; Medicine and Health Sciences; Methods; Neural networks; Physical Sciences; Physiology; Power spectral density; Research and Analysis Methods; Social Sciences; Spatial data; Support vector machines; Wavelet transforms |
URI | https://www.proquest.com/docview/3212656148 https://pubmed.ncbi.nlm.nih.gov/PMC12111688 http://dx.doi.org/10.1371/journal.pone.0322583 |
Volume | 20 |