MindLink-Eumpy: An Open-Source Python Toolbox for Multimodal Emotion Recognition
Published in | Frontiers in Human Neuroscience Vol. 15; p. 621493 |
---|---|
Main Authors | Li, Ruixin; Liang, Yan; Liu, Xiaojian; Wang, Bingbing; Huang, Wenxin; Cai, Zhaoxin; Ye, Yaoguang; Qiu, Lina; Pan, Jiahui |
Format | Journal Article |
Language | English |
Published | Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 19.02.2021 |
Abstract | Emotion recognition plays an important role in intelligent human–computer interaction, but the related research still faces the problems of low accuracy and subject dependence. In this paper, an open-source software toolbox called MindLink-Eumpy is developed to recognize emotions by integrating electroencephalogram (EEG) and facial expression information. MindLink-Eumpy first applies a series of tools to automatically obtain physiological data from subjects, then analyzes the obtained facial expression data and EEG data separately, and finally fuses the two different signals at the decision level. For facial expression detection, MindLink-Eumpy uses a multitask convolutional neural network (CNN) based on a transfer learning technique. For EEG detection, MindLink-Eumpy provides two algorithms: a subject-dependent model based on a support vector machine (SVM) and a subject-independent model based on a long short-term memory (LSTM) network. In the decision-level fusion, a weight enumerator and the AdaBoost technique are applied to combine the predictions of the SVM and the CNN. We conducted two offline experiments, on the Database for Emotion Analysis Using Physiological Signals (DEAP) and the Multimodal Database for Affect Recognition and Implicit Tagging (MAHNOB-HCI), and an online experiment on 15 healthy subjects. The results show that multimodal methods outperform single-modal methods in both the offline and the online experiments. In the subject-dependent condition, the multimodal method achieved accuracies of 71.00% in the valence dimension and 72.14% in the arousal dimension. In the subject-independent condition, the LSTM-based method achieved accuracies of 78.56% in the valence dimension and 77.22% in the arousal dimension. The feasibility and efficiency of MindLink-Eumpy for emotion recognition are thus demonstrated. |
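The decision-level fusion the abstract describes, in which a weight enumerator combines the per-trial probabilities of the EEG (SVM) and facial-expression (CNN) models, can be illustrated with a short sketch. The Python example below is a minimal, hypothetical rendering of that weight-enumeration step; the function and variable names are illustrative assumptions, not MindLink-Eumpy's actual API, and the AdaBoost alternative is not shown.

```python
# Minimal sketch of decision-level fusion by weight enumeration (hypothetical,
# not MindLink-Eumpy's API): scan candidate weights w in [0, 1], fuse the two
# modality probabilities as w * p_eeg + (1 - w) * p_face, and keep the weight
# that maximizes accuracy on a validation set.
import numpy as np

def enumerate_fusion_weight(p_eeg, p_face, labels, step=0.01):
    """p_eeg, p_face: (n,) positive-class probabilities from the two models;
    labels: (n,) binary ground truth. Returns (best_weight, best_accuracy)."""
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + step, step):
        fused = w * p_eeg + (1.0 - w) * p_face    # weighted sum of modalities
        preds = (fused >= 0.5).astype(int)        # threshold into binary labels
        acc = float(np.mean(preds == labels))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Demonstration with random stand-in predictions:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
p_eeg = np.clip(labels + rng.normal(0.0, 0.6, 200), 0.0, 1.0)
p_face = np.clip(labels + rng.normal(0.0, 0.5, 200), 0.0, 1.0)
w, acc = enumerate_fusion_weight(p_eeg, p_face, labels)
print(f"best EEG weight: {w:.2f}, validation accuracy: {acc:.3f}")
```

The enumerated weight would then be fixed and reused at test time, so the search only ever sees validation data; this is what makes the scheme a decision-level (late) fusion rather than a feature-level one.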
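As a companion to the fusion sketch above, here is an equally hypothetical sketch of the subject-independent EEG pathway: an LSTM trained on trials pooled across subjects, producing the per-trial probabilities that feed the fusion step. Window length, feature size, and hyperparameters are assumptions; only the overall shape (an LSTM over windowed EEG followed by a sigmoid output for one valence or arousal label) follows the abstract.

```python
# Hypothetical subject-independent EEG classifier: an LSTM over windowed EEG
# features with a sigmoid head for one binary dimension (valence or arousal).
# Shapes and hyperparameters are assumed, not MindLink-Eumpy's configuration.
import numpy as np
import tensorflow as tf

N_TIMESTEPS, N_FEATURES = 128, 32  # assumed EEG window length and per-step features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),                       # summarizes the whole window
    tf.keras.layers.Dense(1, activation="sigmoid"), # P(high valence) per trial
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in for trials pooled across training subjects; a held-out
# subject's trials would be used only for evaluation.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, N_TIMESTEPS, N_FEATURES)).astype("float32")
y_train = rng.integers(0, 2, size=300)
model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
p_eeg = model.predict(X_train[:5], verbose=0).ravel()  # probabilities for fusion
```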
Author | Li, Ruixin; Liang, Yan; Liu, Xiaojian; Wang, Bingbing; Huang, Wenxin; Cai, Zhaoxin; Ye, Yaoguang; Qiu, Lina; Pan, Jiahui |
AuthorAffiliation | 1 School of Software, South China Normal University, Guangzhou, China; 2 Pazhou Lab, Guangzhou, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33679348 (View this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | Copyright © 2021 Li, Liang, Liu, Wang, Huang, Cai, Ye, Qiu and Pan. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.3389/fnhum.2021.621493 |
Discipline | Anatomy & Physiology |
EISSN | 1662-5161 |
ExternalDocumentID | oai:doaj.org/article:fe35636476924192ae5ac7b296a63729; PMC7933462; PMID 33679348; DOI 10.3389/fnhum.2021.621493 |
Genre | Journal Article |
GrantInformation | Natural Science Foundation of Guangdong Province; National Natural Science Foundation of China |
ISSN | 1662-5161 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | subject-independent method; multitask convolutional neural network (CNN); support vector machine (SVM); long short-term memory network (LSTM); multimodal emotion recognition |
Language | English |
License | Copyright © 2021 Li, Liang, Liu, Wang, Huang, Cai, Ye, Qiu and Pan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
Notes | This article was submitted to Brain-Computer Interfaces, a section of the journal Frontiers in Human Neuroscience. These authors have contributed equally to this work. Edited by: Anton Nijholt, University of Twente, Netherlands. Reviewed by: Pietro Aricò, Sapienza University of Rome, Italy; Yisi Liu, Fraunhofer Singapore, Singapore. |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.3389/fnhum.2021.621493 |
PMID | 33679348 |
PublicationDate | 2021-02-19 |
PublicationPlace | Switzerland |
PublicationTitle | Frontiers in Human Neuroscience |
PublicationTitleAlternate | Front Hum Neurosci |
PublicationYear | 2021 |
Publisher | Frontiers Research Foundation; Frontiers Media S.A. |
StartPage | 621493 |
SubjectTerms | Accuracy; Algorithms; Arousal; Artificial intelligence; Brain research; Datasets; EEG; Electroencephalography; Emotions; Experiments; Long short-term memory; long short-term memory network (LSTM); Methods; multimodal emotion recognition; multitask convolutional neural network (CNN); Nervous system; Neural networks; Neuroscience; Physiology; Software; subject-independent method; support vector machine (SVM); Support vector machines; Transfer learning |
Title | MindLink-Eumpy: An Open-Source Python Toolbox for Multimodal Emotion Recognition |
URI | https://www.ncbi.nlm.nih.gov/pubmed/33679348 https://www.proquest.com/docview/2634869511 https://www.proquest.com/docview/2499010167 https://pubmed.ncbi.nlm.nih.gov/PMC7933462 https://doaj.org/article/fe35636476924192ae5ac7b296a63729 |
Volume | 15 |