FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network
Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), skin te...
Published in | Sensors (Basel, Switzerland) Vol. 20; no. 18; p. 5328 |
Main Authors | Tan, Clarence; Ceballos, Gerardo; Kasabov, Nikola; Puthanmadam Subramaniyam, Narayan |
Format | Journal Article |
Language | English |
Published | Switzerland: MDPI, 17.09.2020 |
Subjects | Evolving Spiking Neural Networks (eSNNs); spatio-temporal data; facial emotion recognition; multimodal data; NeuCube |
Online Access | Get full text |
ISSN | 1424-8220 |
DOI | 10.3390/s20185328 |
Abstract | Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that an SNN can be successfully used for solving the emotion recognition problem with multimodal data, and we also provide directions for future research utilizing SNNs for affective computing.
In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way. It requires only one-pass training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem. |
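The evaluation protocol described in the abstract (feature-level fusion of per-modality features, then Leave-One-Subject-Out cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data; a logistic-regression classifier stands in for the NeuCube SNN, and all feature dimensions, subject counts, and labels here are hypothetical, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_trials, n_subjects = 120, 10

# Synthetic per-modality feature vectors (e.g., ECG, skin conductance, pupil size).
ecg_feats = rng.normal(size=(n_trials, 8))
gsr_feats = rng.normal(size=(n_trials, 4))
pupil_feats = rng.normal(size=(n_trials, 2))

# Feature-level fusion: concatenate modality features into one vector per trial.
X = np.hstack([ecg_feats, gsr_feats, pupil_feats])
y = rng.integers(0, 2, size=n_trials)  # binary valence labels
subjects = np.repeat(np.arange(n_subjects), n_trials // n_subjects)

# LOSO cross-validation: each fold holds out all trials from one subject.
logo = LeaveOneGroupOut()
accs = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO folds: {len(accs)}, mean accuracy: {np.mean(accs):.2f}")
```

With random labels the mean accuracy hovers near chance; the point is the protocol shape, not the number: one fold per subject, so no subject's trials ever appear in both training and test sets.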
Author | Tan, Clarence; Ceballos, Gerardo; Kasabov, Nikola; Puthanmadam Subramaniyam, Narayan |
AuthorAffiliation | 1 Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand; nkasabov@aut.ac.nz 2 School of Electrical Engineering, University of Los Andes, Merida 5101, Venezuela; gerardoacv@gmail.com 3 Faculty of Medicine and Health Technology and BioMediTech Institute, Tampere University, 33520 Tampere, Finland; narayan.subramaniyam@tuni.fi 4 Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 02150 Espoo, Finland |
ContentType | Journal Article |
Copyright | 2020 by the authors. 2020 |
DOI | 10.3390/s20185328 |
DatabaseName | CrossRef Medline MEDLINE MEDLINE (Ovid) MEDLINE MEDLINE PubMed MEDLINE - Academic PubMed Central (Full Participant titles) DOAJ Directory of Open Access Journals |
Discipline | Engineering |
EISSN | 1424-8220 |
Genre | Journal Article |
ISSN | 1424-8220 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 18 |
Keywords | Evolving Spiking Neural Networks (eSNNs) Spatio-temporal data facial emotion recognition multimodal data NeuCube |
Language | English |
License | https://creativecommons.org/licenses/by/4.0 Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
ORCID | 0000-0003-1276-9522 |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.3390/s20185328 |
PMID | 32957655 |
PublicationPlace | Switzerland |
PublicationTitle | Sensors (Basel, Switzerland) |
PublicationTitleAlternate | Sensors (Basel) |
PublicationYear | 2020 |
Publisher | MDPI MDPI AG |
doi: 10.1016/j.cviu.2015.09.015 – ident: ref_119 doi: 10.1109/SSCI.2017.8285365 – ident: ref_15 – ident: ref_133 doi: 10.1109/ICCV.2015.421 – volume: 63 start-page: 104 year: 2015 ident: ref_8 article-title: Towards an intelligent framework for multimodal affective data analysis publication-title: Neural Netw. doi: 10.1016/j.neunet.2014.10.005 – ident: ref_60 – volume: 23 start-page: 1175 year: 2001 ident: ref_34 article-title: Toward machine emotional intelligence: Analysis of affective physiological state publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/34.954607 – volume: 93 start-page: 1011 year: 2008 ident: ref_62 article-title: Breathing rhythms and emotions publication-title: Exp. Physiol. doi: 10.1113/expphysiol.2008.042424 – ident: ref_121 doi: 10.3390/fi11050105 – ident: ref_16 doi: 10.1007/978-3-319-43665-4_19 – volume: 32 start-page: 88 year: 1969 ident: ref_52 article-title: Nonverbal leakage and clues to deception publication-title: Psychiatry doi: 10.1080/00332747.1969.11023575 – ident: ref_13 doi: 10.1145/2522848.2531745 – ident: ref_102 doi: 10.1007/978-94-010-0674-3 |
StartPage | 5328 |
SubjectTerms | Brain - diagnostic imaging; Deep Learning; Electroencephalography; Emotions; Evolving Spiking Neural Networks (eSNNs); facial emotion recognition; Humans; multimodal data; NeuCube; Neural Networks, Computer; Spatio-temporal data |
Title | FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network |
URI | https://www.ncbi.nlm.nih.gov/pubmed/32957655 https://www.proquest.com/docview/2444875004 https://pubmed.ncbi.nlm.nih.gov/PMC7571195 https://doaj.org/article/5ac6f8aa49f34062b7fc18763d172625 |
Volume | 20 |
Main Authors | Tan, Clarence; Ceballos, Gerardo; Kasabov, Nikola; Puthanmadam Subramaniyam, Narayan |