Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion
Published in: Sensors (Basel, Switzerland), Vol. 22, No. 15, p. 5611
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 27 July 2022
Abstract: Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways: through text, speech, and body gestures, or physiologically through bodily responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, and human-robot interaction. The availability of advanced technologies such as mobile devices, sensors, and data-analytics tools makes it possible to collect data from various sources, enabling researchers to predict human emotions accurately; most current research, however, collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. The research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; and (3) assessing ensemble learning methods for building a generic subject-independent emotion-recognition model with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study "in the wild" with physiological and mobile sensors, collecting the data set from participants walking around Minia University campus. Various ensemble learning models (bagging, boosting, and stacking) were evaluated, combining K-nearest neighbors (KNN), decision tree (DT), random forest (RF), and support vector machine (SVM) as base learners, with DT as the meta-classifier. The results showed that the stacking ensemble gave the best accuracy, 98.2%, compared with the other ensemble variants; bagging and boosting achieved 96.4% and 96.6%, respectively.
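The stacking ensemble the abstract describes can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data (the study's fused sensor dataset is not reproduced here), not the authors' actual pipeline: the four named base learners feed a decision-tree meta-classifier.

```python
# Sketch of the stacking ensemble named in the abstract, assuming
# scikit-learn; the feature matrix is synthetic stand-in data for the
# fused environmental + physiological variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical 3-class emotion labels over 10 fused sensor features.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners named in the abstract: KNN, DT, RF, SVM.
base_learners = [
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# Decision tree as the meta-classifier, as in the paper.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=DecisionTreeClassifier(random_state=0))
stack.fit(X_train, y_train)
acc = accuracy_score(y_test, stack.predict(X_test))
print(f"stacking accuracy on synthetic data: {acc:.3f}")
```

Bagging and boosting variants can be built the same way with `BaggingClassifier` and `AdaBoostClassifier` from `sklearn.ensemble`; the accuracy printed here is for the synthetic data only and is unrelated to the 98.2% reported in the article.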
Authors: Eman M. G. Younis, Someya Mohsen Zaki, Eiman Kanjo, Essam H. Houssein
Author affiliations:
1. Faculty of Computers and Information, Minia University, Minia 61519, Egypt; essam.halim@mu.edu.eg
2. Faculty of Computers and Information, Minia University; Al-Obour High Institute for Management, Computers and Information Systems, Obour, Cairo 999060, Egypt; someyam@oi.edu.eg
3. Computing and Technology, Nottingham Trent University (NTU), Nottingham NG1 4FQ, UK; eiman.kanjo@ntu.ac.uk
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI: 10.3390/s22155611
EISSN: 1424-8220
ISSN: 1424-8220
ORCID: Eman M. G. Younis 0000-0003-2778-4231; Someya Mohsen Zaki 0000-0003-0439-2999; Essam H. Houssein 0000-0002-8127-7233
PMID: 35957167
Subjects: Body temperature; Data collection; Decision making; Emotion recognition; Emotions; Ensemble learning; Heart rate; Human-computer interaction; Hypotheses; Multi-modal emotion recognition; Physiological and environmental; Physiology; Sensors; Skin; Smartphones; Subject-independent predictive models for emotion; Wearable computers
SummonAdditionalLinks | – databaseName: DOAJ dbid: DOA link: http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwrV1La9wwEBYhp-RQ2iSl2zxQQg69iJWt9zFtdgmBzSEP2JvRy2kh9YZ49_93JHuXNRRyyc1Yg5BHM54ZzegbhC7B5tNQ2oIwERThMmiieeGJKJgtnNbS5uz57E7ePPHbuZhvtfpKNWEdPHDHuLEvo_A-gtKmnJPhTnMRSuGYDMyw7lYv2Lx1MNWHWgwirw5HiEFQP25LsGzgKRQD65NB-gee5bAucsvQTD-jT72HiK-6lX1BO7E5QPtbuIGHyE56jO7mGU-aNv51LxH3SKnPeJabQrcY3FGc79eS2SLAjJOuYw--X9cMwXOuGMAPEMsC8bVdWjxdpfOzI_Q0nTz-uiF9rwTi4RexJK6mgdJglaXUMaMdrZ0zoJ1G2dpKEVUZAuNSRhUdLeraR8ltrU30mrLI2Fe02yya-A3hInCv6rJOBSXcU-EKGY2wJiVtpZJ2hH6seVj5Hkg89bN4qSCgSOyuNuweoYsN6WuHnvE_op9pIzYECfA6vwAxqHoxqN4TgxE6WW9j1WthW6WkupLcaDVC55th0J-UFLFNXKwyTYpyTZpCDbZ_sKDhSPPnd0biTmCCIGvfP-ILjtFema5WUEVKdYJ2l2-reAoOz9KdZdn-B4N3_1s priority: 102 providerName: Directory of Open Access Journals – databaseName: Scholars Portal Journals: Open Access dbid: M48 link: http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwfV1Lb9QwELaq9gIHRHmIhYIM4sDF4PjtA6p47KpCWg7ASr1FfmWLtGRhH1L594y9SdRIFbcoGTnRjMczX8b-BqHXEPNpZK4iXEZNhIqGGFEFIivuKm-McqV6Pv-qLhbiy6W8PEJ9j81OgdtboV3uJ7XYrN5e__l7Dg7_PiNOgOzvtgziFuQBAIJOICDp7J9zMRQTGAcYdiAVGouPQlFh7B-lmeNNkjeizuw-uteli_jDwb6n6Ci1D9DdGySCD5GbdoTd7RJP22365VcJd7SpSzwvHaK3GHJTXA7bkvk6wojTQ_se_K3fQATXZfsA_g7AFoQ_u53Ds33-mfYILWbTH58uSNc4gQRYL3bENzRSGp12lHpujaeN9xZc1WrXOCWTZjFyoVTSydOqaUJSwjXGpmAoT5w_Rsftuk1PEK6iCLphTd5dIgKVvlLJSmdzBVdp5SboTa_DOnSs4rm5xaoGdJHVXQ_qnqBXg-jvA5XGbUIfsyEGgcx-XW6sN8u6c6Y6sCRDSLCQ5zqkFd4IGZn0XEVueR7krDdj3c-oOlfYtRLW6Al6OTwGZ8oVEtem9b7IZMhr8xB6ZP7RB42ftD-vCi13ZhaEufb0_y9_hu6wfIKCasL0GTrebfbpOeQ1O_-izNp_tID5Sg priority: 102 providerName: Scholars Portal |
Title | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
URI | https://www.proquest.com/docview/2700764987 https://www.proquest.com/docview/2702184911 https://pubmed.ncbi.nlm.nih.gov/PMC9371233 https://doaj.org/article/c2e5cce876064094b845d25b36d39311 |
Volume | 22 |
linkProvider | ProQuest |
openUrl | Journal article: "Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion". Sensors (Basel, Switzerland), Vol. 22, No. 15, p. 5611. Authors: Younis, Eman M G; Someya, Mohsen Zaki; Kanjo, Eiman; Houssein, Essam H. Published 2022-07-27 by MDPI AG. eISSN 1424-8220. DOI: 10.3390/s22155611 |