Utilizing Deep Learning Towards Multi-Modal Bio-Sensing and Vision-Based Affective Computing
| Published in | IEEE Transactions on Affective Computing, Vol. 13, No. 1, pp. 96-107 |
|---|---|
| Main Authors | Siddharth; Jung, Tzyy-Ping; Sejnowski, Terrence J. |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), January-March 2022 |
Abstract | In recent years, the use of bio-sensing signals such as the electroencephalogram (EEG) and electrocardiogram (ECG) has garnered interest for applications in affective computing. The parallel trend of deep learning has led to a huge leap in performance on various vision-based research problems such as object detection. Yet, these advances in deep learning have not adequately translated into bio-sensing research. This work applies novel deep-learning-based methods to the bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first evaluate the emotion-classification performance obtained by each modality individually. We then evaluate the performance obtained by fusing the features from these modalities. We show that our algorithms outperform the results reported by other studies for emotion/valence/arousal/liking classification on the DEAP and MAHNOB-HCI datasets and establish benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate our algorithms by combining the datasets and by using transfer learning, showing that the proposed method overcomes inconsistencies between the datasets. In total, we analyze multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, utilizing a convolution-deconvolution network, we propose a new technique for identifying the salient brain regions that correspond to various affective states. |
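The abstract describes evaluating each modality individually and then fusing the modalities' features for classification. Below is a minimal late-fusion sketch in PyTorch; all module names, feature dimensions, and layer sizes are illustrative assumptions rather than the authors' actual architecture.

```python
# Illustrative late-fusion classifier: every layer size and name here is
# an assumption for the sketch, not the architecture from the paper.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, eeg_dim=128, video_dim=256, n_classes=2):
        super().__init__()
        # Per-modality encoders; each can also be evaluated on its own.
        self.eeg_net = nn.Sequential(nn.Linear(eeg_dim, 64), nn.ReLU())
        self.video_net = nn.Sequential(nn.Linear(video_dim, 64), nn.ReLU())
        # Fusion head: concatenated modality features -> class logits.
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, eeg_feat, video_feat):
        fused = torch.cat([self.eeg_net(eeg_feat),
                           self.video_net(video_feat)], dim=-1)
        return self.head(fused)

# Example: a batch of 8 trials, binary (e.g., high/low valence) labels.
model = FusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 256))
print(logits.shape)  # torch.Size([8, 2])
```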
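The abstract also mentions a convolution-deconvolution network for identifying salient brain regions. A minimal encoder-decoder sketch that maps an EEG topographic image to a same-sized salience map might look as follows; the input resolution, layer choices, and output interpretation are assumptions for the sketch, not the paper's network.

```python
# Illustrative convolution-deconvolution network: encodes a 32x32 EEG
# "topographic image" and decodes a same-sized salience map. Shapes and
# layers are assumptions for this sketch, not taken from the paper.
import torch
import torch.nn as nn

class ConvDeconvSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions downsample 32x32 -> 16x16 -> 8x8.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the full resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # per-pixel salience in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one single-channel 32x32 scalp map -> 32x32 salience map.
net = ConvDeconvSaliency()
salience = net(torch.randn(1, 1, 32, 32))
print(salience.shape)  # torch.Size([1, 1, 32, 32])
```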
Authors:
– Siddharth (ORCID 0000-0002-1001-8218, ssiddhar@eng.ucsd.edu), Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
– Tzyy-Ping Jung (ORCID 0000-0002-8377-2166, jung@sccn.ucsd.edu), Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
– Terrence J. Sejnowski (ORCID 0000-0002-0622-7391, terry@salk.edu), Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
CODEN | ITACBQ |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TAFFC.2019.2916015 |
Discipline | Computer Science |
EISSN | 1949-3045 |
Genre | Original research
Funding | National Science Foundation (grants 1540943 and NCS-1734883); UC San Diego Center for Wearable Sensors; Army Research Laboratory (grant W911NF-10-2-0022)
ISSN | 1949-3045 |
Subjects | Affect (Psychology); Affective computing; Algorithms; Arousal; bio-sensing; Brain-computer interface (BCI); Classification; computer vision; Datasets; Deep learning; ECG; EEG; Electrocardiography; Electroencephalography; emotion processing; Emotions; Face; Feature extraction; GSR; Machine learning; multi-modality; Object recognition; Performance evaluation; PPG; Support vector machines; Video data; Vision
URI | https://ieeexplore.ieee.org/document/8713896 https://www.proquest.com/docview/2635044277 |