Personalized models for facial emotion recognition through transfer learning
Published in | Multimedia Tools and Applications, Vol. 79, no. 47-48, pp. 35811-35828 |
Main Authors | Rescigno, Martina; Spezialetti, Matteo; Rossi, Silvia |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.12.2020 (Springer Nature B.V.) |
Subjects | Affective computing; Convolutional neural networks; Transfer learning; Facial emotion recognition |
Abstract | Emotions are a key aspect of human life and behavior. In recent years, automatic emotion recognition has become an important component of affective computing and human-machine interaction. Among the many physiological and kinematic signals that can be used to recognize emotions, acquiring facial expression images is one of the most natural and inexpensive approaches. Creating a generalized, inter-subject model for emotion recognition from facial expressions remains a challenge, due to anatomical, cultural, and environmental differences; on the other hand, creating a subject-specific, personal model with traditional machine learning approaches would require a large dataset of labelled samples. For these reasons, this work proposes the use of transfer learning to produce subject-specific models that extract the emotional content of facial images along the valence and arousal dimensions. Transfer learning allows the knowledge assimilated by a deep convolutional neural network from a large multi-subject dataset to be reused, exploiting its feature-extraction capability in the single-subject scenario. In this way, the amount of labelled data needed to train a personalized model is reduced with respect to relying on subjective data alone. The results suggest that generalized transferred knowledge, combined with a small amount of personal data, is sufficient to achieve high recognition performance and improvements over both a generalized model and purely personal models. Good performance was obtained on both dimensions (RMSE = 0.09 for valence and RMSE = 0.1 for arousal). Overall, both the transferred knowledge and the personal data contributed to this improvement, although they alternated in providing the main contribution. Moreover, in this task, the benefits of transferring knowledge proved so marked that no specific active or passive sampling techniques were needed to select the images to be labelled. |
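The transfer-learning recipe the abstract describes (reuse a feature extractor trained on a large multi-subject dataset, then fit only a small subject-specific head on a few labelled personal samples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a frozen random projection stands in for the pretrained deep convolutional network, the face images and valence/arousal targets are synthetic placeholders, and ridge regression replaces the network's trainable output layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a CNN feature extractor pretrained on a large multi-subject
# dataset: a frozen mapping from raw face images to a feature vector.
# (A fixed random projection keeps the sketch self-contained.)
W_frozen = rng.normal(size=(64 * 64, 128))

def extract_features(images):
    """Map flattened 64x64 face images to 128-d features (frozen weights)."""
    return np.tanh(images @ W_frozen)

# A small personal dataset: a few labelled images for one subject, with
# valence/arousal targets in [-1, 1] (hypothetical synthetic data).
n_personal = 30
images = rng.normal(size=(n_personal, 64 * 64))
targets = rng.uniform(-1, 1, size=(n_personal, 2))  # columns: valence, arousal

# Personalization step: train only a lightweight regression head on the
# frozen features (closed-form ridge regression).
X = extract_features(images)
lam = 1.0
head = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)

pred = X @ head
rmse = np.sqrt(np.mean((pred - targets) ** 2, axis=0))
print("train RMSE (valence, arousal):", rmse)
```

Because the feature extractor is frozen, only the 128 x 2 head is estimated from personal data, which is why a handful of labelled images can suffice; in the paper this role is played by fine-tuning the upper layers of a deep network rather than a closed-form fit.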
Author | Rescigno, Martina; Spezialetti, Matteo; Rossi, Silvia (ORCID 0000-0002-3379-1756; silvia.rossi@unina.it). All authors: Department of Electrical Engineering and Information Technology, University of Naples Federico II |
ContentType | Journal Article |
Copyright | The Author(s) 2020 The Author(s) 2020. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1007/s11042-020-09405-4 |
Discipline | Engineering; Computer Science |
EISSN | 1573-7721 |
EndPage | 35828 |
GrantInformation | Università degli Studi di Napoli Federico II |
ISSN | 1380-7501 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 47-48 |
Keywords | Affective computing; Convolutional neural networks; Transfer learning; Facial emotion recognition |
Language | English |
ORCID | 0000-0002-3379-1756 |
OpenAccessLink | https://link.springer.com/10.1007/s11042-020-09405-4 |
PageCount | 18 |
PublicationDate | 2020-12-01 |
PublicationPlace | New York |
PublicationSubtitle | An International Journal |
PublicationTitle | Multimedia tools and applications |
PublicationTitleAbbrev | Multimed Tools Appl |
PublicationYear | 2020 |
Publisher | Springer US Springer Nature B.V |
In: Proceedings of the 5th international workshop on audio/visual emotion challenge, pp 3–8 – reference: Khorrami P, Le Paine T, Brady K, Dagli C, Huang TS (2016) How deep neural networks can improve emotion recognition on video data. In: 2016 IEEE international conference on image processing (ICIP), pp 619–623 – reference: ZhangXMahoorMHMavadatiSMFacial expression recognition using lp-norm MKL multiclass-SVMMach Vis Appl2015264467483 – reference: MavadatiSMMahoorMHBartlettKTrinhPCohnJFDisfa: a spontaneous facial action intensity databaseIEEE Trans Affect Comput201342151160 – reference: Tsymbalov E, Panov M, Shapeev A (2018) Dropout-based active learning for regression. In: International conference on analysis of images, social networks and texts, pp 247–258 – reference: RussellJA circumplex model of affectJ Pers Soc Psychol198039611611178 – reference: IzardCEBasic emotions, natural kinds, emotion schemas, and a new paradigmPerspect Psychol Sci200723260280 – reference: Susskind J, Anderson A, Hinton G (2010). The Toronto face database. Technical report, UTML TR 2010-001, University of Toronto. – reference: Szegedy C, Ioffe S, Vanhoucke V, Alemi AA (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence – reference: PanSJYangQA survey on transfer learningIEEE Trans Knowl Data Eng2009221013451359 – reference: Hasani B, Mahoor MH (2017) Facial affect estimation in the wild using deep residual and convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 9–16 – reference: Zafeiriou S, Kollias D, Nicolaou MA, Papaioannou A, Zhao G, Kotsia I (2017) Aff-wild: valence and arousal 'In-the-Wild' challenge. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 34–41 – reference: SpezialettiMCinqueLTavaresJMRPlacidiGTowards EEG-based BCI driven by emotions for addressing BCI-illiteracy: a meta-analytic reviewBehav Inform Technol2018378855871 – reference: Dhall A, Ramana Murthy O, Goecke R, Joshi J, Gedeon T (2015) Video and image based emotion recognition challenges in the wild: Emotiw 2015. In: Proceedings of the 2015 ACM on international conference on multi-modal interaction, pp 423–426 – reference: TrnkaRLačevABalcarKKuškaMTavelPModeling semantic emotion space using a 3D hypercube-projection: an innovative analytical approach for the psychology of emotionsFront Psychol20167522 – reference: WuDLinCTHuangJActive learning for regression using greedy samplingInf Sci2019474901053866975 – reference: SaloveyPMayerJDEmotional intelligenceImagin Cogn Pers199093185211 – reference: ShanCGongSMcOwanPWFacial expression recognition based on local binary patterns: a comprehensive studyImage Vis Comput2009276803816 – reference: Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: CVPR (1), vol 1, pp 511–518 3 – reference: Jiang J (2008) A literature survey on domain adaptation of statistical classifiers. Technical report, University of Illinois at Urbana-Champaign – reference: VerschuereBCrombezGKosterEUziebloKPsychopathy and physiological detection of concealed information: a reviewPsychol Belg20064699116 – reference: Kanade T, Cohn JF, Tian Y (2000) Comprehensive database for facial expression analysis. In: Proceedings fourth IEEE international conference on automatic face and gesture recognition (cat. No. PR00580), pp 46–53 – reference: Tan C, Sun F, Kong T, Zhang W, Yang C, Liu C (2018) A survey on deep transfer learning. In: International conference on artificial neural networks. 
Springer, Cham, pp 270–279 – reference: EkmanPAn argument for basic emotionsCognit Emot199263–4169200 – reference: Pantic M, Valstar M, Rademaker R, Maat L (2005) Web-based database for facial expression analysis. In: 2005 IEEE international conference on multimedia and expo, p 5 – reference: KoolagudiSGRaoKSEmotion recognition from speech: a reviewInternational journal of speech technology201215299117 – reference: MehrabianAPleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperamentCurr Psychol19961442612921714069 – reference: Ekman P, Keltner D (1997) Universal facial expressions of emotion. In: Segerstrale U, Molnar P (eds) Nonverbal communication: where nature meets culture, pp 27–46 – reference: BarrettLFAdolphsRMarsellaSMartinezAMPollakSDEmotional expressions reconsidered: challenges to inferring emotion from human facial movementsPsychol Sci Public Interest2019201168 – reference: SariyanidiEGunesHCavallaroAAutomatic analysis of facial affect: a survey of registration, representation, and recognitionIEEE Trans Pattern Anal Mach Intell201437611131133 – reference: Guo R, Li S, He L, Gao W, Qi H, Owens G (2013) Pervasive and unobtrusive emotion sensing for human mental health. In: Proceedings of the 7th international conference on pervasive computing Technologies for Healthcare, Venice, Italy, 5–8 May 2013, pp 436–439 – reference: KoBA brief review of facial emotion recognition based on visual informationSensors2018182401 – reference: Lyons M, Akamatsu S, Kamachi M, Gyoba J (1998) Coding facial expressions with gabor wavelets. In: Proceedings third IEEE international conference on automatic face and gesture recognition, pp 200–205 – reference: Ringeval F, Sonderegger A, Sauer J, Lalanne D (2013) Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. 
In: 2013 10th IEEE international conference and workshops on automatic face and gesture recognition (FG), pp 1–8 – reference: KleinsmithABianchi-BerthouzeNAffective body expression perception and recognition: a surveyIEEE Trans Affect Comput2012411533 – reference: Soleymani M, Pantic M (2012) Human-centered implicit tagging: overview and perspectives. In: 2012 IEEE international conference on systems, man, and cybernetics (SMC), pp 3304–3309 – reference: Hasani B, Mahoor MH (2017) Facial expression recognition using enhanced deep 3D convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 30–40 – reference: Soleymani M, Asghari-Esfeden S, Pantic M, Fu Y (2014) Continuous emotion detection using EEG signals and facial expressions. In: 2014 IEEE international conference on multimedia and expo (ICME), pp 1–6 – reference: Li M, Zhang T, Chen Y, Smola AJ (2014) Efficient mini-batch training for stochastic optimization. In: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining, pp 661–670 – reference: TomkinsSSAffect imagery consciousness: the complete edition: two volumes2008New YorkSpringer publishing company – reference: RussakovskyODengJSuHKrauseJSatheeshSMaSHuangZKarpathyAKhoslaABernsteinMBergACFei-FeiLImagenet large scale visual recognition challengeInt J Comput Vis201511532112523422482 – reference: Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I (2010) The extended cohn-kanade dataset (ck+): a complete dataset for action unit and emotion-specified expression. 
In: 2010 IEEE computer society conference on computer vision and pattern recognition-workshops, pp 94–101 – reference: JamesGWittenDHastieTTibshiraniRAn introduction to statistical learning2013New YorkSpringer1281.62147 – reference: Rossi S, Ercolano G, Raggioli L, Savino E, Ruocco M (2018) The disappearing robot: an analysis of disengagement and distraction during non-interactive tasks. In: 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN), pp 522–527 – volume: 37 start-page: 1113 issue: 6 year: 2014 ident: 9405_CR59 publication-title: IEEE Trans Pattern Anal Mach Intell doi: 10.1109/TPAMI.2014.2366127 – ident: 9405_CR74 doi: 10.1109/FG.2011.5771374 – volume-title: Theories of emotion year: 1980 ident: 9405_CR52 – volume: 3 start-page: 79 issue: 1 year: 1991 ident: 9405_CR24 publication-title: Neural Comput doi: 10.1162/neco.1991.3.1.79 – volume: 14 start-page: 261 issue: 4 year: 1996 ident: 9405_CR45 publication-title: Curr Psychol doi: 10.1007/BF02686918 – ident: 9405_CR8 doi: 10.1145/2818346.2829994 – volume: 9 start-page: 185 issue: 3 year: 1990 ident: 9405_CR58 publication-title: Imagin Cogn Pers doi: 10.2190/DUGG-P24E-52WK-6CDG – volume: 18 start-page: 401 issue: 2 year: 2018 ident: 9405_CR33 publication-title: Sensors doi: 10.3390/s18020401 – volume: 1 start-page: 22 issue: 6 year: 2006 ident: 9405_CR3 publication-title: J Multimed doi: 10.4304/jmm.1.6.22-35 – ident: 9405_CR53 doi: 10.1109/FG.2013.6553805 – volume: 14 start-page: 1137 issue: 2 year: 1995 ident: 9405_CR34 publication-title: Ijcai – ident: 9405_CR9 doi: 10.1109/CVPR.2015.7298878 – ident: 9405_CR64 doi: 10.1109/ICME.2014.6890301 – volume: 474 start-page: 90 year: 2019 ident: 9405_CR79 publication-title: Inf Sci doi: 10.1016/j.ins.2018.09.060 – volume: 27 start-page: 803 issue: 6 year: 2009 ident: 9405_CR61 publication-title: Image Vis Comput doi: 10.1016/j.imavis.2008.08.005 – ident: 9405_CR70 doi: 10.1007/978-3-030-01424-7_27 – volume: 2 start-page: 
260 issue: 3 year: 2007 ident: 9405_CR21 publication-title: Perspect Psychol Sci doi: 10.1111/j.1745-6916.2007.00044.x – ident: 9405_CR66 doi: 10.1109/CVPRW.2014.25 – volume: 39 start-page: 1161 issue: 6 year: 1980 ident: 9405_CR57 publication-title: J Pers Soc Psychol doi: 10.1037/h0077714 – volume: 46 start-page: 99 year: 2006 ident: 9405_CR75 publication-title: Psychol Belg doi: 10.5334/pb-46-1-2-99 – ident: 9405_CR13 doi: 10.1007/978-3-319-96133-0_24 – ident: 9405_CR63 doi: 10.1109/ICSMC.2012.6378301 – ident: 9405_CR19 doi: 10.1109/CVPRW.2017.282 – volume: 115 start-page: 211 issue: 3 year: 2015 ident: 9405_CR56 publication-title: Int J Comput Vis doi: 10.1007/s11263-015-0816-y – ident: 9405_CR27 doi: 10.1109/ICCV.2015.341 – ident: 9405_CR5 doi: 10.1145/2808196.2811634 – ident: 9405_CR31 doi: 10.1109/ICIP.2016.7532431 – ident: 9405_CR48 – volume: 109 start-page: 7241 issue: 19 year: 2012 ident: 9405_CR23 publication-title: Proc Natl Acad Sci doi: 10.1073/pnas.1200155109 – volume-title: Applied predictive modeling year: 2013 ident: 9405_CR37 doi: 10.1007/978-1-4614-6849-3 – volume: 18 start-page: 2074 issue: 7 year: 2018 ident: 9405_CR62 publication-title: Sensors doi: 10.3390/s18072074 – volume: 17 start-page: 239 issue: 3 year: 2018 ident: 9405_CR18 publication-title: J Consum Behav doi: 10.1002/cb.1710 – ident: 9405_CR12 – volume-title: Affect imagery consciousness: the complete edition: two volumes year: 2008 ident: 9405_CR71 – ident: 9405_CR76 doi: 10.1109/CVPR.2001.990517 – ident: 9405_CR50 – ident: 9405_CR16 doi: 10.1007/978-3-642-42051-1_16 – ident: 9405_CR73 doi: 10.1007/978-3-030-11027-7_24 – volume: 10 start-page: 18 issue: 1 year: 2017 ident: 9405_CR47 publication-title: IEEE Trans Affect Comput doi: 10.1109/TAFFC.2017.2740923 – volume: 22 start-page: 1345 issue: 10 year: 2009 ident: 9405_CR49 publication-title: IEEE Trans Knowl Data Eng doi: 10.1109/TKDE.2009.191 – volume: 20 start-page: 1 issue: 1 year: 2019 ident: 9405_CR2 publication-title: 
Psychol Sci Public Interest doi: 10.1177/1529100619832930 – ident: 9405_CR20 doi: 10.1109/CVPRW.2017.245 – volume: 6 start-page: 169 issue: 3–4 year: 1992 ident: 9405_CR11 publication-title: Cognit Emot doi: 10.1080/02699939208411068 – ident: 9405_CR68 doi: 10.1109/CVPR.2015.7298594 – ident: 9405_CR39 doi: 10.1145/2623330.2623612 – volume: 23 start-page: 869 issue: 8 year: 2012 ident: 9405_CR60 publication-title: Psychol Sci doi: 10.1177/0956797611435134 – volume: 111 start-page: E1454 issue: 15 year: 2014 ident: 9405_CR10 publication-title: Proc Natl Acad Sci doi: 10.1073/pnas.1322355111 – volume: 18 start-page: 775 issue: 4 year: 2016 ident: 9405_CR81 publication-title: IEEE Transactions on Multimedia doi: 10.1109/TMM.2016.2523421 – ident: 9405_CR26 – volume: 12 start-page: 1447 issue: 4 year: 2018 ident: 9405_CR22 publication-title: International Journal on Interactive Design and Manufacturing (IJIDeM) doi: 10.1007/s12008-018-0473-9 – ident: 9405_CR28 doi: 10.1145/2818346.2830596 – volume: 15 start-page: 99 issue: 2 year: 2012 ident: 9405_CR35 publication-title: International journal of speech technology doi: 10.1007/s10772-011-9125-1 – ident: 9405_CR80 doi: 10.1109/CVPRW.2017.248 – volume: 37 start-page: 855 issue: 8 year: 2018 ident: 9405_CR65 publication-title: Behav Inform Technol doi: 10.1080/0144929X.2018.1485745 – ident: 9405_CR69 doi: 10.1609/aaai.v31i1.11231 – ident: 9405_CR36 – volume-title: An introduction to statistical learning year: 2013 ident: 9405_CR25 doi: 10.1007/978-1-4614-7138-7 – ident: 9405_CR4 doi: 10.1109/CVPRW.2017.246 – ident: 9405_CR42 doi: 10.1109/CVPRW.2010.5543262 – ident: 9405_CR51 – ident: 9405_CR54 doi: 10.1145/2808196.2811642 – volume: 259 start-page: 143 year: 2017 ident: 9405_CR78 publication-title: Image Vis Comput – volume: 7 issue: 3 year: 2012 ident: 9405_CR30 publication-title: PLoS One doi: 10.1371/journal.pone.0032321 – volume: 34 start-page: 1964 issue: 15 year: 2013 ident: 9405_CR6 publication-title: Pattern Recogn 
Lett doi: 10.1016/j.patrec.2013.02.002 – ident: 9405_CR17 doi: 10.4108/icst.pervasivehealth.2013.252133 – ident: 9405_CR43 doi: 10.1109/AFGR.1998.670949 – ident: 9405_CR40 doi: 10.1109/CVPRW.2017.244 – volume: 4 start-page: 151 issue: 2 year: 2013 ident: 9405_CR44 publication-title: IEEE Trans Affect Comput doi: 10.1109/T-AFFC.2013.4 – volume: 13 start-page: 7714 issue: 6 year: 2013 ident: 9405_CR15 publication-title: Sensors doi: 10.3390/s130607714 – ident: 9405_CR46 – ident: 9405_CR67 – volume-title: Uncertainty in deep learning year: 2016 ident: 9405_CR14 – volume: 139 start-page: 255 issue: 1 year: 2013 ident: 9405_CR41 publication-title: Psychol Bull doi: 10.1037/a0029038 – ident: 9405_CR77 – volume: 137 start-page: 834 issue: 5 year: 2011 ident: 9405_CR38 publication-title: Psychol Bull doi: 10.1037/a0024244 – volume: 7 start-page: 522 year: 2016 ident: 9405_CR72 publication-title: Front Psychol doi: 10.3389/fpsyg.2016.00522 – volume: 39 start-page: 529 issue: 3 year: 2016 ident: 9405_CR7 publication-title: IEEE Trans Pattern Anal Mach Intell doi: 10.1109/TPAMI.2016.2547397 – volume: 4 start-page: 15 issue: 1 year: 2012 ident: 9405_CR32 publication-title: IEEE Trans Affect Comput doi: 10.1109/T-AFFC.2012.16 – ident: 9405_CR55 doi: 10.1109/ROMAN.2018.8525514 – ident: 9405_CR1 – volume: 26 start-page: 467 issue: 4 year: 2015 ident: 9405_CR82 publication-title: Mach Vis Appl doi: 10.1007/s00138-015-0677-y – ident: 9405_CR29 doi: 10.1109/AFGR.2000.840611 |
StartPage | 35811 |
SubjectTerms | Affective computing; Arousal; Artificial neural networks; Computer Communication Networks; Computer Science; Customization; Data Structures and Information Theory; Datasets; Emotion recognition; Emotions; Feature extraction; Image acquisition; Machine learning; Multimedia Information Systems; Personal information; Sampling methods; Special Purpose and Application-Based Systems |
Title | Personalized models for facial emotion recognition through transfer learning |
URI | https://link.springer.com/article/10.1007/s11042-020-09405-4 https://www.proquest.com/docview/2473386532 |
Volume | 79 |
Main Authors | Rescigno, Martina; Spezialetti, Matteo; Rossi, Silvia |
DOI | 10.1007/s11042-020-09405-4 |
ISSN | 1380-7501 |
EISSN | 1573-7721 |