RES-CapsNet: an improved capsule network for micro-expression recognition
Published in | Multimedia Systems, Vol. 29, No. 3, pp. 1593–1601
Main Authors | Shu, Xin; Li, Jia; Shi, Liang; Huang, Shucheng
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.06.2023
Abstract | A micro-expression is a brief, involuntary facial expression that reveals a person's genuine, concealed emotions. Despite substantial progress, micro-expression recognition remains a significant challenge because of the low intensity and short duration of these expressions. In this paper, we investigate micro-expression recognition with deep learning techniques and present RES-CapsNet, an improved capsule network that employs Res2Net as the backbone to extract multi-level, multi-scale features. RES-CapsNet adds a squeeze-and-excitation (SE) block to the primary capsule layer (PrimaryCaps); the SE block highlights informative micro-expression features and suppresses uninformative ones. In addition, between the first convolutional layer and PrimaryCaps, we introduce an efficient channel attention (ECA) module that adds only a few parameters while markedly improving performance. The proposed pipeline first extracts the apex frame from each micro-expression sequence to capture the most distinct facial muscle movements, and then feeds the pre-processed image into RES-CapsNet for feature extraction and classification. Leave-One-Subject-Out (LOSO) cross-validation on three widely used spontaneous micro-expression databases (CASME II, SMIC, and SAMM) is adopted to assess RES-CapsNet. Extensive experiments demonstrate that RES-CapsNet captures fine-grained micro-expression details and achieves substantially higher performance than the baseline CapsuleNet.
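The abstract names two channel-attention components: an SE block applied at the primary capsule layer and an ECA module placed between the first convolutional layer and PrimaryCaps. For reference, the sketch below gives minimal PyTorch implementations of the standard squeeze-and-excitation and efficient-channel-attention blocks these components are based on; it is an illustration of the published modules, not the authors' released code, and the reduction ratio and 1D kernel size are assumed defaults.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pool, two FC layers, channel reweighting."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction=16 is an assumed default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # emphasize informative channels, damp the rest


class ECABlock(nn.Module):
    """Efficient channel attention: global average pool, one 1D convolution across
    channels, and a sigmoid gate; only k_size weights are added per block."""
    def __init__(self, k_size: int = 3):  # k_size=3 is an assumed default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x).squeeze(-1).transpose(-1, -2)                  # (B, C, 1, 1) -> (B, 1, C)
        y = self.sigmoid(self.conv(y)).transpose(-1, -2).unsqueeze(-1)  # back to (B, C, 1, 1)
        return x * y


# Both blocks only rescale channels, so the feature-map shape is preserved.
x = torch.randn(2, 64, 28, 28)
print(SEBlock(64)(x).shape, ECABlock()(x).shape)  # torch.Size([2, 64, 28, 28]) twice
```

Because both blocks leave the spatial resolution untouched, they can be dropped in before PrimaryCaps without changing the capsule dimensions, which matches where the abstract places them.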
Author | Shu, Xin (School of Computer Science, Jiangsu University of Science and Technology); Li, Jia (School of Computer Science, Jiangsu University of Science and Technology); Shi, Liang (School of Computer Science, Jiangsu University of Science and Technology; School of Computer Science and Communication Engineering, JiangSu University); Huang, Shucheng (School of Computer Science, Jiangsu University of Science and Technology)
Copyright | The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
DOI | 10.1007/s00530-023-01068-z |
Discipline | Computer Science |
EISSN | 1432-1882 |
EndPage | 1601 |
GrantInformation | National Natural Science Foundation of China (Grant No. 62276118)
ISSN | 0942-4962 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | Deep learning; SENet; Micro-expression recognition; Capsule network; Res2Net
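The abstract above reports results under Leave-One-Subject-Out (LOSO) cross-validation on CASME II, SMIC, and SAMM. The snippet below is a minimal, self-contained sketch of how LOSO folds can be built with scikit-learn's LeaveOneGroupOut; the synthetic data and the linear-SVM stand-in classifier are assumptions for illustration only, not the paper's network or databases.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC


def loso_accuracy(features: np.ndarray, labels: np.ndarray, subjects: np.ndarray) -> float:
    """Hold out every subject once, train on the remaining subjects, and average fold accuracy."""
    accs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, groups=subjects):
        clf = SVC(kernel="linear").fit(features[train_idx], labels[train_idx])  # stand-in classifier
        accs.append(clf.score(features[test_idx], labels[test_idx]))
    return float(np.mean(accs))


# Tiny synthetic example: 20 samples from 5 subjects with 3 emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
y = rng.integers(0, 3, size=20)
groups = np.repeat(np.arange(5), 4)  # subject ID for every sample
print(loso_accuracy(X, y, groups))
```

Grouping folds by subject rather than by sample is what makes the protocol subject-independent: no frames from the held-out subject ever appear in training.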
PageCount | 9 |
PublicationDate | 2023-06-01 |
PublicationPlace | Berlin/Heidelberg |
PublicationTitle | Multimedia systems |
PublicationTitleAbbrev | Multimedia Systems |
PublicationYear | 2023 |
Publisher | Springer Berlin Heidelberg Springer Nature B.V |
StartPage | 1593 |
SubjectTerms | Computer Communication Networks; Computer Graphics; Computer Science; Cryptology; Data Storage Representation; Feature extraction; Machine learning; Multimedia Information Systems; Operating Systems; Recognition; Regular Paper
Title | RES-CapsNet: an improved capsule network for micro-expression recognition |
URI | https://link.springer.com/article/10.1007/s00530-023-01068-z https://www.proquest.com/docview/2821009345 |
Volume | 29 |