MESNet: A Convolutional Neural Network for Spotting Multi-Scale Micro-Expression Intervals in Long Videos


Bibliographic Details
Published in: IEEE Transactions on Image Processing (IEEE Trans. Image Process.), Vol. 30, pp. 3956–3969
Main Authors: Wang, Su-Jing; He, Ying; Li, Jingting; Fu, Xiaolan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2021.3064258

Abstract
Micro-expression spotting is a fundamental step in micro-expression analysis. This paper proposes a novel network based on a convolutional neural network (CNN) for spotting multi-scale spontaneous micro-expression intervals in long videos. We name the network the Micro-Expression Spotting Network (MESNet). It is composed of three modules. The first module is a 2+1D Spatiotemporal Convolutional Network, which uses 2D convolution to extract spatial features and 1D convolution to extract temporal features. The second module is a Clip Proposal Network, which generates candidate micro-expression clips. The last module is a Classification Regression Network, which classifies each proposed clip as micro-expression or not and further regresses its temporal boundaries. We also propose a novel evaluation metric for micro-expression spotting. Extensive experiments have been conducted on two long-video datasets, CAS(ME)² and SAMM, with leave-one-subject-out cross-validation used to evaluate spotting performance. Results show that the proposed MESNet effectively improves the F1-score, and comparative results show that MESNet outperforms other state-of-the-art methods, especially on the SAMM dataset.
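The 2+1D design factorizes a 3D spatiotemporal convolution into a 2D convolution applied per frame (spatial features) followed by a 1D convolution along the frame axis (temporal features). The PyTorch sketch below illustrates only that factorization; the class name, channel counts, and kernel sizes are illustrative assumptions, not MESNet's published configuration.

```python
# Minimal sketch of a (2+1)D convolution block, assuming a 2D spatial
# conv per frame followed by a 1D temporal conv across frames. Layer
# sizes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):  # hypothetical name for illustration
    def __init__(self, in_channels: int, mid_channels: int, out_channels: int):
        super().__init__()
        # 2D spatial convolution, applied to every frame independently.
        self.spatial = nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1)
        # 1D temporal convolution along the frame axis.
        self.temporal = nn.Conv1d(mid_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        b, c, t, h, w = x.shape
        # Fold time into the batch axis so Conv2d sees single frames.
        y = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
        c2, h2, w2 = y.shape[1:]
        # Fold space into the batch axis so Conv1d sees per-pixel time series.
        y = y.reshape(b, t, c2, h2, w2).permute(0, 3, 4, 2, 1).reshape(b * h2 * w2, c2, t)
        y = self.temporal(y)
        # Restore the (batch, channels, time, height, width) layout.
        return y.reshape(b, h2, w2, -1, t).permute(0, 3, 4, 1, 2)

# Example: a batch of two 8-frame clips of 64x64 grayscale face crops.
features = Conv2Plus1D(1, 16, 32)(torch.randn(2, 1, 8, 64, 64))
print(features.shape)  # torch.Size([2, 32, 8, 64, 64])
```

Separating the two convolutions keeps the spatial and temporal receptive fields independently sized, which suits micro-expression intervals that vary in duration across long videos.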
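The abstract does not spell out the proposed evaluation metric, but spotting results of this kind are conventionally scored by matching proposed intervals to ground-truth intervals via a temporal Intersection-over-Union (IoU) threshold and computing an F1-score. The sketch below shows that conventional scheme purely as an assumed reference point; the function names and the 0.5 threshold are illustrative, and the paper's own metric may differ.

```python
# Assumed, conventional interval-matching F1 for spotting: a proposal is
# a true positive when its temporal IoU with an unmatched ground-truth
# interval is >= 0.5. Shown for illustration only; the paper proposes
# its own evaluation metric, which this sketch does not claim to be.

def interval_iou(a, b):
    """Temporal IoU of two (onset, offset) frame intervals, inclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(proposals, ground_truth, iou_threshold=0.5):
    """F1-score over proposed vs. ground-truth micro-expression intervals."""
    matched = set()
    tp = 0
    for p in proposals:
        for i, g in enumerate(ground_truth):
            if i not in matched and interval_iou(p, g) >= iou_threshold:
                matched.add(i)  # each ground-truth interval matches once
                tp += 1
                break
    fp = len(proposals) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if proposals else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: one of two proposals overlaps the ground truth enough (IoU ~0.73).
print(spotting_f1([(100, 140), (300, 320)], [(95, 150)]))  # ~0.667
```

Under leave-one-subject-out cross-validation, as used in the paper, such true-positive, false-positive, and false-negative counts would be accumulated over every subject's held-out videos before the final F1-score is computed.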
Author Details
– Wang, Su-Jing (ORCID: 0000-0002-8774-6328; wangsujing@psych.ac.cn), Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
– He, Ying (ORCID: 0000-0002-7098-7598), Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
– Li, Jingting (ORCID: 0000-0001-8742-8488), Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
– Fu, Xiaolan (ORCID: 0000-0002-6944-1037), Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
CODEN: IIPRE4
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
Discipline: Applied Sciences; Engineering
Genre: Original Research; Journal Article
Grant Information
– National Natural Science Foundation of China (grants U19B2032, 61772511, 62061136001; funder ID 10.13039/501100001809)
– China Postdoctoral Science Foundation (grant 2020M680738; funder ID 10.13039/501100002858)
– National Key Research and Development Project (grant 2018AAA0100205)
Peer Reviewed: Yes; Scholarly: Yes; Open Access: Yes (not at the DOI)
License:
https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
Open Access Link: http://ir.psych.ac.cn/handle/311026/38736
PMID: 33788686
Page Count: 14
Subject Terms
Artificial neural networks
Clips
Convolution
Convolutional neural network
Convolutional neural networks
Datasets
deep learning
detection
Feature extraction
Intervals
long videos
Measurement
micro-expression spotting
Modules
Neural networks
Performance evaluation
Regression analysis
Spatiotemporal phenomena
Two dimensional displays
Video
Videos
URI
https://ieeexplore.ieee.org/document/9392303
https://www.ncbi.nlm.nih.gov/pubmed/33788686
https://www.proquest.com/docview/2509292489
https://www.proquest.com/docview/2507729990