MESNet: A Convolutional Neural Network for Spotting Multi-Scale Micro-Expression Intervals in Long Videos

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 30, pp. 3956–3969
Main Authors: Wang, Su-Jing; He, Ying; Li, Jingting; Fu, Xiaolan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021

Summary: Micro-expression spotting is a fundamental step in micro-expression analysis. This paper proposes a novel network based on a convolutional neural network (CNN) for spotting multi-scale spontaneous micro-expression intervals in long videos. We name the network the Micro-Expression Spotting Network (MESNet). It is composed of three modules. The first module is a (2+1)D Spatiotemporal Convolutional Network, which uses 2D convolution to extract spatial features and 1D convolution to extract temporal features. The second module is a Clip Proposal Network, which generates proposed micro-expression clips. The last module is a Classification Regression Network, which classifies each proposed clip as micro-expression or not, and further regresses its temporal boundaries. We also propose a novel evaluation metric for micro-expression spotting. Extensive experiments were conducted on two long-video datasets, CAS(ME)² and SAMM, with leave-one-subject-out cross-validation used to evaluate spotting performance. Results show that the proposed MESNet effectively improves the F1-score, and comparative results show that it outperforms other state-of-the-art methods, especially on the SAMM dataset.
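The (2+1)D factorization described in the summary, a 2D convolution over each frame followed by a 1D convolution over time, can be sketched in a few lines. The PyTorch module below is only an illustration of that factorization under assumed settings: the framework, channel widths, kernel sizes, and input shape are all assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    """(2+1)D convolution sketch: a 2D spatial convolution followed by
    a 1D temporal convolution. Channel widths and kernel sizes are
    illustrative assumptions, not MESNet's published configuration."""

    def __init__(self, in_channels, out_channels,
                 spatial_kernel=3, temporal_kernel=3):
        super().__init__()
        # Intermediate width between the two factored convolutions;
        # a plain midpoint is assumed here for simplicity.
        mid_channels = (in_channels + out_channels) // 2
        # 2D convolution over (H, W), applied independently per frame:
        # kernel (1, k, k) on a (N, C, T, H, W) tensor.
        self.spatial = nn.Conv3d(
            in_channels, mid_channels,
            kernel_size=(1, spatial_kernel, spatial_kernel),
            padding=(0, spatial_kernel // 2, spatial_kernel // 2))
        # 1D convolution over the temporal axis T: kernel (k, 1, 1).
        self.temporal = nn.Conv3d(
            mid_channels, out_channels,
            kernel_size=(temporal_kernel, 1, 1),
            padding=(temporal_kernel // 2, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        x = self.relu(self.spatial(x))
        return self.relu(self.temporal(x))

# A 64-frame grayscale clip at 128x128 resolution (shape assumed).
clip = torch.randn(1, 1, 64, 128, 128)
features = SpatioTemporalConv(1, 16)(clip)
print(features.shape)  # torch.Size([1, 16, 64, 128, 128])

Factoring a full 3D convolution this way keeps spatial and temporal feature extraction separate, which matches the summary's description of 2D convolution for spatial features and 1D convolution for temporal features.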
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2021.3064258