Action Decouple Multi-Tasking for Micro-Expression Recognition
| Published in | IEEE Access, Vol. 11, pp. 82978–82988 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023 |
Summary: Micro-expressions are brief, involuntary facial movements that reveal genuine emotions. However, extracting and learning features from micro-expressions is challenging due to their short duration and low intensity. To address this problem, we propose the ADMME (Action Decouple Multi-tasking for Micro-Expression Recognition) method. In our model, we adopt a pseudo-Siamese network architecture and leverage contrastive learning to obtain a better representation of micro-expression motion features. During model training, we utilize focal loss to handle the class imbalance issue in micro-expression datasets. Additionally, we introduce an AU (Action Unit) detection task, which provides a new inductive bias for micro-expression detection, enhancing the model's generalization and robustness. In five-class classification experiments conducted on the CASME II and SAMM datasets, we achieve accuracy rates of 86.34% and 81.28%, with F1 scores of 0.8635 and 0.8168, respectively. These results validate the effectiveness of our method in micro-expression recognition tasks. Furthermore, we validate the contribution of each component of our approach through a series of ablation experiments.
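For orientation, the following is a minimal PyTorch sketch of two ingredients the abstract names: focal loss for the imbalanced emotion classes and an auxiliary AU-detection head trained jointly with the emotion classifier. It is not the authors' implementation; the feature dimension, number of AUs, loss weights, and the α/γ settings are hypothetical placeholders, and the pseudo-Siamese contrastive branch is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Focal loss (Lin et al., 2017): down-weights easy examples so training
    focuses on hard, minority-class samples. A single scalar alpha is used
    here for simplicity; per-class weights are a common variant."""
    def __init__(self, gamma: float = 2.0, alpha: float = 0.25):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per sample
        p_t = torch.exp(-ce)                                     # model prob. of the true class
        return (self.alpha * (1.0 - p_t) ** self.gamma * ce).mean()

class MultiTaskHead(nn.Module):
    """Shared backbone feature -> (i) 5-class micro-expression logits and
    (ii) multi-label AU-detection logits. Dimensions are assumptions."""
    def __init__(self, feat_dim: int = 512, num_classes: int = 5, num_aus: int = 12):
        super().__init__()
        self.emotion = nn.Linear(feat_dim, num_classes)
        self.au = nn.Linear(feat_dim, num_aus)

    def forward(self, feat: torch.Tensor):
        return self.emotion(feat), self.au(feat)

# Joint objective: focal loss on the emotion classes plus binary
# cross-entropy on the auxiliary AU-detection task.
focal = FocalLoss()
head = MultiTaskHead()
feat = torch.randn(8, 512)                        # batch of backbone features
emo_labels = torch.randint(0, 5, (8,))            # 5-class emotion targets
au_labels = torch.randint(0, 2, (8, 12)).float()  # multi-label AU targets
emo_logits, au_logits = head(feat)
loss = focal(emo_logits, emo_labels) \
     + F.binary_cross_entropy_with_logits(au_logits, au_labels)
```

Training the AU head alongside the classifier is one standard way to inject the inductive bias the abstract describes: AU labels force the shared features to encode localized facial muscle activations rather than only holistic appearance.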
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3301950