Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion

Bibliographic Details
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 4752-4760
Main Authors: Ryumina, Elena; Markitantov, Maxim; Ryumin, Dmitry; Kaya, Heysem; Karpov, Alexey
Format: Conference Proceeding
Language: English
Published: IEEE, 17.06.2024
Summary: Compound Expression Recognition (CER), a sub-field of affective computing, is a novel task in intelligent human-computer interaction and multimodal user interfaces. We propose a novel audio-visual method for CER. Our method relies on emotion recognition models that fuse modalities at the emotion probability level, while decisions regarding compound expression prediction are based on the pair-wise sum of weighted emotion probability distributions. Notably, our method does not use any training data specific to the target task, making the problem a zero-shot classification task. The method is evaluated in multi-corpus training and cross-corpus validation setups. Without training on the target corpus or target task, we achieve F1 scores of 32.15% and 25.56% on the AffWild2 and C-EXPR-DB test subsets, respectively. Therefore, our method is on par with methods trained on the target corpus or target task. The source code is publicly available at https://elenaryumina.github.io/AVCER/.
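The fusion scheme described in the summary can be sketched as follows. This is a minimal illustration, not the authors' implementation: the emotion label set, the compound-expression pairs, and the modality weights are all assumptions chosen for the example; only the overall idea (weighted fusion of audio and visual emotion probabilities, then pair-wise summing to score compound expressions) comes from the abstract.

```python
import numpy as np

# Basic emotion classes (assumed label set for illustration).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Compound expressions as pairs of basic emotions (illustrative subset).
COMPOUNDS = [("fear", "surprise"), ("happiness", "surprise"), ("sadness", "anger")]

def fuse_probs(p_audio, p_video, w_audio=0.4, w_video=0.6):
    """Fuse modalities at the emotion-probability level (weights are assumed)."""
    p = w_audio * np.asarray(p_audio) + w_video * np.asarray(p_video)
    return p / p.sum()  # renormalize to a valid distribution

def compound_scores(p):
    """Score each compound expression as the pair-wise sum of its two
    constituent emotions' fused probabilities."""
    idx = {e: i for i, e in enumerate(EMOTIONS)}
    return {f"{a}+{b}": p[idx[a]] + p[idx[b]] for a, b in COMPOUNDS}

# Example per-modality emotion probability distributions (made up).
p_audio = [0.05, 0.05, 0.30, 0.10, 0.10, 0.40]
p_video = [0.10, 0.05, 0.25, 0.05, 0.15, 0.40]

scores = compound_scores(fuse_probs(p_audio, p_video))
prediction = max(scores, key=scores.get)  # zero-shot compound prediction
```

Because the compound decision is derived purely from the basic-emotion probabilities, no compound-labeled training data is needed, which is what makes the setup zero-shot.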
ISSN:2160-7516
DOI:10.1109/CVPRW63382.2024.00478