Deep convolutional BiLSTM fusion network for facial expression recognition
Published in: The Visual Computer, Vol. 36, No. 3, pp. 499–508
Main Authors:
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.03.2020 (Springer Nature B.V.)
Summary: Deep learning algorithms have shown significant performance improvements for facial expression recognition (FER). Most deep learning-based methods, however, focus primarily on spatial appearance features for classification and discard much useful temporal information. In this work, we present a novel framework that jointly learns spatial features and temporal dynamics for FER. Given the image sequence of an expression, spatial features are extracted from each frame using a deep network, while the temporal dynamics are modeled by a convolutional network that takes a pair of consecutive frames as input. Finally, the framework accumulates clues from the fused features with a BiLSTM network. In addition, the framework is end-to-end learnable, so the temporal information can adapt to complement the spatial features. Experimental results on three benchmark databases, CK+, Oulu-CASIA and MMI, show that the proposed framework outperforms state-of-the-art methods.
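The pipeline the summary describes (per-frame spatial features, temporal features from consecutive frame pairs, feature fusion, then a BiLSTM accumulating clues over the sequence) can be sketched roughly as below. Everything here is an illustrative assumption: the two feature extractors are stand-in linear maps rather than the paper's deep CNNs, all dimensions are invented, and no training is shown — it is a shape-level sketch of the data flow, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D_frame, D_feat, H, C = 8, 64, 32, 16, 6  # frames, frame dim, feature dim, LSTM hidden, classes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, W, U, b, H):
    """Single-direction LSTM forward pass; returns hidden states for each step."""
    h, c, hs = np.zeros(H), np.zeros(H), []
    for x in xs:
        z = W @ x + U @ h + b                      # stacked gates: input, forget, output, cell
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h)
    return np.array(hs)

# Stand-ins for the deep networks described in the abstract (hypothetical weights)
W_spat = rng.normal(size=(D_feat, D_frame)) * 0.1       # "spatial network" placeholder
W_temp = rng.normal(size=(D_feat, 2 * D_frame)) * 0.1   # "temporal network" on frame pairs

frames = rng.normal(size=(T, D_frame))                  # one expression image sequence

spatial = frames @ W_spat.T                             # spatial features, one per frame
pairs = np.concatenate([frames[:-1], frames[1:]], axis=1)  # consecutive-frame pairs
temporal = pairs @ W_temp.T
temporal = np.vstack([temporal, temporal[-1]])          # pad to sequence length T

fused = np.concatenate([spatial, temporal], axis=1)     # feature fusion

# BiLSTM: run the fused sequence forward and backward, concatenate the end states
D_in = fused.shape[1]
Wf, Uf, bf = rng.normal(size=(4*H, D_in))*0.1, rng.normal(size=(4*H, H))*0.1, np.zeros(4*H)
Wb, Ub, bb = rng.normal(size=(4*H, D_in))*0.1, rng.normal(size=(4*H, H))*0.1, np.zeros(4*H)
h_fwd = lstm_pass(fused, Wf, Uf, bf, H)
h_bwd = lstm_pass(fused[::-1], Wb, Ub, bb, H)[::-1]
summary = np.concatenate([h_fwd[-1], h_bwd[0]])         # clues accumulated from both directions

W_cls = rng.normal(size=(C, 2 * H)) * 0.1               # expression classifier placeholder
logits = W_cls @ summary
probs = np.exp(logits - logits.max()); probs /= probs.sum()
pred = int(probs.argmax())
print(pred, probs.shape)
```

In the actual framework all of these stages are differentiable and trained jointly end-to-end, which is what lets the temporal branch learn features that complement the spatial ones.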
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-019-01636-3