Fusion of Attentional and Traditional Convolutional Networks for Facial Expression Recognition
Published in | EAI Endorsed Transactions on Pervasive Health and Technology, Vol. 7, No. 27, p. e2 |
---|---|
Format | Journal Article |
Language | English |
Published | European Alliance for Innovation (EAI), 01.06.2021 |
Summary: | INTRODUCTION: The facial expression classification problem has been studied by many researchers, but effectively classifying facial expressions in highly challenging datasets remains difficult. In recent years, the self-weighted Squeeze-and-Excitation block (SE-block) technique, which evaluates the importance of each feature map produced by a convolution layer in a Convolutional Neural Network (CNN), has shown high efficiency in many practical applications. OBJECTIVES: Aiming to balance speed and accuracy in facial expression classification, we propose two novel model architectures. METHODS: The two models proposed in this paper are: (1) a SqueezeNet model combined with a Squeeze-and-Excitation block, and (2) a SqueezeNet with Complex Bypass combined with a Squeeze-and-Excitation block. These models are evaluated on complex facial expression datasets. Furthermore, ensemble learning has been shown to be effective for combining models; therefore, to improve the accuracy of facial expression classification and to compare with state-of-the-art methods, we additionally use the Inception-ResNet V1 model (3) and combine models (1), (2), and (3) for facial expression classification. RESULTS: The proposed approach achieves high accuracy: 99.10% on the Extended Cohn-Kanade (CK+) dataset with seven basic emotion classes (using the last 3 frames), 94.20% on the Oulu-CASIA dataset with six basic emotion classes (from the 7th frame), and 74.89% on FER2013. CONCLUSION: Experimental results on highly challenging datasets (Extended Cohn-Kanade, FER2013, Oulu-CASIA) demonstrate the effectiveness of the two proposed models and of the technique of combining the three models. |
---|---|
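The SE-block mentioned in the abstract reweights each channel of a convolutional feature map by a learned importance score (global average pooling, a bottleneck of two fully connected layers, then a sigmoid gate). The sketch below illustrates that mechanism in NumPy; the function name, the random (not learned) weights `w1`/`w2`, and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation sketch: scale each channel of a (C, H, W)
    feature tensor by a learned importance weight in (0, 1).
    Note: w1/w2 stand in for learned FC weights (illustrative only)."""
    # Squeeze: global average pooling gives one descriptor per channel.
    z = feature_maps.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid.
    s = np.maximum(z @ w1, 0.0)                   # (C,) -> (C // r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))           # back to (C,), each in (0, 1)
    # Scale: multiply every feature map by its channel weight.
    return feature_maps * s[:, None, None]

# Toy usage with a hypothetical 8-channel feature tensor and reduction ratio 4.
rng = np.random.default_rng(0)
c = 8
x = rng.standard_normal((c, 5, 5))
w1 = rng.standard_normal((c, c // 4))
w2 = rng.standard_normal((c // 4, c))
y = se_block(x, w1, w2)
print(y.shape)  # (8, 5, 5)
```

Because the gate is a sigmoid, each channel is only attenuated or preserved, never amplified beyond its original magnitude.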
ISSN: | 2411-7145 |
DOI: | 10.4108/eai.17-3-2021.169033 |
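The abstract combines models (1), (2), and (3) but does not state the combination rule; a common simple ensemble scheme is averaging the models' per-class softmax probabilities and taking the arg-max. The sketch below shows that scheme with invented probability vectors (the variable names and numbers are illustrative, not results from the paper).

```python
import numpy as np

# Hypothetical softmax outputs of the three models for one test image
# over three emotion classes (in practice these come from the trained nets).
p_squeezenet_se = np.array([0.10, 0.70, 0.20])         # model (1)
p_squeezenet_bypass_se = np.array([0.15, 0.55, 0.30])  # model (2)
p_inception_resnet = np.array([0.05, 0.80, 0.15])      # model (3)

# Ensemble by averaging the probability vectors, then pick the arg-max class.
p_ensemble = (p_squeezenet_se + p_squeezenet_bypass_se + p_inception_resnet) / 3
predicted_class = int(np.argmax(p_ensemble))
print(predicted_class)  # 1
```

Averaging keeps the ensemble output a valid probability distribution and tends to cancel out errors that the individual models do not share.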