OAENet: Oriented attention ensemble for accurate facial expression recognition
Published in | Pattern recognition Vol. 112; p. 107694 |
Main Authors | Wang, Zhengning; Zeng, Fanwei; Liu, Shuaicheng; Zeng, Bing |
Format | Journal Article |
Language | English |
Published | Elsevier Ltd, 01.04.2021 |
Subjects | Facial expression recognition; Attention; Weighted mask; Oriented gradient |
Abstract | •We propose an Oriented Attention Enable Network (OAENet) architecture for FER, which aggregates ROI awareness and an attention mechanism, ensuring sufficient utilization of both global and local features.•We propose a weighted mask that combines facial landmarks and correlation coefficients, which proves effective in improving attention on local regions.•Our method achieves state-of-the-art performance on several leading datasets such as CK+, RAF-DB and AffectNet.
Facial Expression Recognition (FER) is a challenging yet important research topic owing to its academic and commercial potential. In this work, we propose an oriented attention pseudo-siamese network that takes advantage of global and local facial information for highly accurate FER. Our network consists of two branches: a maintenance branch, made up of several convolutional blocks, that exploits high-level semantic features, and an attention branch, with a UNet-like architecture, that extracts local highlight information. Specifically, we first feed the face image into the maintenance branch. For the attention branch, we calculate the correlation coefficient between a face and its sub-regions. Next, we construct a weighted mask by combining the facial landmarks and the correlation coefficients. Then, the weighted mask is sent to the attention branch. Finally, the two branches are fused to output the classification results. As such, a direction-dependent attention mechanism is established to remedy the insufficient utilization of local information. With the help of our attention mechanism, our network not only captures the global picture but can also concentrate on important local areas. Experiments are carried out on four leading facial expression datasets. Our method achieves highly competitive performance compared with other state-of-the-art methods. |
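The two-branch design and the weighted mask described above can be made concrete with a short sketch. The following is a minimal illustration, assuming PyTorch and NumPy: a maintenance branch of plain convolutional blocks on the face image, a small UNet-like attention branch on the 1-channel weighted mask, and concatenation of the pooled features before classification. Every name (OAENetSketch, region_correlation, weighted_mask), all layer sizes, the histogram-based correlation stand-in, and the Gaussian-bump mask are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch, assuming PyTorch and NumPy; names and sizes are hypothetical.
import numpy as np
import torch
import torch.nn as nn


def region_correlation(gray_face: np.ndarray, crop: np.ndarray, bins: int = 32) -> float:
    """Stand-in for the face/sub-region correlation coefficient: Pearson
    correlation of intensity histograms (the abstract does not spell out the
    exact formulation)."""
    hf, _ = np.histogram(gray_face, bins=bins, range=(0, 255), density=True)
    hc, _ = np.histogram(crop, bins=bins, range=(0, 255), density=True)
    return float(np.corrcoef(hf, hc)[0, 1])


def weighted_mask(shape, landmarks, weights, sigma: float = 6.0) -> np.ndarray:
    """Toy weighted mask: Gaussian bumps at landmark (y, x) positions, each
    scaled by the correlation weight of the region it belongs to."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=np.float32)
    for (y, x), wgt in zip(landmarks, weights):
        mask += wgt * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return mask / (mask.max() + 1e-8)


def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


class OAENetSketch(nn.Module):
    """Two-branch network: global (maintenance) branch on the face image,
    UNet-like attention branch on the weighted mask, late fusion by
    concatenation before the expression classifier."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Maintenance branch: plain convolutional blocks on the 3-channel face.
        self.maintain = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128), nn.AdaptiveAvgPool2d(1),
        )
        # Attention branch: tiny UNet-like encoder/decoder on the 1-channel mask.
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(32 + 16, 16)  # skip connection from enc1
        self.attn_head = nn.Sequential(conv_block(16, 32), nn.AdaptiveAvgPool2d(1))
        # Fusion of the two branches' pooled features.
        self.classifier = nn.Linear(128 + 32, num_classes)

    def forward(self, face: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        g = self.maintain(face).flatten(1)                 # global semantic features
        e1 = self.enc1(mask)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # decoder with skip
        a = self.attn_head(d).flatten(1)                   # local attention features
        return self.classifier(torch.cat([g, a], dim=1))


if __name__ == "__main__":
    gray = np.random.randint(0, 256, (96, 96)).astype(np.float32)
    w_eye = region_correlation(gray, gray[20:40, 10:86])   # hypothetical eye region
    m = weighted_mask((96, 96), [(30, 32), (30, 64), (70, 48)], [w_eye, w_eye, 0.8])
    face = torch.randn(2, 3, 96, 96)
    mask = torch.from_numpy(m)[None, None].repeat(2, 1, 1, 1)
    print(OAENetSketch()(face, mask).shape)                # torch.Size([2, 7])
```

The fusion here is a simple concatenation of pooled features; other fusion choices (element-wise weighting of the maintenance features by the attention map, for example) would fit the same two-branch skeleton.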
ArticleNumber | 107694 |
Author | Liu, Shuaicheng; Zeng, Bing; Wang, Zhengning; Zeng, Fanwei |
Author_xml | 1. Wang, Zhengning (ORCID 0000-0003-4218-164X), School of Information and Communication Engineering, University of Electronic Science and Technology of China, China; 2. Zeng, Fanwei, Ant Financial Services Group, China; 3. Liu, Shuaicheng (ORCID 0000-0002-8815-5335, liushuaicheng@uestc.edu.cn), School of Information and Communication Engineering, University of Electronic Science and Technology of China, China; 4. Zeng, Bing, School of Information and Communication Engineering, University of Electronic Science and Technology of China, China |
ContentType | Journal Article |
Copyright | 2020 Elsevier Ltd |
DOI | 10.1016/j.patcog.2020.107694 |
Discipline | Computer Science |
EISSN | 1873-5142 |
ISSN | 0031-3203 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Facial expression recognition; Weighted mask; Attention; Oriented gradient |
Language | English |
ORCID | 0000-0002-8815-5335 0000-0003-4218-164X |
PublicationCentury | 2000 |
PublicationDate | April 2021 |
PublicationDateYYYYMMDD | 2021-04-01 |
PublicationDecade | 2020 |
PublicationTitle | Pattern recognition |
PublicationYear | 2021 |
Publisher | Elsevier Ltd |
StartPage | 107694 |
SubjectTerms | Attention; Facial expression recognition; Oriented gradient; Weighted mask |
Title | OAENet: Oriented attention ensemble for accurate facial expression recognition |
URI | https://dx.doi.org/10.1016/j.patcog.2020.107694 |
Volume | 112 |