Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order

Bibliographic Details
Published in: Pattern Recognition, Vol. 61, pp. 610–628
Main Authors: Lopes, André Teixeira; de Aguiar, Edilson; De Souza, Alberto F.; Oliveira-Santos, Thiago
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2017
Abstract: Facial expression recognition has been an active research area over the past 10 years, with growing applications including avatar animation, neuromarketing and sociable robots. Recognizing facial expressions is not an easy problem for machine learning methods, since people vary significantly in the way they show their expressions. Even images of the same person displaying the same facial expression can vary in brightness, background and pose, and these variations are amplified across different subjects (because of variations in shape, ethnicity, among others). Although facial expression recognition has been widely studied in the literature, few works perform a fair evaluation that avoids mixing subjects between the training and test sets. Hence, facial expression recognition is still a challenging problem in computer vision. In this work, we propose a simple solution for facial expression recognition that combines a Convolutional Neural Network with specific image pre-processing steps. Convolutional Neural Networks achieve better accuracy with large amounts of data; however, there are no publicly available datasets with sufficient data for facial expression recognition with deep architectures. Therefore, to tackle the problem, we apply pre-processing techniques that extract only expression-specific features from a face image, and we explore the presentation order of the samples during training. The experiments used to evaluate our technique were carried out on three widely used public databases (CK+, JAFFE and BU-3DFE). A study of the impact of each image pre-processing operation on the accuracy rate is presented. The proposed method achieves competitive results compared with other facial expression recognition methods (96.76% accuracy on the CK+ database), is fast to train, and allows for real-time facial expression recognition on standard computers.
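The abstract describes pre-processing steps that isolate expression-specific features from a face image before feeding it to the CNN. As a rough, hypothetical illustration only (the function names, the nearest-neighbour resampling, and the 32×32 target size are assumptions for this sketch, not details taken from the paper), such a pipeline could crop the face region, downsample it to a fixed size, and normalize intensities:

```python
import numpy as np

def normalize_intensity(face):
    """Zero-mean, unit-variance intensity normalization (illustrative)."""
    face = face.astype(np.float64)
    std = face.std()
    return (face - face.mean()) / (std if std > 0 else 1.0)

def crop_and_downsample(face, box, size=32):
    """Crop a face bounding box and resample it to size x size pixels.

    `box` is (top, left, height, width); nearest-neighbour index sampling
    stands in for whatever interpolation a real pipeline would use.
    """
    top, left, h, w = box
    patch = face[top:top + h, left:left + w]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return patch[np.ix_(rows, cols)]

def preprocess(face, box, size=32):
    """Chain the two steps: crop and downsample, then normalize intensities."""
    return normalize_intensity(crop_and_downsample(face, box, size))
```

Fixing the input size and removing brightness differences this way is what lets a comparatively simple CNN architecture cope with the small, heterogeneous datasets the abstract mentions.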
Highlights:
• A CNN-based approach for facial expression recognition.
• A set of pre-processing steps allowing for a simpler CNN architecture.
• A study of the impact of each pre-processing step on the accuracy.
• A study on lowering the impact of the sample presentation order during training.
• High facial expression recognition accuracy (96.76%) with real-time evaluation.
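One highlight concerns the order in which training samples are presented to the network. A common, generic way to keep any single ordering from dominating training (a sketch of the general idea, not the authors' exact procedure; `epoch_orders` is a hypothetical helper) is to reshuffle the sample indices each epoch with a seeded generator, repeating runs under different seeds to measure order sensitivity:

```python
import random

def epoch_orders(n_samples, n_epochs, seed=0):
    """Yield a fresh random presentation order for each training epoch.

    Reshuffling every epoch, and comparing runs trained with different
    seeds, is one simple way to quantify and reduce the influence of
    sample presentation order on the final model.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    for _ in range(n_epochs):
        rng.shuffle(indices)
        yield list(indices)
```

Each yielded list is a permutation of all sample indices, so every sample is still seen exactly once per epoch; only the order changes.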
Author details:
– André Teixeira Lopes (andreteixeiralopes@gmail.com), Department of Informatics, Universidade Federal do Espírito Santo (Campus Vitória), 514 Fernando Ferrari Avenue, 29075910 Goiabeiras, Vitória, Espírito Santo, Brazil
– Edilson de Aguiar (edilson.de.aguiar@gmail.com), Department of Computing and Electronics, Universidade Federal do Espírito Santo (Campus São Mateus), BR 101 North highway, km 60, 29932540 Bairro Litorâneo, São Mateus, Espírito Santo, Brazil
– Alberto F. De Souza (alberto@lcad.inf.ufes.br), Department of Informatics, Universidade Federal do Espírito Santo (Campus Vitória), 514 Fernando Ferrari Avenue, 29075910 Goiabeiras, Vitória, Espírito Santo, Brazil
– Thiago Oliveira-Santos (todsantos@inf.ufes.br), Department of Informatics, Universidade Federal do Espírito Santo (Campus Vitória), 514 Fernando Ferrari Avenue, 29075910 Goiabeiras, Vitória, Espírito Santo, Brazil
Copyright: 2016 Elsevier Ltd
DOI: 10.1016/j.patcog.2016.07.026
Discipline: Computer Science
EISSN: 1873-5142
ISSN: 0031-3203
Peer reviewed: Yes
Keywords: Facial expression recognition; Computer vision; Expression specific features; Convolutional Neural Networks; Machine learning
Page count: 19
References W. Liu, C. Song, Y. Wang, Facial expression recognition based on discriminative dictionary learning, in: 2012 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 1839–1842.
Y.-H. Byeon, K.-C. Kwak, Facial expression recognition using 3d convolutional neural network. International Journal of Advanced Computer Science and Applications(IJACSA), 5 (2014).
Chen, Wong, Chiu (bib18) 2011; 19
Y. Bengio, I.J. Goodfellow, A. Courville, Deep Learning, MIT Press, Cambridge, Massachusetts, USA, 2015.
Rivera, Castillo, Chae (bib43) 2013; 22
Y. Bengio, Y. LeCun, Scaling learning algorithms towards AI, in: L. Bottou, O. Chapelle, D. DeCoste, J. Weston (Eds.), Large-Scale Kernel Machines, MIT Press, Cambridge, Massachusetts, USA, 2007 (URL
S. Arivazhagan, R.A. Priyadharshini, S. Sowmiya, Facial expression recognition based on local directional number pattern and anfis classifier, in: 2014 International Conference on Communication and Network Technologies (ICCNT), 2014, pp. 62–67
M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, J. Movellan, Recognizing facial expression: machine learning and application to spontaneous behavior, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005 (CVPR 2005), vol. 2, 2005, pp. 568–573.
I. Song, H.-J. Kim, P.B. Jeon, Deep learning for real-time robust facial expression recognition on a smartphone, in: International Conference on Consumer Electronics (ICCE), Institute of Electrical & Electronics Engineers (IEEE), Las Vegas, NV, USA, 2014.
Gu, Xiang, Venkatesh, Huang, Lin (bib67) 2012; 45
P. Zhao-yi, W. Zhi-qiang, Z. Yu, Application of mean shift algorithm in real-time facial expression recognition, in: International Symposium on Computer Network and Multimedia Technology, 2009 (CNMT 2009), 2009, pp. 1–4.
P. Lucey, J. Cohn, T. Kanade, J. Saragih, Z. Ambadar, I. Matthews, The extended Cohn–Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression, in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2010, pp. 94–101.
Lin, Song, Quynh, He, Chen (bib28) 2012; 32
S. Demyanov, J. Bailey, R. Kotagiri, C. Leckie, Invariant Backpropagation: How To Train a Transformation-Invariant Neural Network
J.-J.J. Lien, T. Kanade, J. Cohn, C. Li, Detection, tracking, and classification of action units in facial expression, J. Robot. Auton. Syst. 31(3), 2000, 131-146
D.C. Cirean, U. Meier, J. Masci, L.M. Gambardella, J. Schmidhuber, Flexible, high performance convolutional neural networks for image classification, in: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI'11), vol. 2, AAAI Press, Barcelona, Catalonia, Spain, 2011, pp. 1237–1242.
X. Glorot, A. Bordes, Y. Bengio, Deep sparse rectifier neural networks, in: G.J. Gordon, D.B. Dunson (Eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), vol. 15, 2011, pp. 315–323.
Zavaschi, Britto, Oliveira, Koerich (bib44) 2013; 40
Garcia, Delakis (bib19) 2004; 26
S. Jain, C. Hu, J. Aggarwal, Facial expression recognition with temporal modeling of shapes, in: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011, pp. 1642–1649
X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in: Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10), Society for Artificial Intelligence and Statistics, Sardinia, Italy, 2010.
J.-I. Choi, C.-W. La, P.-K. Rhee, Y.-L. Bae, Face and eye location algorithms for visual user interface, in: Proceedings of First Signal Processing Society Workshop on Multimedia Signal Processing, Institute of Electrical & Electronics Engineers (IEEE), Princeton, NJ, USA, 1997.
M. Demirkus, D. Precup, J. Clark, T. Arbel, Multi-layer temporal graphical model for head pose estimation in real-world videos, in: 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 3392–3396.
M. Valstar, M. Pantic, Induced disgust, happiness and surprise: an addition to the mmi facial expression database, in: Proceedings of the 3rd International Workshop on EMOTION (Satellite of LREC): Corpora for Research on Emotion and Affect, 2010, p. 65.
P. Yang, Q. Liu, D. Metaxas, Boosting coded dynamic features for facial action units and facial expression recognition, in: IEEE Conference on Computer Vision and Pattern Recognition, 2007 (CVPR’07), 2007, pp. 1–6.
B. Fasel, Robust face analysis using convolutional neural networks, in: Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), vol. 2, 2002, pp. 40–43.
B. Fasel, Head-pose invariant facial expression recognition using convolutional neural networks, in: Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI 2002), 2002, pp. 529–534.
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86 (11) (1998) 2278–2324.
Liu, Li, Shan, Chen, AU-inspired deep networks for facial expression feature learning, Neurocomputing 159 (2015) 126–136.
A. Zafer, R. Nawaz, J. Iqbal, Face recognition with expression variation via robust ncc, in: 2013 IEEE 9th International Conference on Emerging Technologies (ICET), 2013, pp. 1–5.
C. Darwin, The Expression of the Emotions in Man and Animals, CreateSpace Independent Publishing Platform, 2012.
L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, D. Metaxas, Learning active facial patches for expression analysis, in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2562–2569.
J.Y.R. Cornejo, H. Pedrini, F. Florez-Revuelta, Facial expression recognition with occlusions based on geometric representation, in: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: Proceedings of the 20th Iberoamerican Congress (CIARP 2015), Montevideo, Uruguay, November 9–12, 2015, Springer International Publishing, Cham, 2015, pp. 263–270.
Lee, Baddar, Ro, Collaborative expression representation using peak expression and intra class variation face images for practical subject-independent emotion recognition in videos, Pattern Recognit. 54 (2016) 52–67.
L. Bottou, Stochastic Gradient Descent Tricks, Springer, New York, NY, USA, 2012.
P. Liu, S. Han, Z. Meng, Y. Tong, Facial expression recognition via a boosted deep belief network, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1805–1812.
A. Dhall, R. Goecke, S. Lucey, T. Gedeon, Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark, in: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, Barcelona, Catalonia, Spain, 2011, pp. 2106–2112.
Zhang, Zhang, Ma, Guan, Gong, Multimodal learning for facial expression recognition, Pattern Recognit. 48 (2015) 3191–3202.
S. Cheng, A. Asthana, S. Zafeiriou, J. Shen, M. Pantic, Real-time generic face tracking in the wild with cuda, in: Proceedings of the 5th ACM Multimedia Systems Conference, ACM, Singapore, Singapore 2014, pp. 148–151.
F.D. la Torre, W.S. Chu, X. Xiong, F. Vicente, X. Ding, J. Cohn, Intraface, in: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1, 2015, pp. 1–8.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional Architecture for Fast Feature Embedding.
C.-D. Caleanu, Face expression recognition: a brief overview of the last decade, in: 2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI), 2013, pp. 157–161.
Siddiqi, Ali, Idris, Khan, Kim, Whang, Lee, Human facial expression recognition using curvelet feature extraction and normalized mutual information feature selection, Multimed. Tools Appl. 75 (2014) 935–959.
P. Ekman, W.V. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, 1978.
A.T. Lopes, E. de Aguiar, T.O. Santos, A facial expression recognition system using convolutional networks, in: 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Institute of Electrical & Electronics Engineers (IEEE), Salvador, Bahia, Brasil, 2015.
A. Asthana, S. Zafeiriou, S. Cheng, M. Pantic, Robust discriminative response map fitting with constrained local models, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3444–3451.
P. Liu, M. Reale, L. Yin, 3d head pose estimation based on scene flow and generic head model, in: 2012 IEEE International Conference on Multimedia and Expo (ICME), 2012, pp. 794–799.
Meguid, Levine, Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers, IEEE Trans. Affect. Comput. 5 (2) (2014) 141–154.
N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15 (1) (2014) 1929–1958.
M.J. Lyons, J. Budynek, S. Akamatsu, Automatic classification of single facial images, IEEE Trans. Pattern Anal. Mach. Intell. 21 (12) (1999) 1357–1362.
Y. Wu, H. Liu, H. Zha, Modeling facial expression space for recognition, in: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005, pp. 1968–1973.
S. Rifai, Y. Bengio, A. Courville, P. Vincent, M. Mirza, Disentangling factors of variation for facial expression recognition, in: A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, C. Schmid (Eds.), Computer Vision – ECCV 2012, Lecture Notes in Computer Science, vol. 7577, Springer, Berlin Heidelberg, 2012, pp. 808–822.
Patil, Kothari, Bhurchandi, Expression invariant face recognition using local binary patterns and contourlet transform, Optik - Int. J. Light Electron Opt. 127 (2016) 2670–2678.
B.A. Wandell, Foundations of Vision, 1st ed., Sinauer Associates Inc, Sunderland, Mass, 1995.
Maalej, Amor, Daoudi, Srivastava, Berretti, Shape analysis of local facial patches for 3d facial expression recognition, Pattern Recognit. 44 (8) (2011) 1581–1589.
T. Kanade, Y. Tian, J.F. Cohn, Comprehensive database for facial expression analysis, in: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000 (FG’00), IEEE Computer Society, Washington, DC, USA, 2000, p. 46.
S.Z. Li, A.K. Jain, Handbook of Face Recognition, Springer Science & Business Media, Secaucus, NJ, USA, 2011.
C. Turan, K.M. Lam, Region-based feature fusion for facial-expression recognition, in: 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 5966–5970.
M. Xue, A. Mian, W. Liu, L. Li, Fully automatic 3d facial expression recognition using local depth features, in: IEEE Winter Conference on Applications of Computer Vision, 2014, pp. 1096–1103.
J.M. Saragih, S. Lucey, J.F. Cohn, Deformable model fitting by regularized landmark mean-shift, Int. J. Comput. Vis. 91 (2) (2010) 200–215.
J. Cohn, A. Zlochower, A Computerized Analysis of Facial Expression: Feasibility of Automated Discrimination, vol. 2, American Psychological Society, 1995, p. 6.
Zhang, Yi, Lei, Li, Regularized transfer boosting for face detection across spectrum, IEEE Signal Process. Lett. 19 (3) (2012) 131–134.
Matsugu, Mori, Mitari, Kaneda, Subject independent facial expression recognition with robust face detection using a convolutional neural network, Neural Netw. 16 (2003) 555–559.
M. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, Coding facial expressions with gabor wavelets, in: Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 200–205.
Fan (10.1016/j.patcog.2016.07.026_bib16) 2015; 48
Utgoff (10.1016/j.patcog.2016.07.026_bib34) 2002; 14
10.1016/j.patcog.2016.07.026_bib35
10.1016/j.patcog.2016.07.026_bib79
10.1016/j.patcog.2016.07.026_bib38
10.1016/j.patcog.2016.07.026_bib37
Garcia (10.1016/j.patcog.2016.07.026_bib19) 2004; 26
10.1016/j.patcog.2016.07.026_bib83
10.1016/j.patcog.2016.07.026_bib82
10.1016/j.patcog.2016.07.026_bib40
10.1016/j.patcog.2016.07.026_bib42
10.1016/j.patcog.2016.07.026_bib45
Patil (10.1016/j.patcog.2016.07.026_bib39) 2016; 127
Siddiqi (10.1016/j.patcog.2016.07.026_bib78) 2015; 24
Liu (10.1016/j.patcog.2016.07.026_bib12) 2015; 159
Lee (10.1016/j.patcog.2016.07.026_bib69) 2016; 54
10.1016/j.patcog.2016.07.026_bib47
Shan (10.1016/j.patcog.2016.07.026_bib8) 2009; 27
10.1016/j.patcog.2016.07.026_bib46
Siddiqi (10.1016/j.patcog.2016.07.026_bib81) 2014; 75
10.1016/j.patcog.2016.07.026_bib48
Ali (10.1016/j.patcog.2016.07.026_bib13) 2016; 55
Gu (10.1016/j.patcog.2016.07.026_bib67) 2012; 45
10.1016/j.patcog.2016.07.026_bib50
Chen (10.1016/j.patcog.2016.07.026_bib18) 2011; 19
10.1016/j.patcog.2016.07.026_bib51
10.1016/j.patcog.2016.07.026_bib9
10.1016/j.patcog.2016.07.026_bib10
10.1016/j.patcog.2016.07.026_bib54
Lecun (10.1016/j.patcog.2016.07.026_bib36) 1998; 86
10.1016/j.patcog.2016.07.026_bib53
10.1016/j.patcog.2016.07.026_bib7
10.1016/j.patcog.2016.07.026_bib56
10.1016/j.patcog.2016.07.026_bib6
10.1016/j.patcog.2016.07.026_bib11
10.1016/j.patcog.2016.07.026_bib55
Sha (10.1016/j.patcog.2016.07.026_bib73) 2011; 74
10.1016/j.patcog.2016.07.026_bib4
10.1016/j.patcog.2016.07.026_bib3
10.1016/j.patcog.2016.07.026_bib2
10.1016/j.patcog.2016.07.026_bib1
Zavaschi (10.1016/j.patcog.2016.07.026_bib44) 2013; 40
Mery (10.1016/j.patcog.2016.07.026_bib77) 2015; 68
Maalej (10.1016/j.patcog.2016.07.026_bib74) 2011; 44
10.1016/j.patcog.2016.07.026_bib14
10.1016/j.patcog.2016.07.026_bib58
Rivera (10.1016/j.patcog.2016.07.026_bib43) 2013; 22
10.1016/j.patcog.2016.07.026_bib57
10.1016/j.patcog.2016.07.026_bib15
Matsugu (10.1016/j.patcog.2016.07.026_bib32) 2003; 16
Srivastava (10.1016/j.patcog.2016.07.026_bib52) 2014; 15
10.1016/j.patcog.2016.07.026_bib61
Lin (10.1016/j.patcog.2016.07.026_bib28) 2012; 32
Wang (10.1016/j.patcog.2016.07.026_bib41) 2016; 174
10.1016/j.patcog.2016.07.026_bib60
10.1016/j.patcog.2016.07.026_bib63
10.1016/j.patcog.2016.07.026_bib62
10.1016/j.patcog.2016.07.026_bib21
10.1016/j.patcog.2016.07.026_bib65
10.1016/j.patcog.2016.07.026_bib64
10.1016/j.patcog.2016.07.026_bib23
10.1016/j.patcog.2016.07.026_bib22
Lyons (10.1016/j.patcog.2016.07.026_bib5) 1999; 21
Saragih (10.1016/j.patcog.2016.07.026_bib59) 2010; 91
Ekman (10.1016/j.patcog.2016.07.026_bib66) 1978
10.1016/j.patcog.2016.07.026_bib25
10.1016/j.patcog.2016.07.026_bib24
10.1016/j.patcog.2016.07.026_bib68
10.1016/j.patcog.2016.07.026_bib27
10.1016/j.patcog.2016.07.026_bib26
10.1016/j.patcog.2016.07.026_bib29
Zhang (10.1016/j.patcog.2016.07.026_bib17) 2015; 48
Zhang (10.1016/j.patcog.2016.07.026_bib20) 2012; 19
Siddiqi (10.1016/j.patcog.2016.07.026_bib80) 2014; 21
10.1016/j.patcog.2016.07.026_bib72
10.1016/j.patcog.2016.07.026_bib71
10.1016/j.patcog.2016.07.026_bib30
10.1016/j.patcog.2016.07.026_bib76
10.1016/j.patcog.2016.07.026_bib31
10.1016/j.patcog.2016.07.026_bib33
Meguid (10.1016/j.patcog.2016.07.026_bib49) 2014; 5
10.1016/j.patcog.2016.07.026_bib70
References_xml – volume: 45
  start-page: 80
  year: 2012
  end-page: 91
  ident: bib67
  article-title: Facial expression recognition using radial encoding of local gabor features and classifier synthesis
  publication-title: Pattern Recognit.
– reference: S. Jain, C. Hu, J. Aggarwal, Facial expression recognition with temporal modeling of shapes, in: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011, pp. 1642–1649,
– volume: 40
  start-page: 646
  year: 2013
  end-page: 655
  ident: bib44
  article-title: Fusion of feature sets and classifiers for facial expression recognition
  publication-title: Expert Syst. Appl.
– reference: Y.-H. Byeon, K.-C. Kwak, Facial expression recognition using 3d convolutional neural network. International Journal of Advanced Computer Science and Applications(IJACSA), 5 (2014).
– reference: J.-I. Choi, C.-W. La, P.-K. Rhee, Y.-L. Bae, Face and eye location algorithms for visual user interface, in: Proceedings of First Signal Processing Society Workshop on Multimedia Signal Processing, Institute of Electrical & Electronics Engineers (IEEE), Princeton, NJ, USA, 1997.
– reference: P. Yang, Q. Liu, D. Metaxas, Boosting coded dynamic features for facial action units and facial expression recognition, in: IEEE Conference on Computer Vision and Pattern Recognition, 2007 (CVPR’07), 2007, pp. 1–6.
– reference: P. Burkert, F. Trier, M.Z. Afzal, A. Dengel, M. Liwicki, Dexpression: Deep Convolutional Neural Network for Expression Recognition, CoRR abs/1509.05371 (URL 〈
– volume: 22
  start-page: 1740
  year: 2013
  end-page: 1752
  ident: bib43
  article-title: Local directional number pattern for face analysis
  publication-title: IEEE Trans. Image Process.
– reference: M. Xue, A. Mian, W. Liu, L. Li, Fully automatic 3d facial expression recognition using local depth features, in: IEEE Winter Conference on Applications of Computer Vision, 2014, pp. 1096–1103
– reference: P. Lucey, J. Cohn, T. Kanade, J. Saragih, Z. Ambadar, I. Matthews, The extended Cohn–Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression, in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2010, pp. 94–101.
– volume: 21
  start-page: 541
  year: 2014
  end-page: 555
  ident: bib80
  article-title: Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection
  publication-title: Multimed. Syst.
– reference: C. Darwin, The Expression of the Emotions in Man and Animals, CreateSpace Independent Publishing Platform, 2012.
– reference: C. Turan, K. M. Lam, Region-based feature fusion for facial-expression recognition, in: 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 5966–5970 (
– year: 1978
  ident: bib66
  article-title: Facial Action Coding System: A Technique for the Measurement of Facial Movement
– reference: J.Y.R. Cornejo, H. Pedrini, F. Florez-Revuelta, Facial expression recognition with occlusions based on geometric representation, in: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: Proceedings of the 20th Iberoamerican Congress (CIARP 2015), Montevideo, Uruguay, November 9–12, 2015, Springer International Publishing, Cham, 2015, pp. 263–270.
– reference: C.-D. Caleanu, Face expression recognition: a brief overview of the last decade, in: 2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI), 2013, pp. 157–161.
– volume: 74
  start-page: 2135
  year: 2011
  end-page: 2141
  ident: bib73
  article-title: Feature level analysis for 3d facial expression recognition
  publication-title: Neurocomputing
– volume: 21
  start-page: 1357
  year: 1999
  end-page: 1362
  ident: bib5
  article-title: Automatic classification of single facial images
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: P. Liu, M. Reale, L. Yin, 3d head pose estimation based on scene flow and generic head model, in: 2012 IEEE International Conference on Multimedia and Expo (ICME), 2012, pp. 794–799.
– reference: M. Valstar, M. Pantic, Induced disgust, happiness and surprise: an addition to the mmi facial expression database, in: Proceedings of the 3rd International Workshop on EMOTION (Satellite of LREC): Corpora for Research on Emotion and Affect, 2010, p. 65.
– volume: 26
  start-page: 1408
  year: 2004
  end-page: 1423
  ident: bib19
  article-title: Convolutional face finder
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: P. Zhao-yi, W. Zhi-qiang, Z. Yu, Application of mean shift algorithm in real-time facial expression recognition, in: International Symposium on Computer Network and Multimedia Technology, 2009 (CNMT 2009), 2009, pp. 1–4.
– reference: A.T. Lopes, E. de Aguiar, T.O. Santos, A facial expression recognition system using convolutional networks, in: 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Institute of Electrical & Electronics Engineers (IEEE), Salvador, Bahia, Brasil, 2015.
– reference: F.D. la Torre, W.S. Chu, X. Xiong, F. Vicente, X. Ding, J. Cohn, Intraface, in: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1, 2015, pp. 1–8 (
– reference: B.A. Wandell, Foundations of Vision, 1st ed., Sinauer Associates Inc, Sunderland, Mass, 1995.
– reference: X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in: Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10), Society for Artificial Intelligence and Statistics, Sardinia, Italy, 2010.
– reference: Z. Zhang, M. Lyons, M. Schuster, S. Akamatsu, Comparison between geometry-based and gabor-wavelets-based facial expression recognition using multi-layer perceptron, in: 1998 Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 454–459.
– reference: F. Beat, Head-pose invariant facial expression recognition using convolutional neural networks, in: Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces, 2002, 2002, pp. 529–534.
– volume: 75
  start-page: 935
  year: 2014
  end-page: 959
  ident: bib81
  article-title: Human facial expression recognition using curvelet feature extraction and normalized mutual information feature selection
  publication-title: Multimed. Tools Appl.
– volume: 174
  start-page: 756
  year: 2016
  end-page: 766
  ident: bib41
  article-title: Facial expression recognition using sparse local fisher discriminant analysis
  publication-title: Neurocomputing
– reference: A. Asthana, S. Zafeiriou, S. Cheng, M. Pantic, Robust discriminative response map fitting with constrained local models, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3444–3451.
– reference: M. Liu, S. Shan, R. Wang, X. Chen, Learning expressionlets on spatio-temporal manifold for dynamic facial expression recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1749–1756.
– volume: 55
  start-page: 14
  year: 2016
  end-page: 27
  ident: bib13
  article-title: Boosted NNE collections for multicultural facial expression recognition
  publication-title: Pattern Recognit.
– reference: O.M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition, in: British Machine Vision Conference, 2015, 46-53.
– reference: Y. Bengio, Y. LeCun, Scaling learning algorithms towards AI, in: L. Bottou, O. Chapelle, D. DeCoste, J. Weston (Eds.), Large-Scale Kernel Machines, MIT Press, Cambridge, Massachusetts, USA, 2007 (URL 〈
– reference: Y. Bengio, I.J. Goodfellow, A. Courville, Deep Learning, MIT Press, Cambridge, Massachusetts, USA, 2015.
– reference: S. Arivazhagan, R.A. Priyadharshini, S. Sowmiya, Facial expression recognition based on local directional number pattern and anfis classifier, in: 2014 International Conference on Communication and Network Technologies (ICCNT), 2014, pp. 62–67 (
– reference: L. Bottou, Stochastic Gradient Descent Tricks, Springer, New York, NY, USA. 2012.
– reference: L. Yin, X. Wei, Y. Sun, J. Wang, M. Rosato, A 3d facial expression database for facial behavior research, in: 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Institute of Electrical & Electronics Engineers (IEEE), Southampton, UK, 2006.
– volume: 27
  start-page: 803
  year: 2009
  end-page: 816
  ident: bib8
  article-title: Facial expression recognition based on local binary patterns
  publication-title: Image Vis. Comput.
– reference: W.W. Kim, S. Park, J. Hwang, S. Lee, Automatic head pose estimation from a single camera using projective geometry, in: 2011 8th International Conference on Information, Communications and Signal Processing (ICICS), 2011, pp. 1–5.
– reference: A. Zafer, R. Nawaz, J. Iqbal, Face recognition with expression variation via robust ncc, in: 2013 IEEE 9th International Conference on Emerging Technologies (ICET), 2013, pp. 1–5
– reference: J. Cohn A. Zlochower, A Computerized Analysis of Facial Expression: Feasibility of Automated Discrimination, vol. 2. American Psychological Society, 1995, p. 6.
– volume: 19
  start-page: 131
  year: 2012
  end-page: 134
  ident: bib20
  article-title: Regularized transfer boosting for face detection across spectrum
  publication-title: IEEE Signal Process. Lett.
– reference: P. Liu, S. Han, Z. Meng, Y. Tong, Facial expression recognition via a boosted deep belief network, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1805–1812.
– reference: M. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, Coding facial expressions with gabor wavelets, in: Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, 1998, pp. 200–205.
– volume: 68
  start-page: 260
  year: 2015
  end-page: 269
  ident: bib77
  article-title: Automatic facial attribute analysis via adaptive sparse representation of random patches
  publication-title: Pattern Recognit. Lett.
– reference: J.-J.J. Lien, T. Kanade, J. Cohn, C. Li, Detection, tracking, and classification of action units in facial expression, J. Robot. Auton. Syst. 31(3), 2000, 131-146
– reference: Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional Architecture for Fast Feature Embedding (
– reference: M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, J. Movellan, Recognizing facial expression: machine learning and application to spontaneous behavior, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005 (CVPR 2005), vol. 2, 2005, pp. 568–573.
– reference: T. Kanade, Y. Tian, J.F. Cohn, Comprehensive database for facial expression analysis, in: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000 (FG’00), IEEE Computer Society, Washington, DC, USA, 2000, p. 46.
– volume: 19
  start-page: 1937
  year: 2011
  end-page: 1948
  ident: bib18
  article-title: A 0.64
  publication-title: IEEE Trans. Very Large Scale Integr. (VLSI) Syst.
– reference: W. Liu, C. Song, Y. Wang, Facial expression recognition based on discriminative dictionary learning, in: 2012 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 1839–1842.
– reference: G. Li, X. Cai, X. Li, Y. Liu, An efficient face normalization algorithm based on eyes detection, in: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Institute of Electrical & Electronics Engineers (IEEE), Beijing, China, 2006.
– volume: 5
  start-page: 141
  year: 2014
  end-page: 154
  ident: bib49
  article-title: Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers
  publication-title: IEEE Trans. Affect. Comput.
– reference: J.M. Girard, J.F. Cohn, L.A. Jeni, S. Lucey, F.D. la Torre, How much training data for facial action unit detection?, in: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1, 2015, pp. 1–8 (
– reference: P. Simard, D. Steinkraus, J.C. Platt, Best practices for convolutional neural networks applied to visual document analysis, in: 2003 Proceedings of the Seventh International Conference on Document Analysis and Recognition, 2003, pp. 958–963.
– volume: 44
  start-page: 1581
  year: 2011
  end-page: 1589
  ident: bib74
  article-title: Shape analysis of local facial patches for 3d facial expression recognition
  publication-title: Pattern Recognit.
– reference: S. Cheng, A. Asthana, S. Zafeiriou, J. Shen, M. Pantic, Real-time generic face tracking in the wild with cuda, in: Proceedings of the 5th ACM Multimedia Systems Conference, ACM, Singapore, Singapore 2014, pp. 148–151.
– volume: 91
  start-page: 200
  year: 2010
  end-page: 215
  ident: bib59
  article-title: Deformable model fitting by regularized landmark mean-shift
  publication-title: Int. J. Comput. Vision.
– reference: M. Demirkus, D. Precup, J. Clark, T. Arbel, Multi-layer temporal graphical model for head pose estimation in real-world videos, in: 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 3392–3396.
– volume: 15
  start-page: 1929
  year: 2014
  end-page: 1958
  ident: bib52
  article-title: Dropout
  publication-title: J. Mach. Learn. Res.
– reference: 〉).
– reference: B. Fasel, Robust face analysis using convolutional neural networks, in: Proceedings of the 16th International Conference on Pattern Recognition, 2002, vol. 2, 2002, pp. 40–43.
– reference: X. Glorot, A. Bordes, Y. Bengio, Deep sparse rectifier neural networks, in: G.J. Gordon, D.B. Dunson (Eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), vol. 15, 2011, pp. 315–323.
– reference: S. Rifai, Y. Bengio, A. Courville, P. Vincent, M. Mirza, Disentangling factors of variation for facial expression recognition, in: A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, C. Schmid (Eds.), Computer Vision – ECCV 2012, Lecture Notes in Computer Science, vol. 7577, Springer, Berlin Heidelberg, 2012, pp. 808–822.
– volume: 14
  start-page: 2497
  year: 2002
  end-page: 2529
  ident: bib34
  article-title: Many-layered learning
  publication-title: Neural Comput.
– reference: Y. Wu, H. Liu, H. Zha, Modeling facial expression space for recognition, in: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005 (IROS 2005), 2005, pp. 1968–1973.
– volume: 48
  start-page: 3407
  year: 2015
  end-page: 3416
  ident: bib16
  article-title: A spatial–temporal framework based on histogram of gradients and optical flow for facial expression recognition in video sequences
  publication-title: Pattern Recognit.
– reference: A. Dhall, R. Goecke, S. Lucey, T. Gedeon, Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark, in: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, Barcelona, Catalonia, Spain, 2011, pp. 2106–2112.
– reference: L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, D. Metaxas, Learning active facial patches for expression analysis, in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2562–2569.
– volume: 24
  start-page: 1386
  year: 2015
  end-page: 1398
  ident: bib78
  article-title: Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields
  publication-title: IEEE Trans. Image Process.
– reference: I. Song, H.-J. Kim, P.B. Jeon, Deep learning for real-time robust facial expression recognition on a smartphone, in: International Conference on Consumer Electronics (ICCE), Institute of Electrical & Electronics Engineers (IEEE), Las Vegas, NV, USA, 2014.
– reference: .
– reference: S. Demyanov, J. Bailey, R. Kotagiri, C. Leckie, Invariant Backpropagation: How To Train a Transformation-Invariant Neural Network (
– volume: 159
  start-page: 126
  year: 2015
  end-page: 136
  ident: bib12
  article-title: Au-inspired deep networks for facial expression feature learning
  publication-title: Neurocomputing
– volume: 48
  start-page: 3191
  year: 2015
  end-page: 3202
  ident: bib17
  article-title: Multimodal learning for facial expression recognition
  publication-title: Pattern Recognit.
– volume: 54
  start-page: 52
  year: 2016
  end-page: 67
  ident: bib69
  article-title: Collaborative expression representation using peak expression and intra class variation face images for practical subject-independent emotion recognition in videos
  publication-title: Pattern Recognit.
– reference: S.Z. Li, A.K. Jain, Handbook of Face Recognition, Springer Science & Business Media, Secaucus, NJ, USA, 2011.
– volume: 16
  start-page: 555
  year: 2003
  end-page: 559
  ident: bib32
  article-title: Subject independent facial expression recognition with robust face detection using a convolutional neural network
  publication-title: Neural Netw.: Off. J. Int. Neural Netw. Soc.
– volume: 86
  start-page: 2278
  year: 1998
  end-page: 2324
  ident: bib36
  publication-title: Gradient-based Learn. Appl. Doc. Recognit.
– reference: D.C. Cirean, U. Meier, J. Masci, L.M. Gambardella, J. Schmidhuber, Flexible, high performance convolutional neural networks for image classification, in: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI'11), vol. 2, AAAI Press, Barcelona, Catalonia, Spain, 2011, pp. 1237–1242.
– volume: 32
  start-page: 76
  year: 2012
  end-page: 88
  ident: bib28
  article-title: Sparse coding for flexible, robust 3d facial-expression synthesis
  publication-title: IEEE Comput. Graph. Appl.
– volume: 127
  start-page: 2670
  year: 2016
  end-page: 2678
  ident: bib39
  article-title: Expression invariant face recognition using local binary patterns and contourlet transform
  publication-title: Opt.-Int. J. Light Electron Opt.
– volume: 26
  start-page: 1408
  issue: 11
  year: 2004
  ident: 10.1016/j.patcog.2016.07.026_bib19
  article-title: Convolutional face finder
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2004.97
– ident: 10.1016/j.patcog.2016.07.026_bib30
  doi: 10.1109/ICPR.2002.1048231
– ident: 10.1016/j.patcog.2016.07.026_bib48
  doi: 10.1109/SACI.2013.6608958
– ident: 10.1016/j.patcog.2016.07.026_bib56
  doi: 10.1109/ICDAR.2003.1227801
– volume: 54
  start-page: 52
  year: 2016
  ident: 10.1016/j.patcog.2016.07.026_bib69
  article-title: Collaborative expression representation using peak expression and intra class variation face images for practical subject-independent emotion recognition in videos
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2015.12.016
– ident: 10.1016/j.patcog.2016.07.026_bib63
  doi: 10.1007/978-3-642-35289-8_25
– volume: 91
  start-page: 200
  issue: 2
  year: 2010
  ident: 10.1016/j.patcog.2016.07.026_bib59
  article-title: Deformable model fitting by regularized landmark mean-shift
  publication-title: Int. J. Comput. Vision.
  doi: 10.1007/s11263-010-0380-4
– volume: 159
  start-page: 126
  year: 2015
  ident: 10.1016/j.patcog.2016.07.026_bib12
  article-title: Au-inspired deep networks for facial expression feature learning
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2015.02.011
– ident: 10.1016/j.patcog.2016.07.026_bib1
  doi: 10.1109/IROS.2005.1545532
– ident: 10.1016/j.patcog.2016.07.026_bib3
  doi: 10.1007/978-0-85729-932-1
– ident: 10.1016/j.patcog.2016.07.026_bib11
– volume: 5
  start-page: 141
  issue: 2
  year: 2014
  ident: 10.1016/j.patcog.2016.07.026_bib49
  article-title: Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2014.2317711
– ident: 10.1016/j.patcog.2016.07.026_bib54
– ident: 10.1016/j.patcog.2016.07.026_bib50
  doi: 10.1109/ICIP.2014.7026204
– volume: 19
  start-page: 131
  issue: 3
  year: 2012
  ident: 10.1016/j.patcog.2016.07.026_bib20
  article-title: Regularized transfer boosting for face detection across spectrum
  publication-title: IEEE Signal Process. Lett.
  doi: 10.1109/LSP.2011.2171949
– ident: 10.1016/j.patcog.2016.07.026_bib68
  doi: 10.1109/CVPR.2012.6247974
– volume: 68
  start-page: 260
  issue: Part 2
  year: 2015
  ident: 10.1016/j.patcog.2016.07.026_bib77
  article-title: Automatic facial attribute analysis via adaptive sparse representation of random patches
  publication-title: Pattern Recognit. Lett.
  doi: 10.1016/j.patrec.2015.05.005
– volume: 48
  start-page: 3407
  issue: 11
  year: 2015
  ident: 10.1016/j.patcog.2016.07.026_bib16
  article-title: A spatial–temporal framework based on histogram of gradients and optical flow for facial expression recognition in video sequences
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2015.04.025
– ident: 10.1016/j.patcog.2016.07.026_bib61
  doi: 10.1145/2557642.2579369
– volume: 24
  start-page: 1386
  issue: 4
  year: 2015
  ident: 10.1016/j.patcog.2016.07.026_bib78
  article-title: Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2015.2405346
– volume: 40
  start-page: 646
  issue: 2
  year: 2013
  ident: 10.1016/j.patcog.2016.07.026_bib44
  article-title: Fusion of feature sets and classifiers for facial expression recognition
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2012.07.074
– ident: 10.1016/j.patcog.2016.07.026_bib76
  doi: 10.1109/CVPR.2014.226
– ident: 10.1016/j.patcog.2016.07.026_bib2
  doi: 10.1017/CBO9781139833813
– ident: 10.1016/j.patcog.2016.07.026_bib37
– volume: 174
  start-page: 756
  issue: Part B
  year: 2016
  ident: 10.1016/j.patcog.2016.07.026_bib41
  article-title: Facial expression recognition using sparse local fisher discriminant analysis
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2015.09.083
– ident: 10.1016/j.patcog.2016.07.026_bib70
– volume: 21
  start-page: 1357
  issue: 12
  year: 1999
  ident: 10.1016/j.patcog.2016.07.026_bib5
  article-title: Automatic classification of single facial images
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/34.817413
– ident: 10.1016/j.patcog.2016.07.026_bib82
  doi: 10.1109/AFGR.2000.840611
– volume: 27
  start-page: 803
  issue: 6
  year: 2009
  ident: 10.1016/j.patcog.2016.07.026_bib8
  article-title: Facial expression recognition based on local binary patterns
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2008.08.005
– ident: 10.1016/j.patcog.2016.07.026_bib64
– ident: 10.1016/j.patcog.2016.07.026_bib15
  doi: 10.1016/S0921-8890(99)00103-7
– ident: 10.1016/j.patcog.2016.07.026_bib53
  doi: 10.1109/FG.2015.7163106
– ident: 10.1016/j.patcog.2016.07.026_bib22
  doi: 10.1109/ICME.2012.61
– ident: 10.1016/j.patcog.2016.07.026_bib47
– volume: 44
  start-page: 1581
  issue: 8
  year: 2011
  ident: 10.1016/j.patcog.2016.07.026_bib74
  article-title: Shape analysis of local facial patches for 3d facial expression recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2011.02.012
– ident: 10.1016/j.patcog.2016.07.026_bib26
  doi: 10.1109/CVPR.2007.383059
– ident: 10.1016/j.patcog.2016.07.026_bib40
  doi: 10.1007/978-3-319-25751-8_32
– ident: 10.1016/j.patcog.2016.07.026_bib33
  doi: 10.7551/mitpress/7496.003.0016
– volume: 22
  start-page: 1740
  issue: 5
  year: 2013
  ident: 10.1016/j.patcog.2016.07.026_bib43
  article-title: Local directional number pattern for face analysis
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2012.2235848
– ident: 10.1016/j.patcog.2016.07.026_bib6
– ident: 10.1016/j.patcog.2016.07.026_bib45
  doi: 10.1109/SIBGRAPI.2015.14
– volume: 14
  start-page: 2497
  issue: 10
  year: 2002
  ident: 10.1016/j.patcog.2016.07.026_bib34
  article-title: Many-layered learning
  publication-title: Neural Comput.
  doi: 10.1162/08997660260293319
– ident: 10.1016/j.patcog.2016.07.026_bib10
  doi: 10.1109/ICCE.2014.6776135
– ident: 10.1016/j.patcog.2016.07.026_bib79
  doi: 10.1109/ICET.2013.6743520
– ident: 10.1016/j.patcog.2016.07.026_bib55
  doi: 10.1109/ICCVW.2011.6130508
– ident: 10.1016/j.patcog.2016.07.026_bib4
  doi: 10.1109/CVPRW.2010.5543262
– volume: 32
  start-page: 76
  issue: 2
  year: 2012
  ident: 10.1016/j.patcog.2016.07.026_bib28
  article-title: Sparse coding for flexible, robust 3d facial-expression synthesis
  publication-title: IEEE Comput. Graph. Appl.
  doi: 10.1109/MCG.2012.41
– ident: 10.1016/j.patcog.2016.07.026_bib14
  doi: 10.14569/IJACSA.2014.051215
– ident: 10.1016/j.patcog.2016.07.026_bib7
  doi: 10.1109/CVPR.2014.233
– ident: 10.1016/j.patcog.2016.07.026_bib71
– ident: 10.1016/j.patcog.2016.07.026_bib21
  doi: 10.1109/CVPR.2005.297
– ident: 10.1016/j.patcog.2016.07.026_bib25
  doi: 10.1109/AFGR.1998.670990
– volume: 86
  start-page: 2278
  issue: 11
  year: 1998
  ident: 10.1016/j.patcog.2016.07.026_bib36
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proc. IEEE
– volume: 15
  start-page: 1929
  issue: 1
  year: 2014
  ident: 10.1016/j.patcog.2016.07.026_bib52
  article-title: Dropout: a simple way to prevent neural networks from overfitting
  publication-title: J. Mach. Learn. Res.
– ident: 10.1016/j.patcog.2016.07.026_bib58
  doi: 10.1109/IROS.2006.281791
– volume: 45
  start-page: 80
  issue: 1
  year: 2012
  ident: 10.1016/j.patcog.2016.07.026_bib67
  article-title: Facial expression recognition using radial encoding of local gabor features and classifier synthesis
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2011.05.006
– volume: 16
  start-page: 555
  issue: 5
  year: 2003
  ident: 10.1016/j.patcog.2016.07.026_bib32
  article-title: Subject independent facial expression recognition with robust face detection using a convolutional neural network
  publication-title: Neural Netw.: Off. J. Int. Neural Netw. Soc.
  doi: 10.1016/S0893-6080(03)00115-1
– ident: 10.1016/j.patcog.2016.07.026_bib27
  doi: 10.1109/ICCVW.2011.6130446
– ident: 10.1016/j.patcog.2016.07.026_bib72
  doi: 10.1109/WACV.2014.6835736
– ident: 10.1016/j.patcog.2016.07.026_bib29
  doi: 10.1007/978-3-642-33783-3_58
– ident: 10.1016/j.patcog.2016.07.026_bib65
– ident: 10.1016/j.patcog.2016.07.026_bib46
– ident: 10.1016/j.patcog.2016.07.026_bib23
  doi: 10.1109/ICICS.2011.6173539
– ident: 10.1016/j.patcog.2016.07.026_bib9
– volume: 74
  start-page: 2135
  issue: 12–13
  year: 2011
  ident: 10.1016/j.patcog.2016.07.026_bib73
  article-title: Feature level analysis for 3d facial expression recognition
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2011.01.008
– ident: 10.1016/j.patcog.2016.07.026_bib51
  doi: 10.1109/AFGR.1998.670949
– volume: 21
  start-page: 541
  issue: 6
  year: 2014
  ident: 10.1016/j.patcog.2016.07.026_bib80
  article-title: Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection
  publication-title: Multimed. Syst.
  doi: 10.1007/s00530-014-0400-2
– volume: 19
  start-page: 1937
  issue: 11
  year: 2011
  ident: 10.1016/j.patcog.2016.07.026_bib18
  article-title: A 0.64 mm² real-time cascade face detection design based on reduced two-field extraction
  publication-title: IEEE Trans. Very Large Scale Integr. (VLSI) Syst.
  doi: 10.1109/TVLSI.2010.2069575
– volume: 75
  start-page: 935
  issue: 2
  year: 2014
  ident: 10.1016/j.patcog.2016.07.026_bib81
  article-title: Human facial expression recognition using curvelet feature extraction and normalized mutual information feature selection
  publication-title: Multimed. Tools Appl.
  doi: 10.1007/s11042-014-2333-3
– ident: 10.1016/j.patcog.2016.07.026_bib38
  doi: 10.1109/CNMT.2009.5374770
– volume: 127
  start-page: 2670
  issue: 5
  year: 2016
  ident: 10.1016/j.patcog.2016.07.026_bib39
  article-title: Expression invariant face recognition using local binary patterns and contourlet transform
  publication-title: Opt.-Int. J. Light Electron Opt.
  doi: 10.1016/j.ijleo.2015.11.187
– ident: 10.1016/j.patcog.2016.07.026_bib35
– ident: 10.1016/j.patcog.2016.07.026_bib31
– ident: 10.1016/j.patcog.2016.07.026_bib57
  doi: 10.1109/MMSP.1997.602642
– ident: 10.1016/j.patcog.2016.07.026_bib42
  doi: 10.1109/CNT.2014.7062726
– volume: 55
  start-page: 14
  year: 2016
  ident: 10.1016/j.patcog.2016.07.026_bib13
  article-title: Boosted NNE collections for multicultural facial expression recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2016.01.032
– ident: 10.1016/j.patcog.2016.07.026_bib24
  doi: 10.1109/ICIP.2014.7025686
– volume: 48
  start-page: 3191
  issue: 10
  year: 2015
  ident: 10.1016/j.patcog.2016.07.026_bib17
  article-title: Multimodal learning for facial expression recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2015.04.012
– ident: 10.1016/j.patcog.2016.07.026_bib83
  doi: 10.5244/C.29.41
– ident: 10.1016/j.patcog.2016.07.026_bib60
  doi: 10.1109/CVPR.2013.442
– year: 1978
  ident: 10.1016/j.patcog.2016.07.026_bib66
– ident: 10.1016/j.patcog.2016.07.026_bib62
SubjectTerms Computer vision
Convolutional Neural Networks
Expression specific features
Facial expression recognition
Machine learning
Title Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order
URI https://dx.doi.org/10.1016/j.patcog.2016.07.026
Volume 61