Using CNN for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy
Published in | The Visual Computer, Vol. 36, No. 2, pp. 405–412 |
---|---|
Main Authors | Agrawal, Abhinav; Mittal, Namita |
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.02.2020 |
Abstract | Facial expression recognition is a challenging problem in image classification. Recently, the use of deep learning has gained importance in image classification, which has led to increased efforts to solve the facial expression recognition problem with convolutional neural networks (CNNs). A significant challenge in deep learning is designing a network architecture that is both simple and effective: a simple architecture is fast to train and easy to implement, and an effective architecture achieves good accuracy on the test data. CNN architectures are black boxes to us. VGGNet, AlexNet and Inception are well-known CNN architectures, and they have strongly influenced CNN model designs for new datasets; almost all CNN models known to achieve high accuracy on the facial expression recognition problem are influenced by them. This work tries to overcome that limitation by using the FER-2013 dataset as the starting point for designing new CNN models. The effect of two CNN parameters, namely kernel size and number of filters, on classification accuracy is investigated on the FER-2013 dataset. Our major contribution is a thorough evaluation of different kernel sizes and numbers of filters, leading to two novel CNN architectures that achieve a human-like accuracy of 65% (Goodfellow et al. in: Neural Information Processing, Springer, Berlin, pp 117–124, 2013) on the FER-2013 dataset. These architectures can serve as a basis for standardizing the base model for the widely studied FER-2013 dataset. |
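The abstract treats kernel size and number of filters as the two CNN hyperparameters under study on FER-2013. The record does not include the paper's actual architectures, so the Keras sketch below is only an illustrative, assumed setup of such a sweep: the function name `build_cnn`, the layer layout, the candidate values, and the commented-out training call are hypothetical and are not the authors' models.

```python
# Illustrative sketch (not the authors' architectures): build a small CNN for
# 48x48 grayscale FER-2013 images and sweep kernel size / number of filters.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7            # FER-2013 has 7 expression labels
INPUT_SHAPE = (48, 48, 1)  # 48x48 grayscale face crops


def build_cnn(kernel_size: int, num_filters: int) -> keras.Model:
    """Return a simple CNN whose kernel size and filter count are the two
    hyperparameters being varied (hypothetical layer layout)."""
    model = keras.Sequential([
        keras.Input(shape=INPUT_SHAPE),
        layers.Conv2D(num_filters, kernel_size, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(num_filters * 2, kernel_size, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical sweep over the two hyperparameters discussed in the abstract.
for kernel_size in (3, 5, 7):
    for num_filters in (32, 64):
        model = build_cnn(kernel_size, num_filters)
        # model.fit(x_train, y_train, validation_data=(x_val, y_val), ...)
        # would be called here with FER-2013 data loaded separately.
        print(kernel_size, num_filters, model.count_params())
```

Keeping kernel size and filter count as the only free parameters in the sketch is what would let accuracy differences across the sweep be attributed to those two choices, mirroring the comparison the abstract describes.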
Author | Agrawal, Abhinav; Mittal, Namita |
Author details | Abhinav Agrawal (ORCID 0000-0001-7053-0144; abhinav0653@gmail.com), Department of CSE, MNIT; Namita Mittal, Department of CSE, MNIT |
Cites_doi | 10.1007/978-3-642-42051-1_16 10.1007/s00371-018-1585-8 10.1109/KSE.2017.8119447 10.1109/CW.2016.34 10.1109/ROMAN.2016.7745199 |
ContentType | Journal Article |
Copyright | Springer-Verlag GmbH Germany, part of Springer Nature 2019 |
DOI | 10.1007/s00371-019-01630-9 |
Discipline | Engineering Computer Science |
EISSN | 1432-2315 |
EndPage | 412 |
ISSN | 0178-2789 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
Keywords | Deep learning; FER-2013; CNN |
Language | English |
ORCID | 0000-0001-7053-0144 |
PageCount | 8 |
PublicationCentury | 2000 |
PublicationDate | 2020-02-01 |
PublicationDateYYYYMMDD | 2020-02-01 |
PublicationDecade | 2020 |
PublicationPlace | Berlin/Heidelberg |
PublicationSubtitle | International Journal of Computer Graphics |
PublicationTitle | The Visual computer |
PublicationTitleAbbrev | Vis Comput |
PublicationYear | 2020 |
Publisher | Springer Berlin Heidelberg Springer Nature B.V |
References | Goodfellow, Erhan, Carrier, Courville, Mirza, Hamner, Cukierski, Tang, Thaler, Lee, Zhou, Ramaiah, Feng, Li, Wang, Athanasakis, Shawe-Taylor, Milakov, Park, Ionescu, Popescu, Grozea, Bergstra, Xie, Romaszko, Xu, Chuang, Bengio: Challenges in representation learning: a report on three machine learning contests. In: Neural Information Processing, pp. 117–124. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-42051-1_16 [CR1]
Wan, Yang, Li: Facial Expression Recognition Using Convolutional Neural Network. A Case Study of the Relationship Between Dataset Characteristics and Network Performance (2016) [CR6]
Gogić, Manhart, Pandžić, Ahlberg: Fast facial expression recognition using local binary features and shallow neural networks. The Visual Computer 36(1), 97–112 (2018). https://doi.org/10.1007/s00371-018-1585-8 [CR9]
https://doi.org/10.1109/CW.2016.34 [CR10]
https://doi.org/10.1109/ROMAN.2016.7745199 [CR11]
https://doi.org/10.1109/KSE.2017.8119447 [CR15]
CR2–CR5, CR7, CR8, CR12–CR14: cited works for which only reference identifiers appear in this record |
StartPage | 405 |
SubjectTerms | Accuracy; Artificial Intelligence; Artificial neural networks; Classification; Computer Graphics; Computer Science; Data processing; Datasets; Deep learning; Face recognition; Image classification; Image Processing and Computer Vision; Machine learning; Neural networks; Original Article |
Title | Using CNN for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy |
URI | https://link.springer.com/article/10.1007/s00371-019-01630-9 https://www.proquest.com/docview/2917957204 |
Volume | 36 |