Can edges help convolution neural networks in emotion recognition?
Published in | Neurocomputing (Amsterdam), Vol. 433, pp. 162–168 |
---|---|
Main Authors | Bhandari, Arkaprabha; Pal, Nikhil R. |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 14.04.2021 |
Subjects | Emotion recognition; Convolutional neural network; Two-tower CNN; Edge images; Edge-tower |
ISSN | 0925-2312 |
DOI | 10.1016/j.neucom.2020.12.092 |
Abstract | Facial emotion recognition has gained importance for its applications in diverse areas. Facial expressions of a subject experiencing the same emotion can vary widely, while different subjects experiencing the same emotion may exhibit different facial features. Both factors make facial emotion recognition challenging. The ability of convolutional neural networks (CNNs) to analyze visual imagery has been exploited in many applications, including automatic facial emotion recognition systems. Our objective in this study is to check whether the explicit use of edges can help CNN-based emotion recognition from images. Edges in an image carry discriminatory information, so using them explicitly is likely to aid the training of CNNs and improve emotion recognition. With this in mind, we propose a two-tower CNN architecture that classifies images into seven basic classes of emotion, including the neutral expression. The proposed CNN has an additional tower, called the edge-tower, which is simpler in architecture than the other tower and takes edge images as inputs. Our experiments on two benchmark datasets demonstrate that the explicit use of edge information improves classifier performance. |
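The record contains no implementation details, but the two-tower design described in the abstract can be sketched roughly in code. The following is a minimal, hypothetical sketch assuming PyTorch, a Canny edge map as the "edge image", 48x48 grayscale inputs, and arbitrary layer widths; the names TwoTowerEmotionCNN and edge_image and all hyperparameters are illustrative and not taken from the paper.

```python
# Sketch of the two-tower idea from the abstract: one tower sees the grayscale
# face image, a simpler "edge-tower" sees its edge map, and their features are
# fused before a 7-way emotion classifier. Layer counts, channel widths, and
# the 48x48 input size are assumptions, not the authors' architecture.
import cv2
import numpy as np
import torch
import torch.nn as nn

def edge_image(gray: np.ndarray) -> np.ndarray:
    """Canny edge map of a grayscale (uint8) face image, scaled to [0, 1]."""
    edges = cv2.Canny(gray, 100, 200)
    return edges.astype(np.float32) / 255.0

class TwoTowerEmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Main tower: deeper convolutional stack over the raw image.
        self.image_tower = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Edge-tower: intentionally shallower, mirroring the abstract's
        # statement that it is simpler in architecture than the other tower.
        self.edge_tower = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        f_img = self.image_tower(image)   # (N, 128, 1, 1)
        f_edge = self.edge_tower(edges)   # (N, 32, 1, 1)
        fused = torch.cat([f_img, f_edge], dim=1)
        return self.classifier(fused)     # (N, 7) class logits

# Usage on a single 48x48 grayscale face crop (random stand-in data here):
gray = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
img_t = torch.from_numpy(gray.astype(np.float32) / 255.0)[None, None]
edge_t = torch.from_numpy(edge_image(gray))[None, None]
logits = TwoTowerEmotionCNN()(img_t, edge_t)
```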
Author affiliations | Bhandari, Arkaprabha (Nissan Digital India LLP, Thiruvananthapuram, India); Pal, Nikhil R. (nikhil@isical.ac.in; Electronics and Communication Sciences Unit and Centre for Artificial Intelligence and Machine Learning, Indian Statistical Institute, Calcutta, India) |
Copyright | 2021 Elsevier B.V. |
Discipline | Computer Science |
IsPeerReviewed | true |
IsScholarly | true |
PageCount | 7 |