Attention mechanism-based CNN for facial expression recognition

Bibliographic Details
Published in Neurocomputing Vol. 411; pp. 340–350
Main Authors Li, Jing; Jin, Kan; Zhou, Dalin; Kubota, Naoyuki; Ju, Zhaojie
Format Journal Article
Language English
Japanese
Published Elsevier B.V., 21.10.2020
Online Access Get full text

Abstract Facial expression recognition is an active research topic with applications across computer vision, such as human–computer interaction and affective computing. In this paper, we propose a novel end-to-end network with an attention mechanism for automatic facial expression recognition. The new network architecture consists of four parts: the feature extraction module, the attention module, the reconstruction module and the classification module. Local binary pattern (LBP) features capture image texture information and thereby the subtle movements of the face, which improves the network's performance. The attention mechanism lets the neural network concentrate on useful features. We combine LBP features with the attention mechanism to strengthen the attention model and obtain better results. In addition, we collected and labelled a new facial expression dataset of seven expressions from 35 subjects aged 20 to 25. For each subject, we captured both RGB and depth images with a Microsoft Kinect sensor. For each image type, there are 245 image sequences, each containing 110 images, giving 26,950 images in total. We apply the proposed method to our own dataset and to four representative expression datasets: JAFFE, CK+, FER2013 and Oulu-CASIA. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
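To make the described pipeline concrete, here is a minimal sketch of the two ingredients the abstract names: a basic 8-neighbour local binary pattern (LBP) operator and a four-part network in which an attention mask re-weights the extracted features before classification. This is not the authors' implementation; the abstract does not specify the LBP variant, the layer sizes, or how the reconstruction module combines the mask with the features, so the choices below (including feeding the LBP map as a second input channel) are illustrative assumptions. Python with NumPy and PyTorch:

    import numpy as np
    import torch
    import torch.nn as nn

    def lbp_3x3(gray: np.ndarray) -> np.ndarray:
        # Basic 8-neighbour local binary pattern (Ojala et al., 1996):
        # each pixel is compared with its 8 neighbours; a neighbour that is
        # >= the centre sets one bit, giving an 8-bit texture code per pixel.
        g = gray.astype(np.int32)
        h, w = g.shape
        c = g[1:-1, 1:-1]                       # centre pixels, borders dropped
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            codes |= (neigh >= c).astype(np.int32) << bit
        return codes.astype(np.uint8)

    class AttentionFER(nn.Module):
        # Hypothetical four-part skeleton mirroring the abstract:
        # feature extraction -> attention mask -> reconstruction
        # (features re-weighted by the mask) -> classification into
        # seven expressions. All sizes are illustrative assumptions.
        def __init__(self, num_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(      # input: grayscale + LBP channel
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.attention = nn.Sequential(     # 1-channel spatial mask in [0, 1]
                nn.Conv2d(64, 1, 1), nn.Sigmoid())
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_classes))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            f = self.features(x)                # feature extraction module
            a = self.attention(f)               # attention module
            r = f * a                           # reconstruction: re-weighted features
            return self.classifier(r)           # classification module

The sigmoid mask takes a value in [0, 1] at each spatial location, so multiplying it onto the feature maps realises the abstract's idea of making the network pay more attention to useful features; the published reconstruction module may implement this re-weighting differently.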
Authors and affiliations
Li, Jing (School of Information Engineering, Nanchang University, Nanchang 330031, China)
Jin, Kan (School of Information Engineering, Nanchang University, Nanchang 330031, China)
Zhou, Dalin (School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK)
Kubota, Naoyuki (Graduate School of Systems Design, Tokyo Metropolitan University, Japan)
Ju, Zhaojie (School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK; zhaojie.ju@port.ac.uk)
BackLink https://cir.nii.ac.jp/crid/1871146592963088128 (view record in CiNii)
Copyright 2020 Elsevier B.V.
DOI 10.1016/j.neucom.2020.06.014
Discipline Computer Science
EISSN 1872-8286
ISSN 0925-2312
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Image Classification
Convolutional Neural Network
Facial Expression Recognition
Local Binary Pattern
Attention Mechanism
ORCID 0000-0001-8829-037X
0000-0003-2363-9125
OpenAccessLink https://cir.nii.ac.jp/crid/1871146592963088128
PageCount 11
PublicationDate 2020-10-21
PublicationTitle Neurocomputing
PublicationYear 2020
Publisher Elsevier B.V
References
C.M. Kuo, S.H. Lai, M. Sarkis, A compact deep learning model for robust facial expression recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2121–2129.
Z. Zhang, M. Lyons, M. Schuster, S. Akamatsu, Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron, in: Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, 1998, pp. 454–459.
Zhang, Huang, Du, Facial expression recognition based on deep evolutional spatial-temporal networks, IEEE Trans. Image Process. 26 (9) (2017) 4193–4203.
Zhong, Lei, et al., A graph-structured representation with BRNN for static-based facial expression recognition, in: 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), IEEE, 2019.
J. Li, Y. Mi, J. Yu, et al., A novel convolutional neural network for facial expression recognition, in: International Conference on Cognitive Systems and Signal Processing, Springer, Singapore, 2018, pp. 310–320.
T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, IEEE Trans. Pattern Anal. Mach. Intell. 23 (6) (2001) 681–685.
G. Huang, Z. Liu, K.Q. Weinberger, L. van der Maaten, Densely connected convolutional networks, arXiv preprint arXiv:1608.06993, 2016.
P.D.M. Fernandez, F.A.G. Peña, T.I. Ren, et al., FERAtt: facial expression recognition with attention net, arXiv preprint arXiv:1902.03284, 2019.
K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2014.
B.-K. Kim, S.-Y. Dong, J. Roh, G. Kim, S.-Y. Lee, Fusing aligned and non-aligned face information for automatic affect recognition in the wild: a deep learning approach, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016, pp. 48–57.
D. Hamester, P. Barros, S. Wermter, Face expression recognition with a 2-channel convolutional neural network, in: 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, 2015, pp. 1–8.
H. Yang, U. Ciftci, L. Yin, Facial expression recognition by de-expression residue learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2168–2177.
Y. Guo, D. Tao, J. Yu, H. Xiong, Y. Li, D. Tao, Deep neural networks with relativity learning for facial expression recognition, in: 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), IEEE, 2016, pp. 1–6.
T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on feature distributions, Pattern Recogn. 29 (1) (1996) 51–59.
Y. LeCun, L. Bottou, Y. Bengio, et al., Gradient-based learning applied to document recognition, Proc. IEEE 86 (11) (1998) 2278–2324.
C. Szegedy, S. Ioffe, V. Vanhoucke, et al., Inception-v4, Inception-ResNet and the impact of residual connections on learning, in: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
O. Arriaga, M. Valdenegro-Toro, P. Plöger, Real-time convolutional neural networks for emotion and gender classification, arXiv preprint arXiv:1710.07557, 2017.
C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (1995) 273–297.
P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, I. Matthews, The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression, in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2010, pp. 94–101.
T. Zhao, X. Wu, Pyramid feature attention network for saliency detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3085–3094.
Sun, Jin, Zhao, An efficient unconstrained facial expression recognition algorithm based on stack binarized auto-encoders and binarized neural networks, Neurocomputing 267 (2017) 385–395.
B. Hasani, M.H. Mahoor, Spatio-temporal facial expression recognition using convolutional neural networks and conditional random fields, 2017, pp. 790–795.
Shao, Qian, Three convolutional neural network models for facial expression recognition in the wild, Neurocomputing 355 (2019) 82–92.
M. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, Coding facial expressions with Gabor wavelets, in: Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 200–205.
W. Wang, Q. Sun, T. Chen, et al., A fine-grained facial expression database for end-to-end multi-pose facial expression recognition, arXiv preprint arXiv:1907.10838, 2019.
T. Cover, P. Hart, Nearest neighbor pattern classification, IEEE Trans. Inform. Theory 13 (1) (1967) 21–27.
A. Mollahosseini, D. Chan, M.H. Mahoor, Going deeper in facial expression recognition using deep neural networks, in: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016, pp. 1–10.
C. Turan, K.M. Lam, X. He, Soft Locality Preserving Map (SLPM) for facial expression recognition, arXiv preprint arXiv:1801.03754, 2018.
Zhang, Luo, Chen, Tang, From facial expression recognition to interpersonal relation prediction, Int. J. Comput. Vis. 126 (5) (2018) 1–20.
T. Connie, M. Al-Shabi, W.P. Cheah, et al., Facial expression recognition using a hybrid CNN–SIFT aggregator, in: International Workshop on Multi-disciplinary Trends in Artificial Intelligence, Springer, Cham, 2017, pp. 139–149.
P. Liu, S. Han, Z. Meng, Y. Tong, Facial expression recognition via a boosted deep belief network, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1805–1812.
D. Mehta, M.F.H. Siddiqui, A.Y. Javaid, Recognition of emotion intensities using machine learning algorithms: a comparative study, Sensors 19 (8) (2019) 1897.
Sun, Zhao, Jin, A visual attention based ROI detection method for facial expression recognition, Neurocomputing 296 (2018) 12–22.
FER2013 challenge data, https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data.
T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971–987.
K. He, X. Zhang, S. Ren, et al., Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
Zhao, Huang, Taini, Facial expression recognition from near-infrared videos, Image Vision Comput. 29 (9) (2011) 607–619.
Y.-H. Lai, S.-H. Lai, Emotion-preserving representation learning via generative adversarial network for multi-view facial expression recognition, in: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), IEEE, 2018, pp. 263–270.
Yu, Liu, Liu, Spatio-temporal convolutional features with nested LSTM for facial expression recognition, Neurocomputing 317 (2018) 50–57.
P. Rodriguez, G. Cucurull, J. Gonzàlez, et al., Deep pain: exploiting long short-term memory networks for facial expression classification, IEEE Trans. Cybern. (2017) 1–11.
C. Pramerdorfer, M. Kampel, Facial expression recognition using convolutional neural networks: state of the art, arXiv preprint arXiv:1612.02903, 2016.
R.R. Varior, B. Shuai, J. Tighe, et al., Scale-aware attention network for crowd counting, arXiv preprint arXiv:1901.06026, 2019.
C. Szegedy, W. Liu, Y. Jia, et al., Going deeper with convolutions, in: IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2015, pp. 1–9.
URI https://dx.doi.org/10.1016/j.neucom.2020.06.014
https://cir.nii.ac.jp/crid/1871146592963088128