Face-Specific Data Augmentation for Unconstrained Face Recognition

Bibliographic Details
Published in International Journal of Computer Vision, Vol. 127, No. 6–7, pp. 642–667
Main Authors Masi, Iacopo, Trần, Anh Tuấn, Hassner, Tal, Sahin, Gozde, Medioni, Gérard
Format Journal Article
Language English
Published New York Springer US 01.06.2019
Springer
Springer Nature B.V
Abstract We identify two issues as key to developing effective face recognition systems: maximizing the appearance variations of training images and minimizing appearance variations in test images. The former is required to train the system for whatever appearance variations it will ultimately encounter and is often addressed by collecting massive training sets with millions of face images. The latter involves various forms of appearance normalization for removing distracting nuisance factors at test time and making test faces easier to compare. We describe novel, efficient face-specific data augmentation techniques and show them to be ideally suited for both purposes. By using knowledge of faces, their 3D shapes, and appearances, we show the following: (a) We can artificially enrich training data for face recognition with face-specific appearance variations. (b) This synthetic training data can be efficiently produced online, thereby reducing the massive storage requirements of large-scale training sets and simplifying training for many appearance variations. Finally, (c) The same, fast data augmentation techniques can be applied at test time to reduce appearance variations and improve face representations. Together, with additional technical novelties, we describe a highly effective face recognition pipeline which, at the time of submission, obtains state-of-the-art results across multiple benchmarks. Portions of this paper were previously published by Masi et al. (European conference on computer vision, Springer, pp 579–596, 2016b; International conference on automatic face and gesture recognition, 2017).
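The "online" augmentation in point (b) can be sketched as follows. This is an illustrative toy only (NumPy, with invented helper names), not the authors' 3D-rendering pipeline: synthetic variants of each face are generated lazily at training time, so only the original images need to be stored.

```python
import numpy as np

def horizontal_flip(img):
    """Mirror the face left-right (faces are roughly bilaterally symmetric)."""
    return img[:, ::-1]

def simulate_yaw(img, shift):
    """Crude stand-in for 3D pose synthesis: shear the rows sideways to
    mimic a small yaw rotation. The paper instead renders the face from a
    fitted 3D shape; this toy only illustrates the on-the-fly idea."""
    h = img.shape[0]
    out = np.zeros_like(img)
    for y in range(h):
        dx = int(round(shift * (y / max(h - 1, 1) - 0.5)))
        out[y] = np.roll(img[y], dx, axis=0)
    return out

def augment_online(img, yaw_shifts=(-4, 4)):
    """Yield synthetic variants lazily; nothing is written to disk."""
    yield img
    yield horizontal_flip(img)
    for s in yaw_shifts:
        yield simulate_yaw(img, s)

face = np.arange(64, dtype=np.uint8).reshape(8, 8)  # dummy 8x8 "face"
variants = list(augment_online(face))
print(len(variants))  # 4 variants produced from one stored image
```

Point (c) corresponds to running the same generators at test time and pooling the resulting representations, so that pose and other nuisance variations are averaged out rather than stored.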
Audience Academic
Author Masi, Iacopo
Sahin, Gozde
Medioni, Gérard
Trần, Anh Tuấn
Hassner, Tal
Author_xml – sequence: 1
  givenname: Iacopo
  surname: Masi
  fullname: Masi, Iacopo
  email: iacopo@isi.edu
  organization: Information Sciences Institute (ISI), USC
– sequence: 2
  givenname: Anh Tuấn
  surname: Trần
  fullname: Trần, Anh Tuấn
  organization: Institute for Robotics and Intelligent Systems, USC
– sequence: 3
  givenname: Tal
  surname: Hassner
  fullname: Hassner, Tal
  organization: Open University of Israel
– sequence: 4
  givenname: Gozde
  surname: Sahin
  fullname: Sahin, Gozde
  organization: Institute for Robotics and Intelligent Systems, USC
– sequence: 5
  givenname: Gérard
  surname: Medioni
  fullname: Medioni, Gérard
  organization: Institute for Robotics and Intelligent Systems, USC
ContentType Journal Article
Copyright Springer Science+Business Media, LLC, part of Springer Nature 2019
COPYRIGHT 2019 Springer
International Journal of Computer Vision is a copyright of Springer, (2019). All Rights Reserved.
DOI 10.1007/s11263-019-01178-0
Discipline Applied Sciences
Computer Science
EISSN 1573-1405
EndPage 667
GrantInformation_xml – fundername: Intelligence Advanced Research Projects Activity
  grantid: 2014-14071600011
  funderid: http://dx.doi.org/10.13039/100011039
ISSN 0920-5691
IsPeerReviewed true
IsScholarly true
Issue 6-7
Keywords Deep learning
Face recognition
Data augmentation
PageCount 26
PublicationDate 2019-06-01
PublicationPlace New York
PublicationTitle International journal of computer vision
PublicationTitleAbbrev Int J Comput Vis
PublicationYear 2019
Publisher Springer US
Springer
Springer Nature B.V
References YagerNDunstoneTThe biometric menagerieTransactions on Pattern Analysis and Machine Intelligence201032222023010.1109/TPAMI.2008.291
Hassner, T., Harel, S., Paz, E., & Enbar, R. (2015). Effective face frontalization in unconstrained images. In Proceedings of international conference on computer vision recognition.
Sun, Y., Chen, Y., Wang, X., & Tang, X. (2014a). Deep learning face representation by joint identification-verification. In Neural information processing systems, pp. 1988–1996.
Crispell, D. E., Biris, O., Crosswhite, N., Byrne, J., & Mundy, J. L. (2016). Dataset augmentation for pose and lighting invariant face recognition. In Applied imagery pattern recognition workshop (AIPR).
Sankaranarayanan, S., Alavi, A., Castillo, C., & Chellappa, R. (2016a). Triplet probabilistic embedding for face verification and clustering. In International conference on biometrics: Theory, applications and systems.
Hassner, T. (2013). Viewing real-world faces in 3d. In Proceedings of international conference on computer vision, pp. 3607–3614.
Schroff, F., Kalenichenko, D., & Philbin, J. (2015). Facenet: A unified embedding for face recognition and clustering. In Proceedings of conference on computer vision pattern recognition, pp. 815–823.
Wolf, L., Hassner, T., & Maoz, I. (2011a). Face recognition in unconstrained videos with matched background similarity. In Proceedings of IEEE conference on computer vision pattern recognition, pp. 529–534.
Yang, H., & Patras, I. (2015). Mirror, mirror on the wall, tell me, is the error small? In Proceedings of conference on computer vision pattern recognition.
Neves, J., & Proença, H. (2019). "A leopard cannot change its spots": Improving face recognition using 3d-based caricatures. IEEE Transactions on Information Forensics and Security, 14(1), 151–161. https://doi.org/10.1109/TIFS.2018.2846617
Sun, Y., Liang, D., Wang, X., & Tang, X. (2015). Deepid3: Face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873.
Masi, I., Lisanti, G., Bagdanov, A., Pala, P., & Del Bimbo, A. (2013). Using 3D models to recognize 2D faces in the wild. In Proceedings of international conference on computer vision pattern recognition workshops.
Paysan, P., Knothe, R., Amberg, B., Romdhani, S., & Vetter, T. (2009). A 3d face model for pose and illumination invariant face recognition. In Sixth IEEE international conference on advanced video and signal based surveillance, AVSS ’09, pp. 296–301.
Tran, A. T., Hassner, T., Masi, I., & Medioni, G. (2017). Regressing robust and discriminative 3d morphable models with a very deep neural network. In Proceedings of conference on computer vision pattern recognition.
Hu, J., Lu, J., Tan, Y. P. (2014a). Discriminative deep metric learning for face verification in the wild. In Proceedings of international conference on computer vision pattern recognition, pp. 1875–1882.
Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition. In Proceedings of British machnical vision conference.
Xie, S., & Tu, Z. (2015). Holistically-nested edge detection. In Proceedings of international conference on computer vision.
Yang, J., Ren, P., Zhang, D., Chen, D., Wen, F., Li, H., & Hua, G. (2017). Neural aggregation network for video face recognition. In Proceedings of conference on computer. vision pattern recognition.
Yin, X., Yu, X., Sohn, K., Liu, X., & Chandraker, M. (2018). Feature transfer learning for deep face recognition with long-tail data. CoRR arXiv:1803.09014
Szeliski, R. (2010). Computer vision: Algorithms and applications. Berlin: Springer.
Tran, A. T., Hassner, T., Masi, I., Paz, E., Nirkin, Y., & Medioni, G. (2018). Extreme 3d face reconstruction: Seeing through occlusions. In Proceedings of conference on computer vision pattern recognition.
Chang, F., Tran, A., Hassner, T., Masi, I., Nevatia, R., & Medioni, G. (2017). FacePoseNet: Making a case for landmark-free face alignment. In 7th IEEE international workshop on analysis and modeling of faces and, gestures, ICCV workshops.
Masi, I., Trần, A. T., Hassner, T., Leksut, J. T., & Medioni, G. (2016b). Do we really need to collect millions of faces for effective face recognition? In European conference on computer vision, Springer, pp. 579–596.
Liu, X., Li, S., Kan, M., Shan, S., & Chen, X. (2017b). Self-error-correcting convolutional neural network for learning with noisy labels. In International IEEE conference on automatic face and gesture recognition, pp. 111–117.
Nguyen, M. H., Lalonde, J. F., Efros, A. A., & De la Torre, F. (2008). Image-based shaving. Computer Graphics Forum, 27(2), 627–635. https://doi.org/10.1111/j.1467-8659.2008.01160.x
Xie, L., Wang, J., Wei, Z., Wang, M., & Tian, Q. (2016). DisturbLabel: Regularizing CNN on the loss layer. In Proceedings of conference computer vision pattern recognition.
AbdAlmageed, W., Wu, Y., Rawls, S., Harel, S., Hassner, T., Masi, I., Choi, J., Leksut, J., Kim, J., Natarajan, P., Nevatia, R., & Medioni, G. (2016). Face recognition using deep multi-pose representations. In Winter conference on applications of computer vision.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of international conference on computer vision pattern recognition.
Chen, J. C., Patel, V. M., & Chellappa, R. (2016). Unconstrained face verification using deep CNN features. In Winter conference on application of computer vision.
Hughes, J. F., Van Dam, A., Foley, J. D., & Feiner, S. K. (2014). Computer graphics: Principles and practice. London: Pearson Education.
Sánchez, J., Perronnin, F., Mensink, T., & Verbeek, J. (2013). Image classification with the Fisher vector: Theory and practice. International Journal of Computer Vision, 105(3), 222–245. https://doi.org/10.1007/s11263-013-0636-x
Chang, F. J., Tran, A. T., Hassner, T., Masi, I., Nevatia, R., & Medioni, G. (2019). Deep, landmark-free FAME: Face alignment, modeling, and expression estimation. International Journal of Computer Vision. https://doi.org/10.1007/s11263-019-01151-x.
Wen, Y., Zhang, K., Li, Z., & Qiao, Y. (2019). A comprehensive study on center loss for deep face recognition. International Journal of Computer Vision.https://doi.org/10.1007/s11263-018-01142-4.
Kim, K., Yang, Z., Masi, I., Nevatia, R., & Medioni, G. (2018). Face and body association for video-based face recognition. In Winter conference on application of computer vision, pp. 39–48.
Yi, D., Lei, Z., Liao, S., & Li, S. Z. (2014). Learning face representation from scratch. arXiv preprint arXiv:1411.7923. Available: http://www.cbsr.ia.ac.cn/english/CASIA-WebFace-Database.html.
Chowdhury, A. R., Lin, T. Y., Maji, S., & Learned-Miller, E. (2016). One-to-many face recognition with bilinear CNNs. In Winter conference on application of computer vision, IEEE, pp. 1–9.
Crosswhite, N., Byrne, J., Stauffer, C., Parkhi, O., Cao, Q., & Zisserman, A. (2018). Template adaptation for face verification and identification. Image and Vision Computing, 79, 35–48. https://doi.org/10.1016/j.imavis.2018.09.002
Levi, G., & Hassner, T. (2015). Age and gender classification using convolutional neural networks. In Proceedings of international conference on computer vision pattern recognition workshops. http://www.openu.ac.il/home/hassner/projects/cnn_agegender.
Klare, B. F., Klein, B., Taborsky, E., Blanton, A., Cheney, J., Allen, K., Grother P, Mah, A., Burge, M., & Jain, A. K. (2015). Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In Proceedings of international conference on computer vision pattern recognition, pp. 1931–1939.
Kemelmacher-Shlizerman, I., Suwajanakorn, S., & Seitz, S. M. (2014). Illumination-aware age progression. In Proceedings of IEEE conference on computer vision pattern recognition, pp. 3334–3341.
Zhu, X., Lei, Z., Liu, X., Shi, H., & Li, S. (2016). Face alignment across large poses: A 3d solution. In Proceedings of IEEE computer vision and pattern recognition, Las Vegas, NV
Klontz, J., Klare, B., Klum, S., Taborsky, E., Burge, M., & Jain, A. K. (2013). Open source biometric recognition. In International conference on biometrics: Theory, applications and systems.
Crosswhite, N., Byrne, J., Stauffer, C., Parkhi, O., Cao, Q., & Zisserman, A. (2017). Template adaptation for face verification and identification. In International conference on automatic face and gesture recognition.
Cao, K., Rong, Y., Li, C., Tang, X., & Change Loy, C. (2018). Pose-robust face recognition via deep residual equivariant mapping. In Proceedings of conference on computer vision pattern recognition, pp. 5187–5196.
Chang, F. J., Tran, A. T., Hassner, T., Masi, I., Nevatia, R., & Medioni, G. (2018). Expnet: Landmark-free, deep, 3d facial expressions. In Automatic face and gesture recognition, pp. 122–129.
Ranjan, R., Bansal, A., Zheng, J., Xu, H., Gleason, J., Lu, B., Nanduri, A., Chen, J., Castillo, C. D., & Chellappa, R. (2018). A fast and accurate system for face detection, identification, and verification. CoRR arXiv:1809.07586.
Zheng, Z., Zheng, L., & Yang, Y. (2017). Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In Proceedings of international conference on computer vision.
Masi, I., Hassner, T., Trần, A. T., & Medioni, G. (2017). Rapid synthesis of massive face sets for improved face recognition. In International conference on automatic face and gesture recognition.
Sun, Y., Wang, X., & Tang, X. (2014c). Deeply learned face representations are sparse, selective, and robust. arXiv preprint arXiv:1412.1265.
Xie, S., Yang, T., Wang, X., & Lin, Y. (2015). Hyper-class augmented and regularized deep learning for fine-grained image classification. In Proceedings
Snippet We identify two issues as key to developing effective face recognition systems: maximizing the appearance variations of training images and minimizing...
StartPage 642
SubjectTerms Artificial Intelligence
Computer Imaging
Computer Science
Computer vision
Consumer goods
Data augmentation
Face recognition
Facial recognition technology
Gesture recognition
Image Processing and Computer Vision
Machine vision
Object recognition
Pattern Recognition
Pattern Recognition and Graphics
System effectiveness
Training
Vision
Title Face-Specific Data Augmentation for Unconstrained Face Recognition
URI https://link.springer.com/article/10.1007/s11263-019-01178-0
https://www.proquest.com/docview/2201458512
Volume 127