ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, No. 10, pp. 5962-5979
Main Authors | Deng, Jiankang; Guo, Jia; Yang, Jing; Xue, Niannan; Kotsia, Irene; Zafeiriou, Stefanos
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2022
ISSN | 0162-8828 (print); 1939-3539, 2160-9292 (electronic)
DOI | 10.1109/TPAMI.2021.3087709 |
Abstract | Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data, only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
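The additive angular margin the abstract describes can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' reference implementation; the scale `s=64` and margin `m=0.5` are typical values from the face-recognition literature, and all names here are hypothetical:

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Sketch of an additive angular margin: the margin m is added to the
    angle between each sample and its own class centre before the scaled
    cosine is fed to softmax, tightening intra-class compactness.

    embeddings: (N, d) feature vectors; weights: (C, d) class centres.
    """
    # L2-normalise features and class centres so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                  # (N, C) cosine similarities
    theta = np.arccos(np.clip(cos, -1.0, 1.0))     # angles in [0, pi]
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m     # margin only on true class
    return s * np.cos(theta + margin)              # logits for softmax
```

Because the margin is applied inside the cosine, the true-class logit is penalised (cos(θ + m) ≤ cos(θ)), which forces the network to pull each sample closer to its class centre than the plain softmax would.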
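The sub-center scheme in the abstract (each class keeps K sub-centers and a sample only needs to be close to one of them) can be sketched as a max-pooling over per-sub-center cosines. This is a hypothetical illustration under assumed shapes, not code from the paper:

```python
import numpy as np

def subcenter_cosine(embeddings, subcenters):
    """Sketch of sub-center pooling: a sample's similarity to a class is
    the maximum cosine over that class's K sub-centers, so one dominant
    sub-class can absorb clean faces while hard or noisy faces drift to
    non-dominant sub-centers.

    embeddings: (N, d); subcenters: (C, K, d).
    Returns (N, C) pooled class similarities.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = subcenters / np.linalg.norm(subcenters, axis=2, keepdims=True)
    cos = np.einsum('nd,ckd->nck', e, w)   # (N, C, K) cosines per sub-center
    return cos.max(axis=2)                 # keep the closest sub-center
```

In training, the pooled similarity would replace the single-centre cosine before the angular margin is applied, which is what lets the loss tolerate mislabelled faces.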
Author | Jiankang Deng (Department of Computing, Imperial College London, U.K.; ORCID 0000-0002-3709-6216; j.deng16@imperial.ac.uk); Jia Guo (InsightFace, London, U.K.; ORCID 0000-0002-0709-261X; guojia@gmail.com); Jing Yang (Department of Computer Science, University of Nottingham, Nottingham, U.K.; ORCID 0000-0002-8794-4842; y.jing2016@gmail.com); Niannan Xue (Department of Computing, Imperial College London, U.K.; ORCID 0000-0002-7234-5425; sparrowxue@hotmail.com); Irene Kotsia (Cogitat, London, U.K.; e.kotsia@imperial.ac.uk); Stefanos Zafeiriou (Department of Computing, Imperial College London, U.K.; ORCID 0000-0002-5222-1740; s.zafeiriou@imperial.ac.uk)
CODEN | ITPIDJ |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
Discipline | Engineering Computer Science |
Genre | orig-research |
GrantInformation | Imperial President's PhD Scholarship; Large Scale Shape Analysis of Deformable Models of Humans (grant EP/S010203/1); Face Matching for Automatic Identity Retrieval, Recognition, Verification and Management (grant EP/N007743/1); University of Nottingham; Google Faculty Award
IsPeerReviewed | true |
IsScholarly | true |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PMID | 34106845 |
SubjectTerms | additive angular margin; Additives; Data models; Embedding; Face recognition; Inverse problems; Large-scale face recognition; model inversion; Noise measurement; noisy labels; Predictive models; sub-class; Training; Training data
URI | https://ieeexplore.ieee.org/document/9449988