Joint Face Image Restoration and Frontalization for Recognition
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 3, pp. 1285–1298 |
Main Authors | Tu, Xiaoguang; Zhao, Jian; Liu, Qiankun; Ai, Wenjie; Guo, Guodong; Li, Zhifeng; Liu, Wei; Feng, Jiashi |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2022 |
Subjects | Face recognition; Image restoration; Multi-degradation face restoration; Unconstrained face recognition |
Abstract | In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur, and noise. To address these challenges, previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition. However, most of these methods are stage-wise, which is sub-optimal and deviates from reality. In this paper, we address all these challenges jointly for unconstrained face recognition. We propose a Multi-Degradation Face Restoration (MDFR) model to restore frontalized high-quality faces from the given low-quality ones under arbitrary facial poses, with three distinct novelties. First, MDFR is a well-designed encoder-decoder architecture which extracts feature representations from an input face image with arbitrary low-quality factors and restores it to a high-quality counterpart. Second, MDFR introduces a pose residual learning strategy along with a 3D-based Pose Normalization Module (PNM), which can perceive the pose gap between the input initial pose and its real-frontal pose to guide the face frontalization. Finally, MDFR can generate frontalized high-quality face images with a single unified network, showing a strong capability of preserving face identity. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of MDFR over state-of-the-art methods on both face frontalization and face restoration. |
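The abstract describes MDFR's mechanism at a high level: an encoder-decoder that restores a degraded face, guided by a pose residual produced by a 3D-based Pose Normalization Module (PNM). The sketch below illustrates that idea in PyTorch; the class names, layer sizes, and the way the pose residual conditions the decoder are illustrative assumptions, not the authors' released MDFR implementation.

```python
# Minimal sketch of the idea in the abstract: an encoder-decoder restoration
# network whose decoder is conditioned on a "pose residual" (the gap between
# the input pose and a frontal pose). All module names, sizes, and the
# conditioning scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class PoseNormalizationModule(nn.Module):
    """Estimates a pose code from the input face and returns its residual
    against a learned frontal-pose code (hypothetical stand-in for the
    paper's 3D-based PNM)."""
    def __init__(self, in_channels=3, pose_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, pose_dim),
        )
        # Learned embedding of the canonical frontal pose.
        self.frontal_code = nn.Parameter(torch.zeros(pose_dim))

    def forward(self, x):
        pose_code = self.backbone(x)
        return pose_code - self.frontal_code  # pose residual


class RestorationFrontalizationNet(nn.Module):
    """Encoder-decoder mapping a degraded, arbitrarily posed face to a
    restored frontal face, conditioned on the pose residual."""
    def __init__(self, pose_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.pnm = PoseNormalizationModule(pose_dim=pose_dim)
        # Project the pose residual to the bottleneck channel width.
        self.pose_proj = nn.Linear(pose_dim, 128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        feat = self.encoder(x)                       # (B, 128, H/4, W/4)
        residual = self.pnm(x)                       # (B, pose_dim)
        cond = self.pose_proj(residual)[..., None, None]
        feat = feat + cond                           # inject pose guidance
        return self.decoder(feat)


if __name__ == "__main__":
    net = RestorationFrontalizationNet()
    degraded = torch.randn(2, 3, 128, 128)           # batch of low-quality faces
    restored = net(degraded)
    print(restored.shape)                            # torch.Size([2, 3, 128, 128])
```

The pose residual here is simply the difference between an estimated pose code and a learned frontal code, mirroring the "pose gap" wording of the abstract; the real model additionally relies on 3D face priors and identity-preserving losses not shown in this sketch.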
Author | Liu, Qiankun; Feng, Jiashi; Zhao, Jian; Liu, Wei; Ai, Wenjie; Guo, Guodong; Tu, Xiaoguang; Li, Zhifeng |
Author_xml |
– 1. Xiaoguang Tu (ORCID 0000-0002-1185-5229; xguangtu@outlook.com), Aviation Engineering Institute, Civil Aviation Flight University of China, Guanghan, China
– 2. Jian Zhao (ORCID 0000-0002-3508-756X; zhaojian90@u.nus.edu), Institute of North Electronic Equipment, Beijing, China
– 3. Qiankun Liu (ORCID 0000-0003-4786-8563; allen.liu@pensees.ai), Pensees Ptd Ltd., Singapore
– 4. Wenjie Ai (ORCID 0000-0001-6841-0899; 201821011405@std.uestc.edu.cn), School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
– 5. Guodong Guo (ORCID 0000-0001-9583-0055; guodong.guo@mail.wvu.edu), Institute of Deep Learning, Baidu Research, Beijing, China
– 6. Zhifeng Li (ORCID 0000-0002-9653-7907; michaelzfli@tencent.com), Tencent AI Lab, Shenzhen, China
– 7. Wei Liu (ORCID 0000-0002-3865-8145; wl2223@columbia.edu), Tencent AI Lab, Shenzhen, China
– 8. Jiashi Feng (ORCID 0000-0001-6843-0064; elefjia@nus.edu.sg), Department of Electrical and Computer Engineering, National University of Singapore, Singapore |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3078517 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 1298 |
Genre | orig-research |
GrantInformation_xml |
– Project of Comprehensive Reform of Electronic Information Engineering Specialty for Civil Aircraft Maintenance, Grant 14002600100017J172
– National Key Research and Development Program of China, Grant 2018AAA0103203 (funder ID 10.13039/501100012166)
– Sichuan University Failure Mechanics & Engineering Disaster Prevention and Mitigation Key Laboratory of Sichuan Province Open Foundation, Grant 2020FMSCU02 (funder ID 10.13039/501100004912)
– National Science Foundation of China, Grant 62006244 (funder ID 10.13039/501100001809)
– Open Fund Project of Key Laboratory of Flight Technology and Flight Safety of CAFUC, Grant FZ2020KF10
– Project of Civil Aviation Flight University of China, Grants J2018-56, CJ2019-01, and J2020-060 (funder ID 10.13039/501100002881) |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PageCount | 14 |
PublicationDate | 2022-03-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1285 |
SubjectTerms | 3D based face normalization; Coders; Encoders-Decoders; Face recognition; Feature extraction; Generators; Image quality; Image restoration; Lighting; multi-degradation face restoration; Object recognition; Task analysis; Training; unconstrained face recognition |
Title | Joint Face Image Restoration and Frontalization for Recognition |
URI | https://ieeexplore.ieee.org/document/9427073 https://www.proquest.com/docview/2637440097 |
Volume | 32 |