High-fidelity facial reflectance and geometry inference from an unconstrained image

Bibliographic Details
Published in ACM Transactions on Graphics, Vol. 37, No. 4, pp. 1-14
Main Authors Yamaguchi, Shugo, Saito, Shunsuke, Nagano, Koki, Zhao, Yajie, Chen, Weikai, Olszewski, Kyle, Morishima, Shigeo, Li, Hao
Format Journal Article
Language English
Published New York, NY, USA: ACM, 30.07.2018
Abstract We present a deep learning-based technique to infer high-quality facial reflectance and geometry given a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions. The reconstructed high-resolution textures, which are generated in only a few seconds, include high-resolution skin surface reflectance maps, representing both the diffuse and specular albedo, and medium- and high-frequency displacement maps, thereby allowing us to render compelling digital avatars under novel lighting conditions. To extract this data, we train our deep neural networks with a high-quality skin reflectance and geometry database created with a state-of-the-art multi-view photometric stereo system using polarized gradient illumination. Given the raw facial texture map extracted from the input image, our neural networks synthesize complete reflectance and displacement maps, as well as complete missing regions caused by occlusions. The completed textures exhibit consistent quality throughout the face due to our network architecture, which propagates texture features from the visible region, resulting in high-fidelity details that are consistent with those seen in visible regions. We describe how this highly underconstrained problem is made tractable by dividing the full inference into smaller tasks, which are addressed by dedicated neural networks. We demonstrate the effectiveness of our network design with robust texture completion from images of faces that are largely occluded. With the inferred reflectance and geometry data, we demonstrate the rendering of high-fidelity 3D avatars from a variety of subjects captured under different lighting conditions. 
In addition, we perform evaluations demonstrating that our method can infer plausible facial reflectance and geometric details comparable to those obtained from high-end capture devices, and outperform alternative approaches that require only a single unconstrained input image.
ArticleNumber 162
Author_xml – sequence: 1
  givenname: Shugo
  surname: Yamaguchi
  fullname: Yamaguchi, Shugo
  email: wasedayshugo@suou.waseda.jp
  organization: Waseda University and USC Institute for Creative Technologies
– sequence: 2
  givenname: Shunsuke
  surname: Saito
  fullname: Saito, Shunsuke
  email: shunsuke.saito16@gmail.com
  organization: University of Southern California, and USC Institute for Creative Technologies
– sequence: 3
  givenname: Koki
  surname: Nagano
  fullname: Nagano, Koki
  email: knagano@usc.edu
  organization: Pinscreen
– sequence: 4
  givenname: Yajie
  surname: Zhao
  fullname: Zhao, Yajie
  email: yajie730@gmail.com
  organization: USC Institute for Creative Technologies
– sequence: 5
  givenname: Weikai
  surname: Chen
  fullname: Chen, Weikai
  email: chenwk891@gmail.com
  organization: USC Institute for Creative Technologies
– sequence: 6
  givenname: Kyle
  surname: Olszewski
  fullname: Olszewski, Kyle
  email: olszewsk@usc.edu
  organization: University of Southern California, and USC Institute for Creative Technologies
– sequence: 7
  givenname: Shigeo
  surname: Morishima
  fullname: Morishima, Shigeo
  email: shigeo@waseda.jp
  organization: Waseda University
– sequence: 8
  givenname: Hao
  surname: Li
  fullname: Li, Hao
  email: hao@hao-li.com
  organization: University of Southern California, and USC Institute for Creative Technologies
ContentType Journal Article
Copyright ACM
DOI 10.1145/3197517.3201364
DatabaseName CrossRef

Discipline Engineering
EISSN 1557-7368
EndPage 14
ISSN 0730-0301
IsPeerReviewed true
IsScholarly true
Issue 4
Keywords facial modeling; texture synthesis and inpainting; image-based modeling
Language English
License Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org
PageCount 14
PublicationDate 2018-07-30
PublicationPlace New York, NY, USA
PublicationTitle ACM transactions on graphics
PublicationTitleAbbrev ACM TOG
PublicationYear 2018
Publisher ACM
References E. Richardson, M. Sela, and R. Kimmel. 2016. 3D face reconstruction by learning from synthetic data. In 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 460--469.
O. Alexander, M. Rogers, W. Lambeth, M. Chiang, and P. Debevec. 2009. The Digital Emily Project: Photoreal Facial Modeling and Animation. In ACM SIGGRAPH 2009 Courses. ACM, New York, NY, USA, Article 12, 12:1--12:15 pages. 10.1145/1667239.1667251
T. Beeler, B. Bickel, P. Beardsley, B. Sumner, and M. Gross. 2010. High-quality single-shot capture of facial geometry. In ACM Trans. Graph., Vol. 29. ACM, 40. 10.1145/1778765.1778777
U. Mohammed, S. J. D. Prince, and J. Kautz. 2009. Visio-lization: Generating Novel Facial Images. In ACM Trans. Graph. ACM, Article 57, 57:1--57:8 pages. 10.1145/1531326.1531363
P. Graham, B. Tunwattanapong, J. Busch, X. Yu, A. Jones, P. Debevec, and A. Ghosh. 2013b. Measurement-based Synthesis of Facial Microgeometry. In EUROGRAPHICS. 10.1145/2342896.2342970
C. Li, K. Zhou, and S. Lin. 2014. Intrinsic Face Image Decomposition with Human Face Priors. In Proc. ECCV (5)'14. 218--233.
M. Glencross, G.J. Ward, F. Melendez, C.Jay, J. Liu, and R. Hubbold. 2008. A perceptually validated model for surface depth hallucination. ACM Trans. Graph. 27, 3 (2008), 59. 10.1145/1360612.1360658
T. Karras, T. Aila, S. Laine, and J. Lehtinen. 2017. Progressive Growing of GANs for Improved Quality, Stability, and Variation. CoRR abs/1710.10196 (2017).
M. Turk and A. Pentland. 1991. Eigenfaces for Recognition. J. Cognitive Neuroscience 3, 1 (1991), 71--86. 10.1162/jocn.1991.3.1.71
A. Golovinskiy, W. Matusik, H. Pfister, S. Rusinkiewicz, and T Funkhouser. 2006. A Statistical Model for Synthesis of Detailed Facial Geometry. ACM Trans. Graph. 25, 3 (2006), 1025--1034. 10.1145/1141911.1141988
A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Perez, and C. Theobalt. 2017b. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In IEEE ICCV, Vol. 2.
I. Matthews and S. Baker. 2004. Active Appearance Models Revisited. Int. J. Comput. Vision 60, 2 (2004), 135--164. 10.1023/B:VISI.0000029666.37597.d3
C. Cao, D. Bradley, K. Zhou, and T. Beeler. 2015. Real-time high-fidelity facial performance capture. ACM Trans. Graph. 34, 4 (2015), 46. 10.1145/2766943
J. T. Barron and J. Malik. 2015a. Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 8 (2015), 1670--1687.
E. Richardson, M. Sela, R. Or-El, and R. Kimmel. 2017. Learning detailed face reconstruction from a single image. In Proc. CVPR. IEEE, 5553--5562.
L. Hu, S. Saito, L. Wei, K. Nagano, J. Seo, J. Fursund, I. Sadeghi, C. Sun, Y.-C. Chen, and H. Li. 2017. Avatar Digitization From a Single Image For Real-Time Rendering. ACM Trans. Graph. 36, 6 (2017). 10.1145/3130800.31310887
G.J. Edwards, C.J. Taylor, and T. F. Cootes. 1998. Interpreting Face Images Using Active Appearance Models. In Proceedings of the 3rd. International Conference on Face and Gesture Recognition (FG '98). IEEE Computer Society, 300--.
L.-Y. Wei and M. Levoy. 2000. Fast Texture Synthesis Using Tree-structured Vector Quantization. In Proc. SIGGRAPH. 479--488. 10.1145/344779.345009
K. Olszewski, Z. Li, C. Yang, Y. Zhou, R. Yu, Z. Huang, S. Xiang, S. Saito, P. Kohli, and H. Li. 2017. Realistic Dynamic Facial Textures From a Single Image Using GANs. In IEEE ICCV.
T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. 2015. Deep Convolutional Inverse Graphics Network. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Inc., 2539--2547.
S. Lefebvre and H. Hoppe. 2006. Appearance-space texture synthesis. ACM Trans. Graph. 25, 3 (2006), 541--548. 10.1145/1141911.1141921
Solid Angle. 2016. (2016). http://www.solidangle.com/arnold/.
I. Kemelmacher-Shlizerman. 2013. Internet-based Morphable Model. IEEE ICCV (2013). 10.1109/ICCV.2013.404
W.-C. Ma, T. Hawkins, P. Peers, C.-F. Chabert, M. Weiss, and P. Debevec. 2007a. Rapid Acquisition of Specular and Diffuse Normal Maps from Polarized Spherical Gradient Illumination. In Proc. EGSR 2007. Eurographics Association, 183--194.
S. Iizuka, E. Simo-Serra, and H. Ishikawa. 2017. Globally and Locally Consistent Image Completion. ACM Trans. Graph. 36, 4, Article 107 (2017), 107:1--107:14 pages. 10.1145/3072959.3073659
J. T. Barron and J. Malik. 2015b. Shape, Illumination, and Reflectance from Shading. IEEE Transactions on Pattern Analysis and Machine Intelligence (2015).
P. Garrido, L. Valgaerts, C. Wu, and C. Theobalt. 2013. Reconstructing Detailed Dynamic Face Geometry from Monocular Video. In ACM Trans. Graph., Vol. 32. 158:1--158:10. 10.1145/2508363.2508380
C. Liu, H.-Y. Shum, and W. T. Freeman. 2007. Face Hallucination: Theory and Practice. Int. J. Comput. Vision 75, 1 (2007), 115--134. 10.1007/s11263-006-0029-5
A. A. Efros and T. K. Leung. 1999. Texture Synthesis by Non-Parametric Sampling. In IEEE ICCV. 1033--.
Z. Shu, E. Yumer, S. Hadap, K. Sunkavalli, E. Shechtman, and D. Samaras. 2017. Neural Face Editing with Intrinsic Image Disentangling. arXiv:1704.04131 (2017).
J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. 2017. Toward Multimodal Image-to-image Translation. In Advances in Neural Information Processing Systems 30.
H. Kim, M. Zollhöfer, A. Tewari, J. Thies, C. Richardt, and C. Theobalt. 2018. Inverse-FaceNet: Deep Monocular Inverse Face Rendering. In Proc. CVPR.
S. McDonagh, M. Klaudiny, D. Bradley, T. Beeler, I. Matthews, and K. Mitchell. 2016. Synthetic prior design for real-time face tracking. In 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 639--648.
R. A. Yeh*, C. Chen*, T. Y. Lim, S. A. G., M. Hasegawa-Johnson, and M. N. Do. 2017. Semantic Image Inpainting with Deep Generative Models. In Proc. CVPR. * equal contribution.
M. Sela, E. Richardson, and R. Kimmel. 2017. Unrestricted facial geometry reconstruction using image-to-image translation. In IEEE ICCV. IEEE, 1585--1594.
T. Beeler, F. Hahn, D. Bradley, B. Bickel, P. Beardsley, C. Gotsman, R. W. Sumner, and M. Gross. 2011. High-quality passive facial performance capture using anchor frames. In ACM Trans. Graph., Vol. 30. ACM, 75. 10.1145/2010324.1964970
V. Blanz and T. Vetter. 1999. A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH. 187--194. 10.1145/311535.311556
P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, and W. Sarokin. 2000. Acquiring the Reflectance Field of a Human Face. In Proc. SIGGRAPH. 10.1145/344779.344855
H. Li, L. Trutoiu, K. Olszewski, L. Wei, T. Trutna, P.-L. Hsieh, A. Nicholls,, A. Nicholls, and C. Ma. 2015. Facial Performance Sensing Head-Mounted Display. ACM Trans. Graph. 34, 4 (July 2015). 10.1145/2766939
S. Romdhani and T. Vetter. 2005. Estimating 3D Shape and Texture Using Pixel Intensity, Edges, Specular Highlights, Texture Constraints and a Prior.. In Proc. CVPR. 986--993. 10.1109/CVPR.2005.145
S. Suwajanakorn, I. Kemelmacher-Shlizerman, and S. M. Seitz. 2014. Total moving face reconstruction. In Proc. ECCV. Springer, 796--812.
W.-C. Ma, A. Jones, J.-Y. Chiang, T. Hawkins, S. Frederiksen, P. Peers, M. Vukovic, M. Ouhyoung, and P. Debevec. 2008. Facial Performance Synthesis Using Deformation-driven Polynomial Displacement Maps. In Proc. SIGGRAPH. ACM, 121:1--121:10. 10.1145/1457515.1409074
D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. 2016. Context encoders: Feature learning by inpainting. In Proc. CVPR. 2536--2544.
J. von der Pahlen, J. Jimenez, E. Danvoye, P. Debevec, G. Fyffe, and O. Alexander. 2014. Digital Ira and Beyond: Creating Real-time Photoreal Digital Actors. In ACM SIGGRAPH 2014 Courses. ACM, New York, NY, USA, Article 1, 1:1--1:384 pages. 10.1145/2614028.2615407
C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and others. 2016. Photo-realistic single image super-resolution using a generative adversarial network. arXiv:1609.04802 (2016).
Z. Liu, P. Luo, X. Wang, and X. Tang. 2015. Deep Learning Face Attributes in the Wild. In IEEE ICCV. 10.1109/ICCV.2015.425
L. A. Gatys, M. Bethge, A. Hertzmann, and E. Shechtman. 2016. Preserving Color in Neural Artistic Style Transfer. CoRR abs/1606.05897 (2016).
D.S. Ma, J. Correll, and B. Wittenbrink. 2015. The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods 47, 4 (2015), 1122--1135.
The Digital Human League. 2015. Digital Emily 2.0. (2015). http://gl.ict.usc.edu/Research/DigitalEmily2/.
A. Haro, B. Guenterz, and I. Essay. 2001. Real-time, Photo-realistic, Physically Based Rendering of Fine Scale Human Skin Structure. In Eurographics Workshop on Rendering, S. J. Gortle and K. Myszkowski (Eds.).
S. Saito, T. Li, and H. Li. 2016. Real-Time Facial Segmentation and Performance Capture from RGB Input. In Proc. ECCV.
X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. 2015. High-fidelity pose and expression normalization for face recognition in the wild. In Proc. CVPR. 787--796.
F. Shi, H.-T. Wu, X. Tong, and J. Chai. 2014. Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Trans. Graph. 33, 6 (2014), 222. 10.1145/2661229.2661290
Y. Li, S. Liu, J. Yang, and M.-H. Yang. 2017. Generative Face Completion. In Proc. CVPR.
A. Ghosh, G. Fyffe, B. Tunwattanapong, J. Busch, X. Yu, and P. Debevec. 2011. Multiview Face Capture Using Polarized Spherical Gradient Illumination. ACM Trans. Graph. 30, 6, Article 129 (2011), 129:1--129:10 pages. 10.1145/2070781.2024163
J. Han, K. Zhou, L.-Y. Wei, M. Gong, H. Bao, X. Zhang, and B. Guo. 2006. Fast example-based surface texture synthesis via discrete optimization. The Visual Computer 22, 9--11 (2006), 918--925. 10.1007/s00371-006-0078-3
K. Olszewski, J. J. Lim, S. Saito, and H. Li. 2016. High-Fidelity Facial and Speech Animation for VR HMDs. ACM Trans. Graph. 35, 6 (December 2016). 10.1145/2980179.2980252
S. Saito, L. Wei, L. Hu, K. Nagano, and
e_1_2_2_4_1
e_1_2_2_24_1
e_1_2_2_49_1
e_1_2_2_6_1
e_1_2_2_22_1
McDonagh S. (e_1_2_2_61_1)
e_1_2_2_20_1
e_1_2_2_2_1
Lasram A. (e_1_2_2_47_1)
e_1_2_2_62_1
e_1_2_2_87_1
e_1_2_2_43_1
e_1_2_2_85_1
e_1_2_2_8_1
e_1_2_2_28_1
e_1_2_2_45_1
e_1_2_2_66_1
e_1_2_2_26_1
e_1_2_2_68_1
e_1_2_2_89_1
Ledig C. (e_1_2_2_48_1) 2016
Bradley D. (e_1_2_2_9_1)
Li C. (e_1_2_2_50_1)
e_1_2_2_60_1
Haro A. (e_1_2_2_31_1)
e_1_2_2_13_1
e_1_2_2_38_1
e_1_2_2_59_1
e_1_2_2_11_1
Ma W.-C. (e_1_2_2_57_1) 2007
Suwajanakorn S. (e_1_2_2_79_1)
e_1_2_2_30_1
e_1_2_2_51_1
e_1_2_2_76_1
e_1_2_2_19_1
e_1_2_2_32_1
e_1_2_2_53_1
e_1_2_2_74_1
e_1_2_2_17_1
e_1_2_2_34_1
e_1_2_2_55_1
e_1_2_2_36_1
e_1_2_2_78_1
Richardson E. (e_1_2_2_70_1)
e_1_2_2_91_1
e_1_2_2_25_1
Zhao H. (e_1_2_2_93_1)
e_1_2_2_5_1
Kingma D. P. (e_1_2_2_42_1) 2014
Duong C. N (e_1_2_2_14_1)
e_1_2_2_23_1
Duong C. Nhan (e_1_2_2_64_1)
e_1_2_2_7_1
e_1_2_2_21_1
e_1_2_2_1_1
e_1_2_2_3_1
e_1_2_2_40_1
e_1_2_2_63_1
e_1_2_2_86_1
e_1_2_2_65_1
e_1_2_2_84_1
e_1_2_2_29_1
e_1_2_2_44_1
e_1_2_2_27_1
e_1_2_2_46_1
e_1_2_2_88_1
e_1_2_2_82_1
e_1_2_2_80_1
Kim H. (e_1_2_2_41_1)
Pathak D. (e_1_2_2_67_1)
Saito S. (e_1_2_2_73_1)
e_1_2_2_37_1
e_1_2_2_12_1
e_1_2_2_39_1
e_1_2_2_10_1
e_1_2_2_52_1
e_1_2_2_75_1
Edwards G.J. (e_1_2_2_15_1)
e_1_2_2_54_1
e_1_2_2_18_1
e_1_2_2_33_1
e_1_2_2_56_1
Tewari A. (e_1_2_2_81_1) 2017
e_1_2_2_16_1
e_1_2_2_35_1
e_1_2_2_77_1
Zhu X. (e_1_2_2_95_1)
e_1_2_2_90_1
e_1_2_2_94_1
Ma W.-C. (e_1_2_2_58_1)
Richardson E. (e_1_2_2_69_1)
e_1_2_2_71_1
R. A. (e_1_2_2_92_1)
Thies J. (e_1_2_2_83_1)
Saito S. (e_1_2_2_72_1)
References_xml – reference: J. Booth, A. Roussos, S. Zafeiriou, A. Ponniah, and D. Dunaway 2016. A 3d morphable model learnt from 10,000 faces. In Proc. CVPR. 5543--5552.
– reference: T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. 2015. Deep Convolutional Inverse Graphics Network. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Inc., 2539--2547.
– reference: Y. Li, S. Liu, J. Yang, and M.-H. Yang. 2017. Generative Face Completion. In Proc. CVPR.
– reference: J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. 2017. Toward Multimodal Image-to-image Translation. In Advances in Neural Information Processing Systems 30.
– reference: G. Fyffe, A. Jones, O. Alexander, R. Ichikari, and P. Debevec. 2014. Driving high-resolution facial scans with video performance capture. ACM Trans. Graph. 34, 1 (2014), 8. 10.1145/2638549
– reference: K. Olszewski, J. J. Lim, S. Saito, and H. Li. 2016. High-Fidelity Facial and Speech Animation for VR HMDs. ACM Trans. Graph. 35, 6 (December 2016). 10.1145/2980179.2980252
– reference: S. Lefebvre and H. Hoppe. 2006. Appearance-space texture synthesis. ACM Trans. Graph. 25, 3 (2006), 541--548. 10.1145/1141911.1141921
– reference: P. Garrido, L. Valgaerts, C. Wu, and C. Theobalt. 2013. Reconstructing Detailed Dynamic Face Geometry from Monocular Video. In ACM Trans. Graph., Vol. 32. 158:1--158:10. 10.1145/2508363.2508380
– reference: M. Turk and A. Pentland. 1991. Eigenfaces for Recognition. J. Cognitive Neuroscience 3, 1 (1991), 71--86. 10.1162/jocn.1991.3.1.71
– reference: G. Fyffe, K. Nagano, L. Huynh, S. Saito, J. Busch, A. Jones, H. Li, and P. Debevec. 2017. Multi-View Stereo on Consistent Face Topology. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 295--309. 10.1111/cgf.13127
– reference: U. Mohammed, S. J. D. Prince, and J. Kautz. 2009. Visio-lization: Generating Novel Facial Images. In ACM Trans. Graph. ACM, Article 57, 57:1--57:8 pages. 10.1145/1531326.1531363
– reference: K. Olszewski, Z. Li, C. Yang, Y. Zhou, R. Yu, Z. Huang, S. Xiang, S. Saito, P. Kohli, and H. Li. 2017. Realistic Dynamic Facial Textures From a Single Image Using GANs. In IEEE ICCV.
– reference: X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. 2015. High-fidelity pose and expression normalization for face recognition in the wild. In Proc. CVPR. 787--796.
– reference: P. Graham, B. Tunwattanapong, J. Busch, X. Yu, A. Jones, P. Debevec, and A. Ghosh. 2013a. Measurement-Based Synthesis of Facial Microgeometry. In Computer Graphics Forum, Vol. 32. Wiley Online Library, 335--344. 10.1145/2342896.2342970
– reference: H. Li, L. Trutoiu, K. Olszewski, L. Wei, T. Trutna, P.-L. Hsieh, A. Nicholls,, A. Nicholls, and C. Ma. 2015. Facial Performance Sensing Head-Mounted Display. ACM Trans. Graph. 34, 4 (July 2015). 10.1145/2766939
– reference: M. K.Johnson, F. Cole, A. Raj, and E. H. Adelson. 2011. Microgeometry Capture using an Elastomeric Sensor. ACM Trans. Graph 30, 4 (2011), 46:1--46:8. 10.1145/2010324.1964941
– reference: C. Li, K. Zhou, and S. Lin. 2014. Intrinsic Face Image Decomposition with Human Face Priors. In Proc. ECCV (5)'14. 218--233.
– reference: L. Hu, S. Saito, L. Wei, K. Nagano, J. Seo, J. Fursund, I. Sadeghi, C. Sun, Y.-C. Chen, and H. Li. 2017. Avatar Digitization From a Single Image For Real-Time Rendering. ACM Trans. Graph. 36, 6 (2017). 10.1145/3130800.31310887
– reference: T. Karras, T. Aila, S. Laine, and J. Lehtinen. 2017. Progressive Growing of GANs for Improved Quality, Stability, and Variation. CoRR abs/1710.10196 (2017).
– reference: V. Kwatra, I. Essa, A. Bobick, and N. Kwatra. 2005. Texture optimization for example-based synthesis. ACM Trans. Graph. 24, 3 (2005), 795--802. 10.1145/1073204.1073263
– reference: C. Cao, H. Wu, Y. Weng, T. Shao, and K. Zhou. 2016. Real-time facial animation with image-based dynamic avatars. ACM Trans. Graph. 35, 4 (2016), 126. 10.1145/2897824.2925873
– reference: F. Shi, H.-T. Wu, X. Tong, and J. Chai. 2014. Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Trans. Graph. 33, 6 (2014), 222. 10.1145/2661229.2661290
– reference: T. Beeler, F. Hahn, D. Bradley, B. Bickel, P. Beardsley, C. Gotsman, R. W. Sumner, and M. Gross. 2011. High-quality passive facial performance capture using anchor frames. In ACM Trans. Graph., Vol. 30. ACM, 75. 10.1145/2010324.1964970
– reference: O. Alexander, M. Rogers, W. Lambeth, M. Chiang, and P. Debevec. 2009. The Digital Emily Project: Photoreal Facial Modeling and Animation. In ACM SIGGRAPH 2009 Courses. ACM, New York, NY, USA, Article 12, 12:1--12:15 pages. 10.1145/1667239.1667251
– reference: H. Kim, M. Zollhöfer, A. Tewari, J. Thies, C. Richardt, and C. Theobalt. 2018. Inverse-FaceNet: Deep Monocular Inverse Face Rendering. In Proc. CVPR.
– reference: Solid Angle. 2016. (2016). http://www.solidangle.com/arnold/.
– reference: A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Perez, and C. Theobalt. 2017b. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In IEEE ICCV, Vol. 2.
– reference: C. A. Wilson, A. Ghosh, P. Peers, J.-Y. Chiang, J. Busch, and P. Debevec. 2010. Temporal upsampling of performance geometry using photometric alignment. ACM Trans. Graph. 29, 2 (2010), 17. 10.1145/1731047.1731055
– reference: Z. Liu, P. Luo, X. Wang, and X. Tang. 2015. Deep Learning Face Attributes in the Wild. In IEEE ICCV. 10.1109/ICCV.2015.425
– reference: D.S. Ma, J. Correll, and B. Wittenbrink. 2015. The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods 47, 4 (2015), 1122--1135.
– reference: S. Saito, T. Li, and H. Li. 2016. Real-Time Facial Segmentation and Performance Capture from RGB Input. In Proc. ECCV.
– reference: C. Wu, D. Bradley, M. Gross, and T. Beeler. 2016. An anatomically-constrained local deformation model for monocular face capture. ACM Trans. Graph. 35, 4 (2016), 115. 10.1145/2897824.2925882
– reference: E. Richardson, M. Sela, and R. Kimmel. 2016. 3D face reconstruction by learning from synthetic data. In 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 460--469.
– reference: S. Saito, L. Wei, L. Hu, K. Nagano, and H. Li. 2017. Photorealistic Facial Texture Inference Using Deep Neural Networks. In Proc. CVPR.
– reference: A. Tewari, M. Zollhöfer, P. Garrido, F. Bernard, H. Kim, P. Pérez, and C. Theobalt. 2017a. Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz. arXiv.1712.02859 (2017).
– reference: Z. Shu, E. Yumer, S. Hadap, K. Sunkavalli, E. Shechtman, and D. Samaras. 2017. Neural Face Editing with Intrinsic Image Disentangling. arXiv:1704.04131 (2017).
– reference: J. Thies, M. Zollöfer, M. Stamminger, C. Theobalt, and M. Nießner. 2016b. FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality. arXiv:1610.03151 (2016).
– reference: P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, and W. Sarokin. 2000. Acquiring the Reflectance Field of a Human Face. In Proc. SIGGRAPH. 10.1145/344779.344855
– reference: P. F. Gotardo, T. Simon, Y. Sheikh, and I. Matthews. 2015. Photogeometric scene flow for high-detail dynamic 3d reconstruction. In Proc. ICCV. 846--854. 10.1109/ICCV.2015.103
– reference: H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. 2017. Pyramid Scene Parsing Network. In Proc. CVPR.
– reference: S. McDonagh, M. Klaudiny, D. Bradley, T. Beeler, I. Matthews, and K. Mitchell. 2016. Synthetic prior design for real-time face tracking. In 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 639--648.
– reference: I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 2672--2680.
– reference: A. Lasram and S. Lefebvre. 2012. Parallel patch-based texture synthesis. In Proceedings of the Fourth ACM SIGGRAPH/Eurographics conference on High-Performance Graphics. Eurographics Association, 115--124.
– reference: C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and others. 2016. Photo-realistic single image super-resolution using a generative adversarial network. arXiv:1609.04802 (2016).
– reference: M. Sela, E. Richardson, and R. Kimmel. 2017. Unrestricted facial geometry reconstruction using image-to-image translation. In IEEE ICCV. IEEE, 1585--1594.
– reference: F. Liu, D. Zeng, J. Li, and Q.-J. Zhao. 2017. On 3D face reconstruction via cascaded regression in shape space. Frontiers of Information Technology & Electronic Engineering 18, 12 (2017), 1978--1990.
– reference: C. Cao, D. Bradley, K. Zhou, and T. Beeler. 2015. Real-time high-fidelity facial performance capture. ACM Trans. Graph. 34, 4 (2015), 46. 10.1145/2766943
– reference: R. A. Yeh*, C. Chen*, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do. 2017. Semantic Image Inpainting with Deep Generative Models. In Proc. CVPR. * equal contribution.
– reference: E. Richardson, M. Sela, R. Or-El, and R. Kimmel. 2017. Learning detailed face reconstruction from a single image. In Proc. CVPR. IEEE, 5553--5562.
– reference: A. Haro, B. Guenter, and I. Essa. 2001. Real-time, Photo-realistic, Physically Based Rendering of Fine Scale Human Skin Structure. In Eurographics Workshop on Rendering, S. J. Gortler and K. Myszkowski (Eds.).
– reference: G. J. Edwards, C. J. Taylor, and T. F. Cootes. 1998. Interpreting Face Images Using Active Appearance Models. In Proceedings of the 3rd International Conference on Face and Gesture Recognition (FG '98). IEEE Computer Society, 300--.
– reference: T. Weyrich, W. Matusik, H. Pfister, B. Bickel, C. Donner, C. Tu, J. McAndless, J. Lee, A. Ngan, H. W. Jensen, and M. Gross. 2006. Analysis of Human Faces using a Measurement-Based Skin Reflectance Model. ACM Trans. Graph. 25, 3 (2006), 1013--1024. 10.1145/1141911.1141987
– reference: R. Donner, M. Reiter, G. Langs, P. Peloschek, and H. Bischof. 2006. Fast Active Appearance Model Search Using Canonical Correlation Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 10 (2006), 1690--1694. 10.1109/TPAMI.2006.206
– reference: S. Iizuka, E. Simo-Serra, and H. Ishikawa. 2017. Globally and Locally Consistent Image Completion. ACM Trans. Graph. 36, 4, Article 107 (2017), 107:1--107:14 pages. 10.1145/3072959.3073659
– reference: L.-Y. Wei and M. Levoy. 2000. Fast Texture Synthesis Using Tree-structured Vector Quantization. In Proc. SIGGRAPH. 479--488. 10.1145/344779.345009
– reference: A. Golovinskiy, W. Matusik, H. Pfister, S. Rusinkiewicz, and T. Funkhouser. 2006. A Statistical Model for Synthesis of Detailed Facial Geometry. ACM Trans. Graph. 25, 3 (2006), 1025--1034. 10.1145/1141911.1141988
– reference: D. Bradley, T. Beeler, K. Mitchell, and others. 2017. Real-Time Multi-View Facial Capture with Synthetic Training. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 325--336.
– reference: L. A. Gatys, A. S. Ecker, and M. Bethge. 2015. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. CoRR abs/1505.07376 (2015).
– reference: J. T. Barron and J. Malik. 2015b. Shape, Illumination, and Reflectance from Shading. IEEE Transactions on Pattern Analysis and Machine Intelligence (2015).
– reference: J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner. 2016a. Face2Face: Real-time Face Capture and Reenactment of RGB Videos. In Proc. CVPR.
– reference: A. Ghosh, G. Fyffe, B. Tunwattanapong, J. Busch, X. Yu, and P. Debevec. 2011. Multiview Face Capture Using Polarized Spherical Gradient Illumination. ACM Trans. Graph. 30, 6, Article 129 (2011), 129:1--129:10 pages. 10.1145/2070781.2024163
– reference: J. Han, K. Zhou, L.-Y. Wei, M. Gong, H. Bao, X. Zhang, and B. Guo. 2006. Fast example-based surface texture synthesis via discrete optimization. The Visual Computer 22, 9--11 (2006), 918--925. 10.1007/s00371-006-0078-3
– reference: I. Kemelmacher-Shlizerman. 2013. Internet-based Morphable Model. IEEE ICCV (2013). 10.1109/ICCV.2013.404
– reference: The Digital Human League. 2015. Digital Emily 2.0. http://gl.ict.usc.edu/Research/DigitalEmily2/.
– reference: J. T. Barron and J. Malik. 2015a. Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 8 (2015), 1670--1687.
– reference: C. Nhan Duong, K. Luu, K. Gia Quach, and T. D. Bui. 2015. Beyond principal components: Deep boltzmann machines for face modeling. In Proc. CVPR. 4786--4794.
– reference: S. Suwajanakorn, I. Kemelmacher-Shlizerman, and S. M. Seitz. 2014. Total moving face reconstruction. In Proc. ECCV. Springer, 796--812.
– reference: P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. 2016. Image-to-image translation with conditional adversarial networks. arXiv:1611.07004 (2016).
– reference: W.-C. Ma, A. Jones, J.-Y. Chiang, T. Hawkins, S. Frederiksen, P. Peers, M. Vukovic, M. Ouhyoung, and P. Debevec. 2008. Facial Performance Synthesis Using Deformation-driven Polynomial Displacement Maps. In Proc. SIGGRAPH. ACM, 121:1--121:10. 10.1145/1457515.1409074
– reference: C. Liu, H.-Y. Shum, and W. T. Freeman. 2007. Face Hallucination: Theory and Practice. Int. J. Comput. Vision 75, 1 (2007), 115--134. 10.1007/s11263-006-0029-5
– reference: C. N. Duong, K. Luu, K. G. Quach, and T. D. Bui. 2015. Beyond principal components: Deep boltzmann machines for face modeling. In Proc. CVPR. 4786--4794.
– reference: D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. 2016. Context encoders: Feature learning by inpainting. In Proc. CVPR. 2536--2544.
– reference: M. S. Langer and S. W. Zucker. 1994. Shape-from-shading on a cloudy day. JOSA A 11, 2 (1994), 467--478.
– reference: J. von der Pahlen, J. Jimenez, E. Danvoye, P. Debevec, G. Fyffe, and O. Alexander. 2014. Digital Ira and Beyond: Creating Real-time Photoreal Digital Actors. In ACM SIGGRAPH 2014 Courses. ACM, New York, NY, USA, Article 1, 1:1--1:384 pages. 10.1145/2614028.2615407
– reference: I. Matthews and S. Baker. 2004. Active Appearance Models Revisited. Int. J. Comput. Vision 60, 2 (2004), 135--164. 10.1023/B:VISI.0000029666.37597.d3
– reference: T. Beeler, B. Bickel, P. Beardsley, B. Sumner, and M. Gross. 2010. High-quality single-shot capture of facial geometry. In ACM Trans. Graph., Vol. 29. ACM, 40. 10.1145/1778765.1778777
– reference: M. Glencross, G. J. Ward, F. Melendez, C. Jay, J. Liu, and R. Hubbold. 2008. A perceptually validated model for surface depth hallucination. ACM Trans. Graph. 27, 3 (2008), 59. 10.1145/1360612.1360658
– reference: V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick. 2003. Graphcut Textures: Image and Video Synthesis Using Graph Cuts. In Proc. SIGGRAPH. ACM, 277--286. 10.1145/1201775.882264
– reference: L. A. Gatys, M. Bethge, A. Hertzmann, and E. Shechtman. 2016. Preserving Color in Neural Artistic Style Transfer. CoRR abs/1606.05897 (2016).
– reference: K. Nagano, G. Fyffe, O. Alexander, J. Barbič, H. Li, A. Ghosh, and P. Debevec. 2015. Skin Microstructure Deformation with Displacement Map Convolution. ACM Trans. Graph. 34, 4 (2015). 10.1145/2766894
– reference: D. P. Kingma and J. Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR abs/1412.6980 (2014).
– reference: W.-C. Ma, T. Hawkins, P. Peers, C.-F. Chabert, M. Weiss, and P. Debevec. 2007b. Rapid Acquisition of Specular and Diffuse Normal Maps from Polarized Spherical Gradient Illumination. In Eurographics Symposium on Rendering.
– reference: A. A. Efros and W. T. Freeman. 2001. Image Quilting for Texture Synthesis and Transfer. In Proc. SIGGRAPH. ACM, 341--346. 10.1145/383259.383296
– reference: M. Aittala, T. Aila, and J. Lehtinen. 2016. Reflectance modeling by neural texture synthesis. ACM Trans. Graph. 35, 4 (2016), 65. 10.1145/2897824.2925917
– reference: A. A. Efros and T. K. Leung. 1999. Texture Synthesis by Non-Parametric Sampling. In IEEE ICCV. 1033--.
– reference: I. Kemelmacher-Shlizerman and R. Basri. 2011. 3D face reconstruction from a single image using a single reference face shape. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 2 (2011), 394--405. 10.1109/TPAMI.2010.63
– reference: W.-C. Ma, T. Hawkins, P. Peers, C.-F. Chabert, M. Weiss, and P. Debevec. 2007a. Rapid Acquisition of Specular and Diffuse Normal Maps from Polarized Spherical Gradient Illumination. In Proc. EGSR 2007. Eurographics Association, 183--194.
– reference: P. Graham, B. Tunwattanapong, J. Busch, X. Yu, A. Jones, P. Debevec, and A. Ghosh. 2013b. Measurement-based Synthesis of Facial Microgeometry. In EUROGRAPHICS. 10.1145/2342896.2342970
– reference: A. E. Ichim, S. Bouaziz, and M. Pauly. 2015. Dynamic 3D Avatar Creation from Handheld Video Input. ACM Trans. Graph. 34, 4, Article 45 (2015), 45:1--45:14 pages. 10.1145/2766974
– reference: I. Kemelmacher-Shlizerman and S. M. Seitz. 2011. Face reconstruction in the wild. In IEEE ICCV. IEEE, 1746--1753. 10.1109/ICCV.2011.6126439
– reference: A. Radford, L. Metz, and S. Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR abs/1511.06434 (2015).
– reference: S. Sengupta, A. Kanazawa, C. D. Castillo, and D. Jacobs. 2017. SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild. arXiv:1712.01261 (2017).
– reference: L.-Y. Wei, S. Lefebvre, V. Kwatra, and G. Turk. 2009. State of the art in example-based texture synthesis. In Eurographics 2009, State of the Art Report, EG-STAR. Eurographics Association, 93--117.
– reference: S. Romdhani and T. Vetter. 2005. Estimating 3D Shape and Texture Using Pixel Intensity, Edges, Specular Highlights, Texture Constraints and a Prior. In Proc. CVPR. 986--993. 10.1109/CVPR.2005.145
– reference: V. Blanz and T. Vetter. 1999. A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH. 187--194. 10.1145/311535.311556
– ident: e_1_2_2_65_1
  doi: 10.1109/ICCV.2017.580
– ident: e_1_2_2_77_1
  doi: 10.1109/CVPR.2017.578
– ident: e_1_2_2_23_1
  doi: 10.1145/2070781.2024163
– ident: e_1_2_2_46_1
  doi: 10.1364/JOSAA.11.000467
– ident: e_1_2_2_51_1
  doi: 10.1145/2766939
– ident: e_1_2_2_25_1
  doi: 10.1145/1141911.1141988
– ident: e_1_2_2_90_1
  doi: 10.1145/1731047.1731055
– ident: e_1_2_2_37_1
– volume-title: Eurographics Symposium on Rendering.
  ident: e_1_2_2_58_1
– volume-title: Proc. CVPR. IEEE, 5553--5562
  ident: e_1_2_2_70_1
– ident: e_1_2_2_53_1
  doi: 10.1007/s11263-006-0029-5
– ident: e_1_2_2_7_1
  doi: 10.1145/311535.311556
– ident: e_1_2_2_20_1
  doi: 10.1145/2508363.2508380
– ident: e_1_2_2_84_1
– ident: e_1_2_2_36_1
  doi: 10.1145/2010324.1964941
– ident: e_1_2_2_6_1
  doi: 10.1145/2010324.1964970
– ident: e_1_2_2_68_1
– ident: e_1_2_2_80_1
  doi: 10.1109/CVPR.2018.00270
– ident: e_1_2_2_91_1
  doi: 10.1145/2897824.2925882
– ident: e_1_2_2_13_1
  doi: 10.1109/TPAMI.2006.206
– ident: e_1_2_2_17_1
  doi: 10.1109/ICCV.1999.790383
– ident: e_1_2_2_5_1
  doi: 10.1145/1778765.1778777
– ident: e_1_2_2_18_1
  doi: 10.1145/2638549
– volume-title: Proceedings of the 3rd. International Conference on Face and Gesture Recognition (FG '98)
  ident: e_1_2_2_15_1
– ident: e_1_2_2_39_1
  doi: 10.1109/TPAMI.2010.63
– ident: e_1_2_2_55_1
  doi: 10.1109/ICCV.2015.425
– volume-title: 2016 Fourth International Conference on. IEEE, 460--469
  ident: e_1_2_2_69_1
– ident: e_1_2_2_45_1
  doi: 10.1145/1201775.882264
– ident: e_1_2_2_76_1
  doi: 10.1145/2661229.2661290
– ident: e_1_2_2_2_1
  doi: 10.1145/1667239.1667251
– ident: e_1_2_2_29_1
  doi: 10.1145/2342896.2342970
– ident: e_1_2_2_33_1
  doi: 10.1145/2766974
– volume-title: Proc. EGSR
  year: 2007
  ident: e_1_2_2_57_1
– ident: e_1_2_2_22_1
– volume-title: Proc. ECCV (5)'14
  ident: e_1_2_2_50_1
– volume-title: Proc. CVPR.
  ident: e_1_2_2_83_1
– ident: e_1_2_2_40_1
  doi: 10.1109/ICCV.2011.6126439
– ident: e_1_2_2_89_1
  doi: 10.1145/1141911.1141987
– ident: e_1_2_2_34_1
  doi: 10.1145/3072959.3073659
– ident: e_1_2_2_38_1
  doi: 10.1109/ICCV.2013.404
– ident: e_1_2_2_4_1
  doi: 10.1109/TPAMI.2014.2377712
– volume-title: Proceedings of the Fourth ACM SIGGRAPH/Eurographics conference on High-Performance Graphics. Eurographics Association, 115--124
  ident: e_1_2_2_47_1
– ident: e_1_2_2_88_1
  doi: 10.1145/344779.345009
– volume-title: Proc. CVPR.
  ident: e_1_2_2_41_1
– ident: e_1_2_2_43_1
– ident: e_1_2_2_82_1
– ident: e_1_2_2_54_1
  doi: 10.1631/FITEE.1700253
– ident: e_1_2_2_35_1
  doi: 10.1109/CVPR.2017.632
– ident: e_1_2_2_66_1
  doi: 10.1145/2980179.2980252
– ident: e_1_2_2_12_1
  doi: 10.1145/344779.344855
– volume-title: Adam: A Method for Stochastic Optimization. CoRR abs/1412.6980
  year: 2014
  ident: e_1_2_2_42_1
– ident: e_1_2_2_63_1
  doi: 10.1145/2766894
– volume-title: Proc. CVPR. 4786--4794
  ident: e_1_2_2_64_1
– volume-title: Proc. ECCV. Springer, 796--812
  ident: e_1_2_2_79_1
– ident: e_1_2_2_1_1
  doi: 10.1145/2897824.2925917
– ident: e_1_2_2_28_1
  doi: 10.1145/2342896.2342970
– ident: e_1_2_2_85_1
  doi: 10.1162/jocn.1991.3.1.71
– volume-title: Proc. CVPR. * equal contribution.
  ident: e_1_2_2_92_1
– volume-title: Computer Graphics Forum
  ident: e_1_2_2_9_1
– ident: e_1_2_2_27_1
  doi: 10.1109/ICCV.2015.103
– ident: e_1_2_2_32_1
  doi: 10.1145/3130800.31310887
– ident: e_1_2_2_59_1
  doi: 10.1145/1457515.1409074
– volume-title: Proc. ECCV.
  ident: e_1_2_2_72_1
– ident: e_1_2_2_87_1
– ident: e_1_2_2_26_1
– ident: e_1_2_2_8_1
  doi: 10.1109/CVPR.2016.598
– ident: e_1_2_2_10_1
  doi: 10.1145/2766943
– ident: e_1_2_2_49_1
  doi: 10.1145/1141911.1141921
– volume-title: Proc. CVPR. 2536--2544
  ident: e_1_2_2_67_1
– volume-title: Proc. CVPR. 787--796
  ident: e_1_2_2_95_1
– ident: e_1_2_2_30_1
  doi: 10.1007/s00371-006-0078-3
– ident: e_1_2_2_62_1
  doi: 10.1145/1531326.1531363
– ident: e_1_2_2_56_1
  doi: 10.3758/s13428-014-0532-5
– volume-title: Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction
  year: 2017
  ident: e_1_2_2_81_1
– volume-title: Proc. CVPR.
  ident: e_1_2_2_73_1
– ident: e_1_2_2_11_1
  doi: 10.1145/2897824.2925873
– volume-title: Proc. CVPR. 4786--4794
  ident: e_1_2_2_14_1
– ident: e_1_2_2_16_1
  doi: 10.1145/383259.383296
– volume-title: Physically Based Rendering of Fine Scale Human Skin Structure. In Eurographics Workshop on Rendering, S. J. Gortler and K. Myszkowski (Eds.).
  ident: e_1_2_2_31_1
– ident: e_1_2_2_60_1
  doi: 10.1023/B:VISI.0000029666.37597.d3
– ident: e_1_2_2_78_1
– volume-title: Proc. CVPR.
  ident: e_1_2_2_93_1
– ident: e_1_2_2_19_1
  doi: 10.1111/cgf.13127
– ident: e_1_2_2_75_1
– volume-title: 2016 Fourth International Conference on. IEEE, 639--648
  ident: e_1_2_2_61_1
– ident: e_1_2_2_94_1
– ident: e_1_2_2_44_1
  doi: 10.1145/1073204.1073263
– ident: e_1_2_2_71_1
  doi: 10.1109/CVPR.2005.145
– volume-title: Photo-realistic single image super-resolution using a generative adversarial network. arXiv:1609.04802
  year: 2016
  ident: e_1_2_2_48_1
– ident: e_1_2_2_74_1
  doi: 10.1109/ICCV.2017.175
– ident: e_1_2_2_21_1
– ident: e_1_2_2_24_1
  doi: 10.1145/1360612.1360658
– ident: e_1_2_2_3_1
  doi: 10.1109/TPAMI.2014.2377712
– ident: e_1_2_2_52_1
  doi: 10.1109/CVPR.2017.624
– ident: e_1_2_2_86_1
  doi: 10.1145/2614028.2615407
StartPage 1
SubjectTerms Computer graphics
Computing methodologies
Mesh geometry models
Shape modeling
SubjectTermsDisplay Computing methodologies -- Computer graphics -- Shape modeling -- Mesh geometry models
Title High-fidelity facial reflectance and geometry inference from an unconstrained image
URI https://dl.acm.org/doi/10.1145/3197517.3201364
Volume 37