GroomGen: A High-Quality Generative Hair Model Using Hierarchical Latent Representations
Published in | ACM Transactions on Graphics, Vol. 42, No. 6, pp. 1-16 |
Main Authors | Zhou, Yuxiao; Chai, Menglei; Pepe, Alessandro; Gross, Markus; Beeler, Thabo |
Format | Journal Article |
Language | English |
Published | New York, NY, USA: ACM, 04.12.2023 |
Subjects | Computer graphics; Computing methodologies; Parametric curve and surface models; Shape modeling |
Abstract | Despite recent successes in hair acquisition that fit a high-dimensional hair model to a specific input subject, generative hair models, which establish general embedding spaces for encoding, editing, and sampling diverse hairstyles, remain far less explored. In this paper, we present GroomGen, the first generative model designed for hair geometry composed of highly detailed dense strands. Our approach is motivated by two key ideas. First, we construct hair latent spaces covering both individual strands and complete hairstyles. These latent spaces are compact, expressive, and well constrained for high-quality and diverse sampling. Second, we adopt a hierarchical hair representation that parameterizes a complete hair model at three levels: single strands, sparse guide hairs, and complete dense hairs. This representation is critical to the compactness of the latent spaces, the robustness of training, and the efficiency of inference. Based on this hierarchical latent representation, our pipeline consists of a strand-VAE and a hairstyle-VAE that encode an individual strand and a set of guide hairs into their respective latent spaces, and a hybrid densification step that populates the sparse guide hairs into a dense hair model. GroomGen not only enables novel hairstyle sampling and plausible hairstyle interpolation, but also supports interactive editing of complex hairstyles, and can serve as a strong data-driven prior for hairstyle reconstruction from images. We demonstrate the superiority of our approach with qualitative examples of diverse sampled hairstyles and a quantitative evaluation of generation quality for each individual component and for the entire pipeline. |
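The strand-level stage described in the abstract can be made concrete with a small sketch. The following is a minimal strand-VAE in PyTorch, assuming each strand is resampled to a fixed number of root-relative 3D points; the point count, layer widths, latent size, and loss weighting are illustrative assumptions, not the values or architecture used in GroomGen.

```python
# Minimal sketch of a strand-level VAE, assuming each strand is resampled
# to a fixed number of 3D points relative to its scalp root. Layer sizes
# and latent dimension are illustrative assumptions only.
import torch
import torch.nn as nn

NUM_POINTS = 64      # assumed points per strand after resampling
LATENT_DIM = 64      # assumed strand latent size

class StrandVAE(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = NUM_POINTS * 3
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.to_mu = nn.Linear(128, LATENT_DIM)
        self.to_logvar = nn.Linear(128, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, strands):
        # strands: (batch, NUM_POINTS, 3), root-relative
        h = self.encoder(strands.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z).view(-1, NUM_POINTS, 3)
        return recon, mu, logvar

def vae_loss(recon, strands, mu, logvar, kl_weight=1e-3):
    # Reconstruction term plus a KL term toward a standard normal prior,
    # which keeps the latent space well constrained for random sampling.
    rec = torch.mean((recon - strands) ** 2)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl
```

Sampling a novel strand then amounts to drawing z from a standard normal and decoding it; the hairstyle-VAE applies the same idea one level up, over a map of guide-strand latents.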
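The final densification stage can be approximated in the same spirit. The sketch below grows dense strands by inverse-distance blending of the nearest guide strands at each scalp root; it is a deliberately simple stand-in for the hybrid densification the abstract describes, and every name and parameter in it is assumed.

```python
# Simple guide-to-dense interpolation, assuming guide and dense strands
# share the same per-strand point count and are stored root-relative.
# This inverse-distance blending is a stand-in for GroomGen's hybrid
# densification, not the paper's actual method.
import numpy as np

def densify(guide_roots, guide_strands, dense_roots, k=3, eps=1e-8):
    """
    guide_roots:   (G, 3)    scalp positions of guide strands
    guide_strands: (G, P, 3) guide strand points relative to their roots
    dense_roots:   (D, 3)    scalp positions where dense strands are grown
    Returns (D, P, 3) interpolated dense strands (root-relative).
    """
    dense = np.empty((dense_roots.shape[0],) + guide_strands.shape[1:])
    for i, root in enumerate(dense_roots):
        d = np.linalg.norm(guide_roots - root, axis=1)   # distance to each guide root
        nearest = np.argsort(d)[:k]                      # k closest guides
        w = 1.0 / (d[nearest] + eps)                     # inverse-distance weights
        w /= w.sum()
        dense[i] = np.einsum("g,gpc->pc", w, guide_strands[nearest])
    return dense
```

The returned strands are root-relative; adding each dense root back places them on the scalp.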
ArticleNumber | 270 |
Author | Beeler, Thabo; Chai, Menglei; Zhou, Yuxiao; Pepe, Alessandro; Gross, Markus |
Author_xml | 1. Zhou, Yuxiao (yuxiao.zhou@inf.ethz.ch), ETH Zurich, Switzerland; 2. Chai, Menglei (mengleichai@google.com), Google Inc., United States of America; 3. Pepe, Alessandro (apepe@google.com), Google Inc., United States of America; 4. Gross, Markus (grossm@inf.ethz.ch), ETH Zurich, Switzerland; 5. Beeler, Thabo (tbeeler@google.com), Google Inc., Switzerland |
ContentType | Journal Article |
Copyright | Owner/Author |
DOI | 10.1145/3618309 |
DatabaseName | CrossRef |
DatabaseTitle | CrossRef |
DatabaseTitleList | CrossRef |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Engineering |
EISSN | 1557-7368 |
EndPage | 16 |
ExternalDocumentID | 10_1145_3618309 3618309 |
ISSN | 0730-0301 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | hairstyle generation; strand-level hair modeling
Language | English |
License | Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. |
LinkModel | OpenURL |
OpenAccessLink | https://dl.acm.org/doi/10.1145/3618309 |
PageCount | 16 |
PublicationCentury | 2000 |
PublicationDate | 2023-12-04 |
PublicationDateYYYYMMDD | 2023-12-04 |
PublicationDecade | 2020 |
PublicationPlace | New York, NY, USA |
PublicationTitle | ACM transactions on graphics |
PublicationTitleAbbrev | ACM TOG |
PublicationYear | 2023 |
Publisher | ACM |
SourceID | crossref acm |
SourceType | Enrichment Source Index Database Publisher |
StartPage | 1 |
SubjectTerms | Computer graphics; Computing methodologies; Parametric curve and surface models; Shape modeling
SubjectTermsDisplay | Computing methodologies -- Computer graphics -- Shape modeling -- Parametric curve and surface models |
Title | GroomGen: A High-Quality Generative Hair Model Using Hierarchical Latent Representations |
URI | https://dl.acm.org/doi/10.1145/3618309 |
Volume | 42 |