Learning 3D face reconstruction from a single sketch

Bibliographic Details
Published in: Graphical Models, Vol. 115, p. 101102
Main Authors: Yang, Li; Wu, Jing; Huo, Jing; Lai, Yu-Kun; Gao, Yang
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2021

Summary:

Highlights:
• We develop a deep network for 3D face reconstruction from sketches, integrating the knowledge of reconstruction from photos.
• We propose a novel line loss function and demonstrate its ability to preserve the characteristic details in face sketches.
• The proposed method can be used for easy editing of 3D face models with characteristic details added or removed.

3D face reconstruction from a single image is a classic computer vision problem with many applications. However, most existing works reconstruct from face photos, and little attention has been paid to reconstruction from other portrait forms. In this paper, we propose a learning-based approach to reconstruct a 3D face from a single face sketch. To overcome the lack of paired sketch-3D data for supervised learning, we introduce a photo-to-sketch synthesis technique to obtain paired training data, and propose a dual-path architecture to achieve synergistic 3D reconstruction from both sketches and photos. We further propose a novel line loss function that refines the reconstruction while preserving the characteristic details depicted by lines in the sketches. Our method outperforms state-of-the-art 3D face reconstruction approaches when reconstructing from face sketches. We also demonstrate the use of our method for easy editing of details on 3D face models.
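
To make the abstract's two key ideas more concrete, below is a minimal, hypothetical PyTorch sketch of a dual-path reconstructor and a line-weighted loss. All module names, layer sizes, the 3DMM parameter count, and the line-loss weighting are illustrative assumptions; the paper's actual network and loss formulation may differ.

    import torch
    import torch.nn as nn

    class DualPathReconstructor(nn.Module):
        """Two domain-specific encoders feeding one shared 3DMM-parameter
        regressor, so photo supervision can transfer to the sketch path."""
        def __init__(self, param_dim=257):  # 257 is a common 3DMM parameter count (assumption)
            super().__init__()
            self.photo_encoder = nn.Sequential(   # 3-channel photo input
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.sketch_encoder = nn.Sequential(  # 1-channel line-drawing input
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.shared_head = nn.Linear(64, param_dim)  # shared regressor

        def forward(self, x, domain):
            feat = self.photo_encoder(x) if domain == "photo" else self.sketch_encoder(x)
            return self.shared_head(feat)

    def line_loss(pred_depth, target_depth, line_mask):
        """Hypothetical 'line loss': upweight reconstruction error on pixels
        covered by sketch lines so drawn details are preserved."""
        per_pixel = (pred_depth - target_depth).abs()
        weights = 1.0 + 4.0 * line_mask  # 4.0 is an illustrative weight
        return (weights * per_pixel).mean()

    # Illustrative usage:
    # net = DualPathReconstructor()
    # params = net(torch.randn(2, 1, 128, 128), domain="sketch")

The shared head is where the synergy would arise: gradients from abundant photo supervision shape the same regressor used by the sketch path, which is one plausible reading of "integrating the knowledge of reconstruction from photos".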
ISSN: 1524-0703, 1524-0711
DOI: 10.1016/j.gmod.2021.101102