Facial Expression Synthesis using a Global‐Local Multilinear Framework

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 39, No. 2, pp. 235-245
Main Authors: Wang, M., Bradley, D., Zafeiriou, S., Beeler, T.
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2020
Summary: We present a practical method to synthesize plausible 3D facial expressions for a particular target subject. The ability to synthesize an entire facial rig from a single neutral expression has a large range of applications in both computer graphics and computer vision, ranging from the efficient and cost-effective creation of CG characters to scalable data generation for machine learning purposes. Unlike previous methods based on multilinear models, the proposed approach is capable of extrapolating well outside the sample pool, which allows it to plausibly predict the identity of the target subject and create artifact-free expression shapes while requiring only a small input dataset. We introduce global-local multilinear models that leverage the strengths of expression-specific and identity-specific local models combined with coarse motion estimates from a global model. Experimental results show that we achieve high-quality, plausible facial expression synthesis for an individual, outperforming existing methods both quantitatively and qualitatively.
ISSN: 0167-7055; 1467-8659
DOI: 10.1111/cgf.13926
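
To make the multilinear idea referenced in the summary concrete, the following is a minimal, hedged sketch of evaluating and fitting a generic multilinear (Tucker-style) face model with separate identity and expression modes, of the kind the paper builds on. The tensor sizes, variable names, and the least-squares identity fit are illustrative assumptions only; the paper's global-local formulation and its local expression-specific and identity-specific models are not reproduced here.

```python
import numpy as np

# Illustrative sketch only (not the authors' method): a generic multilinear
# face model with a core tensor of shape (3*V, n_identity, n_expression).
# Contracting the core with an identity vector and an expression vector
# yields a flattened vector of 3D vertex positions.

rng = np.random.default_rng(0)

n_vertices = 500      # toy mesh size (3 coordinates per vertex)
n_identity = 10       # identity mode dimension (assumed)
n_expression = 5      # expression mode dimension (assumed)

core = rng.standard_normal((3 * n_vertices, n_identity, n_expression))

def synthesize(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights."""
    return np.einsum('vie,i,e->v', core, w_id, w_exp)

# Given a single neutral scan, estimate identity weights by fixing the
# expression vector to "neutral" and solving a linear least-squares problem,
# then re-pose the fitted identity with a different expression vector.
w_exp_neutral = np.zeros(n_expression)
w_exp_neutral[0] = 1.0
neutral_basis = np.einsum('vie,e->vi', core, w_exp_neutral)   # (3V, n_id)

neutral_scan = synthesize(core, rng.standard_normal(n_identity), w_exp_neutral)
w_id_fit, *_ = np.linalg.lstsq(neutral_basis, neutral_scan, rcond=None)

w_exp_target = np.zeros(n_expression)
w_exp_target[1] = 1.0
new_expression_shape = synthesize(core, w_id_fit, w_exp_target)
print(new_expression_shape.shape)   # (1500,) flattened vertex positions
```

In this simplified global-model-only setting, extrapolation beyond the identity sample pool is limited, which is the weakness the paper's global-local combination is designed to address.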