A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 35, No. 6, e70001
Main Authors: Zhu, ChangAn; Joslin, Chris
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.11.2024

Summary: 3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions and speeding up the time-consuming 3D facial animation process. However, the facial motion retargeting pipeline is limited in reflecting the facial motion's semantic information (i.e., meaning and intensity), especially when applied to nonhuman characters. The retargeting quality relies heavily on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline that aims to provide fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on facial anatomy and a compatible source motion interpretation module. From the quantitative and qualitative evaluations, we found that the proposed retargeting pipeline can naturally recreate the expressions performed by a motion capture subject with equivalent meanings and intensities; this semantic accuracy extends to the faces of nonhuman characters without labor-intensive preparation. We present a facial motion retargeting pipeline for plausible facial animation across diverse characters. The pipeline automatically translates mocap data into facial muscles' activation intensities and drives our muscle-based transferable face rig. A list of passive muscle actions that occur during muscle activation and a blendshape-free quantitative evaluation method are also provided.
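The summary above describes translating mocap data into muscle activation intensities that then drive a muscle-based face rig. The paper's actual formulation is not reproduced in this record; the following is a minimal, hypothetical Python sketch of one common way such a translation can be posed, as a bounded least-squares fit of per-muscle displacement bases to a captured frame. All names (muscle_basis, solve_activations) and the least-squares formulation are illustrative assumptions, not the authors' method.

# Minimal sketch (not the paper's implementation): map one mocap frame,
# represented as a flattened vertex-displacement vector, to muscle
# activation intensities in [0, 1] via bounded least squares.
# Assumption: each column of `muscle_basis` holds the displacement field
# produced by fully activating one muscle on the target rig.
import numpy as np
from scipy.optimize import lsq_linear

def solve_activations(muscle_basis: np.ndarray, frame_disp: np.ndarray) -> np.ndarray:
    """Solve min ||B a - d||^2 subject to 0 <= a <= 1.

    muscle_basis: (3V, M) matrix, one column per muscle action.
    frame_disp:   (3V,) flattened vertex displacements for one mocap frame.
    Returns an (M,) vector of activation intensities.
    """
    result = lsq_linear(muscle_basis, frame_disp, bounds=(0.0, 1.0))
    return result.x

# Toy usage: 2 vertices (6 degrees of freedom), 3 hypothetical muscles.
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 3))
true_a = np.array([0.8, 0.2, 0.5])
d = B @ true_a
print(solve_activations(B, d))  # recovers approximately [0.8, 0.2, 0.5]

Bounding the solution to [0, 1] mirrors the notion of activation intensity; a full pipeline along the lines the summary sketches would also need temporal smoothing across frames and an anatomy-based parameterization of the target face, which this toy fit does not attempt.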
Funding: This work was supported in part by the NSERC CREATE Project VISION: Visual Effects and Animation Innovation and Simulation (584762).
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.70001