Multi‐scale edge aggregation mesh‐graph‐network for character secondary motion

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 35, No. 3
Main Authors: Wang, Tianyi; Liu, Shiguang
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.05.2024

Summary: As an enhancement to skinning-based animation, light-weight secondary-motion methods for 3D characters are widely demanded in many application scenarios. To remove the dependence of data-driven methods on ground-truth data, we propose, for the first time in this domain, a self-supervised training strategy that requires no ground-truth data. Specifically, we construct a self-supervised training framework by modeling the per-step implicit integration problem as an optimization problem over physical energy terms. Furthermore, we introduce a multi-scale edge aggregation mesh-graph block (MSEA-MG Block), which significantly enhances network performance and enables our model to make vivid predictions of secondary motion for 3D characters with arbitrary structures. Empirical experiments indicate that, without requiring ground-truth data for model training, our method achieves quantitative and qualitative performance comparable or even superior to state-of-the-art data-driven approaches in the field. We adopt a self-supervised training framework based on physical loss functions to predict the secondary motion of 3D characters; compared with previous supervised training methods (left), our method (right) is data-free and runs at a speed comparable to physics-based methods in the inference stage (1.6 ms/frame).
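The summary's key idea, casting the per-step implicit integration problem as an energy minimization, is what removes the need for ground truth: the network's predicted positions can be scored directly by physical energy terms. The following PyTorch sketch illustrates one standard form of such a loss, an inertia term plus a mass-spring elastic potential over mesh edges; the concrete energy terms, function names, and parameters are assumptions, since the abstract does not give the paper's exact formulation.

```python
import torch

def physical_energy_loss(x_pred, x_prev, v_prev, masses, edges, rest_lengths,
                         h=1.0 / 30.0, stiffness=1.0e3):
    """Hypothetical self-supervised loss: minimizing this w.r.t. x_pred is
    equivalent to taking one implicit Euler step, so no ground-truth
    positions are needed as training targets.

    x_pred, x_prev, v_prev: (V, 3) vertex positions / velocities
    masses:                 (V,)   lumped vertex masses
    edges:                  (E, 2) long tensor of mesh edge indices
    rest_lengths:           (E,)   rest lengths of the edges
    """
    # Inertia term: 1/(2 h^2) * || x - (x_prev + h * v_prev) ||^2_M
    y = x_prev + h * v_prev                      # inertial (free-flight) target
    inertia = (masses.unsqueeze(-1) * (x_pred - y) ** 2).sum() / (2.0 * h * h)

    # Elastic term: a simple mass-spring potential over mesh edges
    # (a placeholder for the paper's actual physical energy terms).
    d = x_pred[edges[:, 0]] - x_pred[edges[:, 1]]
    elastic = 0.5 * stiffness * ((d.norm(dim=-1) - rest_lengths) ** 2).sum()

    return inertia + elastic
```

The MSEA-MG Block itself is only named in the summary. One plausible reading, again purely a sketch rather than the paper's architecture, is a graph block that repeats edge-message aggregation so that vertex features draw on progressively larger mesh neighborhoods (scales) before a fusion layer combines the per-scale results:

```python
import torch
import torch.nn as nn

class MultiScaleEdgeAggregationBlock(nn.Module):
    """Hypothetical multi-scale edge-aggregation mesh-graph block: each
    round aggregates edge messages onto vertices, enlarging the receptive
    field; the features from all scales are then fused with a residual."""

    def __init__(self, dim, num_scales=3):
        super().__init__()
        self.edge_mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_scales))
        self.fuse = nn.Linear((num_scales + 1) * dim, dim)

    def forward(self, v_feat, e_feat, edges):
        # v_feat: (V, dim), e_feat: (E, dim), edges: (E, 2) sender/receiver
        scales, h = [v_feat], v_feat
        for mlp in self.edge_mlps:
            # Edge messages from current endpoint features plus edge features.
            msg = mlp(torch.cat([h[edges[:, 0]], h[edges[:, 1]], e_feat], dim=-1))
            # Mean-aggregate messages onto receiving vertices.
            agg = torch.zeros_like(h).index_add_(0, edges[:, 1], msg)
            deg = torch.zeros(h.shape[0], 1, device=h.device).index_add_(
                0, edges[:, 1], torch.ones(edges.shape[0], 1, device=h.device))
            h = agg / deg.clamp(min=1.0)   # next, larger-neighborhood scale
            scales.append(h)
        return v_feat + self.fuse(torch.cat(scales, dim=-1))
```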
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.2241