Guided Learning of Control Graphs for Physics-Based Characters
Published in | ACM Transactions on Graphics, Vol. 35, No. 3, pp. 1-14 |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | 02.06.2016 |
ISSN | 0730-0301; 1557-7368 |
DOI | 10.1145/2893476 |
Summary: | The difficulty of developing control strategies has been a primary bottleneck in the adoption of physics-based simulations of human motion. We present a method for learning robust feedback strategies around given motion capture clips as well as the transition paths between clips. The output is a control graph that supports real-time physics-based simulation of multiple characters, each capable of a diverse range of robust movement skills, such as walking, running, sharp turns, cartwheels, spin-kicks, and flips. The control fragments that compose the control graph are developed using guided learning, which leverages the results of open-loop, sampling-based reconstruction to produce state-action pairs that are then transformed into a linear feedback policy for each control fragment using linear regression. Our synthesis framework allows for the development of robust controllers with a minimal amount of prior knowledge. |
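
The regression step described in the summary, turning state-action pairs from open-loop, sampling-based reconstruction into a linear feedback policy per control fragment, can be illustrated with a minimal sketch. The sketch below assumes a simple affine policy a = M s + a0 and hypothetical state/action layouts; it is not the paper's implementation, only an illustration of an ordinary least-squares fit of that form.

```python
import numpy as np

def fit_linear_feedback(states, actions):
    """Least-squares fit of an affine feedback policy a = M s + a0.

    states  : (N, ds) array of character states recorded at the start of a
              control fragment (hypothetical layout; the paper's exact state
              features are not specified in this record).
    actions : (N, da) array of action offsets chosen by the open-loop,
              sampling-based reconstruction for those states.
    """
    # Augment the states with a bias column so the offset a0 is learned too.
    X = np.hstack([states, np.ones((states.shape[0], 1))])
    # Solve min ||X W - actions||^2 for W = [M^T; a0^T].
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    M, a0 = W[:-1].T, W[-1]
    return M, a0

def apply_policy(M, a0, state):
    # At run time, each control fragment maps the current state to a
    # feedback correction of its reference action.
    return M @ state + a0
```

A fit like this yields one gain matrix and offset per control fragment; the actual method may use different state features, action parameterizations, or regularization than this plain least-squares example.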