Learning Kinematic Machine Models from Videos
Published in | 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pp. 107-114 |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.12.2020 |
Summary: | VR/AR applications, such as virtual training or coaching, often require a digital twin of a machine. Such a virtual twin must also include a kinematic model that defines its motion behavior. This behavior is usually expressed by constraints in a physics engine. In this paper, we present a system that automatically derives the kinematic model of a machine from RGB video with an optional depth channel. Our system records a live session while a user performs all typical machine movements. It then searches for trajectories and converts them into linear, circular and helical constraints. Our system can also detect kinematic chains and coupled constraints, for example, when a crank moves a toothed rod. |
DOI: | 10.1109/AIVR50618.2020.00028 |
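The summary describes classifying tracked point trajectories into linear, circular, and helical constraints. The sketch below is only an illustration of that idea under assumed inputs, not the authors' implementation: it fits an (N, 3) trajectory with a line and with a circle by least squares and picks the lower-residual model. Helical fitting and kinematic-chain detection are omitted, and the function names (`fit_line`, `fit_circle`, `classify_trajectory`) and the error tolerance are hypothetical.

```python
# Minimal sketch (not the paper's implementation): classify a tracked 3D point
# trajectory as linear (prismatic) or circular (revolute) motion by comparing
# least-squares fitting residuals.
import numpy as np

def fit_line(points):
    """Fit a 3D line with PCA; return the mean point-to-line distance."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal direction = right singular vector with the largest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    proj = np.outer(centered @ direction, direction)
    return np.linalg.norm(centered - proj, axis=1).mean()

def fit_circle(points):
    """Fit a plane, then a 2D circle (Kasa fit); return mean radial + out-of-plane error."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Plane normal = direction of least variance; u, v span the plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    u, v, normal = vt[0], vt[1], vt[2]
    x, y = centered @ u, centered @ v
    # Algebraic circle fit: x^2 + y^2 + a*x + b*y + c = 0.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    radius = np.sqrt(cx**2 + cy**2 - c)
    radial_err = np.abs(np.hypot(x - cx, y - cy) - radius)
    planar_err = np.abs(centered @ normal)
    return (radial_err + planar_err).mean()

def classify_trajectory(points, tol=1e-3):
    """Return 'linear', 'circular', or 'unknown' for an (N, 3) trajectory."""
    line_err, circle_err = fit_line(points), fit_circle(points)
    if min(line_err, circle_err) > tol:
        return "unknown"
    return "linear" if line_err <= circle_err else "circular"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A door-like swing: half-circle arc of radius 0.8 m in the XY plane, plus noise.
    t = np.linspace(0, np.pi, 100)
    arc = np.column_stack([0.8 * np.cos(t), 0.8 * np.sin(t), np.zeros_like(t)])
    arc += rng.normal(scale=1e-4, size=arc.shape)
    print(classify_trajectory(arc))  # expected: "circular"
```

Comparing residuals of competing geometric fits is a standard way to choose among candidate joint types; a helical model, as mentioned in the summary, would add a pitch parameter along the rotation axis.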