Virtual-Real Fusion with Dynamic Scene from Videos
Published in | 2016 International Conference on Cyberworlds (CW), pp. 65 - 72
---|---
Format | Conference Proceeding
Language | English
Published | IEEE, 01.09.2016
Summary: | In this paper, we introduce a method to augment a virtual environment with multiple videos. Our goal is a virtual-real fusion system that blends dynamic imagery with 3D models in a real-time display, helping observers visualize several dynamic videos simultaneously in the context of the 3D models. The 3D models of the virtual environment are reconstructed with multi-view vision methods, and video images are then registered to them through feature matching. Foreground objects in the videos cause distortions when frames are simply projected onto the static models, so these objects are detected and tracked, and then represented by simple 3D proxy models; both geometry and appearance are taken into account to recover the characteristics of the different objects. This paper focuses on the integration of these components into a prototype system, and the presented results show the benefits of a virtual-real fusion system. |
DOI: | 10.1109/CW.2016.17 |
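The registration step mentioned in the summary (matching features between a video frame and the reconstructed model, then projecting the imagery onto it) is not detailed in this record; for a single planar surface it reduces to estimating a 3x3 homography from point correspondences. A minimal NumPy sketch of that reduced case, using the direct linear transform with hypothetical corner correspondences (the point values below are illustrative, not from the paper):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4+ point
    pairs via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right singular
    # vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a 2D point (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical correspondences: video-frame corners vs. the texture
# coordinates of a planar facade in the 3D model.
frame = [(0, 0), (640, 0), (640, 480), (0, 480)]
model = [(10, 5), (90, 8), (88, 70), (12, 67)]
H = homography_dlt(frame, model)
print(warp_point(H, (0, 0)))  # ≈ (10.0, 5.0)
```

In practice the correspondences would come from an automatic feature matcher rather than hand-picked corners, and a robust estimator (e.g. RANSAC) would reject outlier matches; foreground objects violate the planarity assumption, which is exactly why the paper handles them with separate proxy models.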