Fusing surveillance videos and three‐dimensional scene: A mixed reality system

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 34, No. 1
Main Authors: Cui, Xiaoliang; Khan, Dawar; He, Zhenbang; Cheng, Zhanglin
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.01.2023

More Information
Summary: Augmented Virtual Environment (AVE), or virtual‐reality fusion, systems fuse dynamic videos with a static three‐dimensional (3D) model of a virtual environment, providing an effective solution for visualizing and understanding multichannel surveillance systems. However, texture distortion caused by viewpoint changes in such systems is a critical issue that needs to be addressed. To minimize texture fusion distortion, this paper presents a novel virtual environment system that dynamically fuses multiple surveillance videos with a virtual 3D scene in two phases, an offline phase and an online phase. In the offline phase, a static virtual environment is obtained by performing a 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through two optional strategies: one dynamically maps the images of the different videos onto the 3D model of the virtual environment, and the other extracts moving objects and represents them as billboards. The system can be used to visualize the 3D environment from any viewpoint, augmented by real‐time videos. Experiments and user studies in different scenarios demonstrate the superiority of our system.
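
The two online fusion strategies described in the summary (dynamically mapping video frames onto the reconstructed 3D model, and representing extracted moving objects as billboards) can be illustrated with a minimal Python/OpenCV sketch. This is not the authors' implementation: the camera intrinsics K and pose (R, t) are assumed to be available from the offline photogrammetric reconstruction, and the function names are hypothetical.

# Minimal, illustrative sketch of the two online fusion strategies
# described in the abstract (not the authors' code).
import numpy as np
import cv2


def project_vertices_to_uv(vertices, K, R, t, frame_shape):
    """Strategy 1: projective texture mapping.
    Projects 3D mesh vertices (N x 3, world coordinates) into the current
    video frame to obtain per-vertex texture coordinates, so the frame can
    be mapped onto the reconstructed model."""
    cam = R @ vertices.T + t.reshape(3, 1)      # world -> camera coordinates
    uv = (K @ cam).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective divide -> pixels
    h, w = frame_shape[:2]
    uv[:, 0] /= w                               # normalize to [0, 1] texture space
    uv[:, 1] /= h
    visible = cam[2] > 0                        # crude in-front-of-camera test
    return uv, visible


bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)


def extract_billboard(frame):
    """Strategy 2: moving-object billboards.
    Segments the foreground of a surveillance frame and returns the largest
    moving object as an RGBA sprite plus its bounding box; the sprite is
    then rendered in the 3D scene on a camera-facing quad."""
    mask = bg_subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    sprite = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2BGRA)
    sprite[:, :, 3] = mask[y:y + h, x:x + w]    # alpha channel from foreground mask
    return sprite, (x, y, w, h)

In a full AVE pipeline, the per-vertex UV coordinates from project_vertices_to_uv would be recomputed for each video frame to texture the reconstructed model with the live image, while the RGBA sprite returned by extract_billboard would be placed in the scene as a camera-facing billboard at the object's estimated ground position.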
Bibliography: Funding information
NSFC, Grant/Award Numbers: U21A20515, 61972388; Shenzhen Science and Technology Program, Grant/Award Numbers: JCYJ20180507182222355, GJHZ20210705141402008
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.2129