A Real‐Time Virtual‐Real Fusion Rendering Framework in Cloud‐Edge Environments
| Published in | Computer Animation and Virtual Worlds, Vol. 36, No. 4 |
| Format | Journal Article |
| Language | English |
| Published | Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.07.2025 |
Summary (Abstract):
This paper introduces a cloud‐edge collaborative framework for real‐time virtual‐real fusion rendering in augmented reality (AR). By integrating Visual Simultaneous Localization and Mapping (VSLAM) with Neural Radiance Fields (NeRF), the proposed method achieves high‐fidelity virtual object placement and shadow synthesis in real‐world scenes. The cloud server handles computationally intensive tasks, including offline NeRF‐based 3D reconstruction and online illumination estimation, while edge devices perform real‐time data acquisition, SLAM‐based plane detection, and rendering. To enhance realism, the system employs an improved soft shadow generation technique that dynamically adjusts shadow parameters based on light source information. Experimental results across diverse indoor environments demonstrate the system's effectiveness, with consistent real‐time performance, accurate illumination estimation, and high‐quality shadow rendering. The proposed method reduces the computational burden on edge devices, enabling immersive AR experiences on resource‐constrained hardware, such as mobile and wearable devices.
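The abstract's soft shadow technique adjusts shadow parameters from the estimated light source. As a minimal illustrative sketch (not the paper's actual method; the function name, the similar-triangles penumbra model, and the exponential softness mapping are all assumptions), one could derive a penumbra width from the light's size and the occluder-to-receiver geometry:

```python
import math

def soft_shadow_params(light_size, light_dist, occluder_dist):
    """Hypothetical sketch: penumbra width from similar triangles.

    The penumbra grows with the light's extent and with the gap
    between the occluder and the shadow receiver; a larger penumbra
    maps to a softer shadow edge.
    """
    gap = light_dist - occluder_dist          # occluder-to-receiver distance
    penumbra = light_size * gap / max(occluder_dist, 1e-6)
    softness = 1.0 - math.exp(-penumbra)      # squash into [0, 1)
    return penumbra, softness
```

Such a mapping would let the renderer widen or tighten the shadow blur per frame as the online illumination estimate updates the light's position and extent.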
This study presents an edge‐cloud architecture for real‐time AR fusion with realistic lighting and shadows. Algorithm: ORB‐SLAM3 VSLAM for poses and features; NeuS‐NeRF mesh extraction; depth‐albedo learning for cuboid light estimation and dynamic soft shadows. System: edge devices handle capture, SLAM, plane detection, and WebRTC‐based fusion; the cloud manages NeRF training and illumination estimation for low‐latency AR glasses. Results: immersive AR with 20% improved shadow accuracy and <50 ms latency in complex scenes.
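The cloud-edge split described above can be summarized as a simple task-routing table. This is a hedged sketch of the described division of labor only (the task names and `route` helper are assumptions, not the paper's API):

```python
# Hypothetical encoding of the cloud-edge task split from the summary:
# heavy offline/online inference runs in the cloud, latency-critical
# per-frame work runs on the edge device.
CLOUD_TASKS = {"nerf_reconstruction", "illumination_estimation"}
EDGE_TASKS = {"capture", "slam_tracking", "plane_detection", "render_fusion"}

def route(task: str) -> str:
    """Return which tier ('cloud' or 'edge') runs a pipeline stage."""
    if task in CLOUD_TASKS:
        return "cloud"
    if task in EDGE_TASKS:
        return "edge"
    raise ValueError(f"unknown task: {task}")
```

The design intent matches the stated goal: keeping NeRF training and illumination estimation off-device reduces the computational burden on mobile and wearable hardware.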
| ISSN | 1546-4261, 1546-427X |
| DOI | 10.1002/cav.70049 |