GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting


Bibliographic Details
Published in: Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 19595 - 19604
Main Authors: Yan, Chi; Qu, Delin; Xu, Dan; Zhao, Bin; Wang, Zhigang; Wang, Dong; Li, Xuelong
Format: Conference Proceeding
Language: English
Published: IEEE, 16.06.2024

Summary: In this paper, we introduce GS-SLAM, the first system to utilize a 3D Gaussian representation in Simultaneous Localization and Mapping (SLAM). It facilitates a better balance between efficiency and accuracy. Compared to recent SLAM methods employing neural implicit representations, our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedups in map optimization and RGB-D rendering. Specifically, we propose an adaptive expansion strategy that adds new or deletes noisy 3D Gaussians in order to efficiently reconstruct newly observed scene geometry and improve the mapping of previously observed areas. This strategy is essential for extending the 3D Gaussian representation to reconstruct whole scenes, rather than synthesizing static objects as in existing methods. Moreover, in the pose tracking process, an effective coarse-to-fine technique is designed to select reliable 3D Gaussian representations to optimize the camera pose, resulting in reduced runtime and robust estimation. Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets. Project page: https://gs-slam.github.io/.
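The adaptive expansion strategy described in the summary can be pictured as two map-maintenance operations: adding Gaussians where a new observation is poorly explained by the current map, and deleting Gaussians whose opacity has decayed toward transparency. The sketch below is a minimal illustration of that idea only; the class names, fields, and thresholds are hypothetical and do not come from the paper's implementation.

```python
# Hypothetical sketch of adaptive expansion/pruning for a 3D Gaussian map.
# All names and thresholds are illustrative, not from GS-SLAM itself.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Gaussian3D:
    position: Tuple[float, float, float]  # center of the Gaussian in world space
    opacity: float                        # learned opacity in [0, 1]

@dataclass
class GaussianMap:
    gaussians: List[Gaussian3D] = field(default_factory=list)

    def expand(self, points, depth_errors, error_thresh=0.05):
        """Add a Gaussian at each back-projected point whose rendered
        depth error exceeds the threshold, i.e. where the current map
        fails to explain newly observed geometry."""
        for point, err in zip(points, depth_errors):
            if err > error_thresh:
                self.gaussians.append(Gaussian3D(position=point, opacity=0.5))

    def prune(self, opacity_thresh=0.1):
        """Delete noisy Gaussians whose optimized opacity has fallen
        below the threshold, keeping the map compact."""
        self.gaussians = [g for g in self.gaussians
                          if g.opacity >= opacity_thresh]
```

In a SLAM loop, `expand` would run after rendering each new RGB-D frame and `prune` after each round of map optimization, so the map grows only where coverage is missing and sheds Gaussians that optimization has driven toward transparency.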
ISSN:1063-6919
DOI:10.1109/CVPR52733.2024.01853