LiDGS: An efficient 3D reconstruction framework integrating LiDAR point clouds and multi-view images for enhanced geometric fidelity
| Published in | International Journal of Applied Earth Observation and Geoinformation, Vol. 142, p. 104730 |
|---|---|
| Main Authors | , , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.08.2025 |
Summary:

- A new method combines LiDAR point clouds and multi-view images for 3D reconstruction.
- Dense depth maps generated from LiDAR point clouds improve reconstruction accuracy.
- An adaptive Gaussian densification strategy improves geometric fidelity in 3D models.
- Depth regularization refines estimation, ensuring consistent depth across viewpoints.
Multi-view reconstruction of real-world scenes is an important and challenging task. Although methods based on Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have made significant progress in rendering quality, limitations remain in the fidelity of reconstructed geometric structures. To address this challenge, we propose LiDGS, a novel 3D reconstruction approach within the 3DGS framework that integrates LiDAR point clouds and multi-view images. LiDGS achieves high-fidelity 3D scene reconstruction by introducing high-precision geometric priors and multiple geometric constraints derived from LiDAR point clouds, while guaranteeing efficient and accurate scene rendering. Specifically, we adopt adaptive checkerboard sampling and multi-hypothesis joint view selection (ACMP) for whole-image depth propagation, generating a high-precision dense depth map that provides continuous and accurate depth prior constraints for Gaussian optimization. We then design an adaptive Gaussian densification strategy that guides the geometric structure of the 3D scene through geometric anchors and adaptively adjusts the number and volume of Gaussians to characterize object surfaces more finely. Finally, we introduce a depth regularization method that corrects the depth estimate of each Gaussian, ensuring consistent depth information across viewpoints and, in turn, improving reconstruction quality. Experimental results show that the method achieves superior performance on both the novel view synthesis task and the 3D reconstruction task, outperforming other classical methods. Our source code will be published at https://github.com/SongJiang-WHU/LiDGS.
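The depth regularization described in the abstract penalizes disagreement between the depth rendered from the optimized Gaussians and the LiDAR-derived dense depth prior. The paper's exact formulation is not given in this record, so the following is only a minimal sketch of such a depth-prior term: an L1 penalty between a rendered depth map and the LiDAR prior, evaluated where the prior is valid. All names (`depth_prior_loss`, `valid_mask`) are illustrative, not the authors' API.

```python
import numpy as np

def depth_prior_loss(rendered_depth, lidar_depth, valid_mask):
    """Illustrative L1 depth-prior penalty (a sketch, not the paper's loss).

    rendered_depth : depth map splatted from the current Gaussians (H x W)
    lidar_depth    : dense depth prior propagated from LiDAR points (H x W)
    valid_mask     : 1.0 where the LiDAR prior is trusted, 0.0 elsewhere
    """
    n_valid = valid_mask.sum()
    if n_valid == 0:
        # No valid prior pixels in this view: contribute no penalty.
        return 0.0
    # Mean absolute depth error over valid pixels only.
    diff = np.abs(rendered_depth - lidar_depth)
    return float((diff * valid_mask).sum() / n_valid)

# Toy example: one masked-out pixel is excluded from the average.
rendered = np.array([[1.0, 2.0], [3.0, 4.0]])
prior    = np.array([[1.5, 2.0], [2.0, 4.0]])
mask     = np.array([[1.0, 1.0], [0.0, 1.0]])
loss = depth_prior_loss(rendered, prior, mask)  # mean of {0.5, 0.0, 0.0}
```

In a full pipeline, a term of this kind would be weighted and added to the photometric rendering loss during Gaussian optimization, so that per-Gaussian depths stay consistent with the LiDAR prior across viewpoints.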
ISSN: 1569-8432

DOI: 10.1016/j.jag.2025.104730