LiDGS: An efficient 3D reconstruction framework integrating lidar point clouds and multi-view images for enhanced geometric fidelity
Published in | International Journal of Applied Earth Observation and Geoinformation, Vol. 142, p. 104730 |
---|---|
Main Authors | Yan, Li; Song, Jiang; Xie, Hong; Wei, Pengcheng; Li, Gang; Zhu, Longze; Fan, Zhongli; Gong, Shucheng |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 01.08.2025 |
Subjects | 3D Gaussian splatting; 3D reconstruction; Depth prior; Depth regularization; Geometric anchors; Novel view synthesis |
Abstract | •A new method combines LiDAR point clouds and multi-view images for 3D reconstruction.
•Dense depth maps generated from LiDAR point clouds improve reconstruction accuracy.
•An adaptive Gaussian densification strategy improves geometric fidelity in 3D models.
•Depth regularization refines depth estimation, ensuring consistent depth across viewpoints.

Multi-view reconstruction of real-world scenes is an important and challenging task. Although methods based on Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have made significant progress in rendering quality, the fidelity of their reconstructed geometric structures remains limited. To address this challenge, we propose LiDGS, a novel 3D reconstruction approach within the 3DGS framework that integrates LiDAR point clouds and multi-view images. LiDGS achieves high-fidelity 3D scene reconstruction by introducing high-precision a priori geometric information and multiple geometric constraints derived from LiDAR point clouds, while guaranteeing efficient and accurate scene rendering. Specifically, we adopt adaptive checkerboard sampling and multi-hypothesis joint view selection (ACMP) for whole-image depth propagation, generating high-precision dense depth maps that provide continuous and accurate depth prior constraints for Gaussian optimization. We then design an adaptive Gaussian densification strategy that guides the geometric structure of the 3D scene through geometric anchors and adaptively adjusts the number and volume of Gaussians to characterize object surfaces more finely. Finally, we introduce a depth regularization method that corrects the depth estimate of each Gaussian, ensuring consistent depth information across viewpoints and, in turn, improving reconstruction quality. Experimental results show that LiDGS achieves superior performance on both the novel view synthesis and 3D reconstruction tasks, outperforming other classical methods. Our source code will be published at https://github.com/SongJiang-WHU/LiDGS. |
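The abstract outlines three technical components: a LiDAR-derived dense depth prior, anchor-guided Gaussian densification, and depth regularization. The two sketches below illustrate the general flavour of the latter two ideas in plain PyTorch; they are minimal illustrations only, not the authors' LiDGS implementation, and every function and parameter name (depth_regularized_loss, anchor_gated_densify_mask, prior_mask, lambda_depth, grad_thresh, dist_thresh) is a hypothetical placeholder.

```python
import torch
import torch.nn.functional as F

def depth_regularized_loss(rendered_rgb, gt_rgb, rendered_depth, depth_prior,
                           prior_mask, lambda_depth=0.1):
    """Sketch: add an L1 depth term, measured against a dense LiDAR-derived
    depth prior, to a photometric rendering loss. `prior_mask` is 1 where the
    propagated prior is trusted and 0 where it should be ignored."""
    # Photometric term (plain L1 here; full 3DGS training also adds D-SSIM).
    l_photo = F.l1_loss(rendered_rgb, gt_rgb)
    # Depth term: mean absolute deviation from the prior over valid pixels.
    depth_err = torch.abs(rendered_depth - depth_prior) * prior_mask
    l_depth = depth_err.sum() / prior_mask.sum().clamp(min=1.0)
    return l_photo + lambda_depth * l_depth
```

A similar sketch, under the same caveats, of how geometric anchors from a LiDAR point cloud could gate which Gaussians get cloned or split during densification:

```python
import torch

def anchor_gated_densify_mask(gauss_xyz, anchor_xyz, view_grad,
                              grad_thresh=2e-4, dist_thresh=0.05):
    """Sketch: mark Gaussians for densification only when their accumulated
    view-space positional gradient is large AND they lie near a LiDAR-derived
    geometric anchor, so new Gaussians appear close to real surfaces."""
    # Brute-force nearest-anchor distance; a KD-tree or chunked search would
    # be needed at scene scale.
    nearest = torch.cdist(gauss_xyz, anchor_xyz).min(dim=1).values  # (N,)
    return (view_grad >= grad_thresh) & (nearest <= dist_thresh)
```

In a 3DGS-style training loop these pieces would slot into the per-iteration loss and the periodic densification step, respectively; the thresholds shown are placeholders, not values reported by the paper.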
ArticleNumber | 104730 |
Author | Song, Jiang; Gong, Shucheng; Yan, Li; Wei, Pengcheng; Li, Gang; Xie, Hong; Fan, Zhongli; Zhu, Longze |
Authors and affiliations |
– Li Yan, School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Jiang Song (ORCID: 0009-0006-1071-2636; song_jiang@whu.edu.cn), School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Hong Xie (ORCID: 0000-0002-0956-0421), School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Pengcheng Wei, School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Gang Li, School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Longze Zhu, School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China
– Zhongli Fan, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
– Shucheng Gong, School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Rd, Wuhan 430079, China |
ContentType | Journal Article |
Copyright | 2025 |
DOI | 10.1016/j.jag.2025.104730 |
Discipline | Engineering; Environmental Sciences |
ISSN | 1569-8432 |
Keywords | Depth prior; Depth regularization; 3D reconstruction; Geometric anchors; 3D Gaussian splatting; Novel view synthesis |
License | This is an open access article under the CC BY license. |
OpenAccessLink | https://www.sciencedirect.com/science/article/pii/S1569843225003772 |
URI | https://dx.doi.org/10.1016/j.jag.2025.104730 https://doaj.org/article/f071444b4ad94f0a8ad22f3288b9df41 |