Representation learning of point cloud upsampling in global and local inputs

Bibliographic Details
Published in: Computer Vision and Image Understanding, Vol. 260, p. 104467
Main Authors: Zhang, Tongxu; Wang, Bei
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.10.2025
More Information
Summary: In recent years, point cloud upsampling has been widely applied in tasks such as 3D reconstruction and object recognition. This study proposed a novel framework, ReLPU, which enhances upsampling performance by explicitly learning from both the global and the local structural features of point clouds. Specifically, we extracted global features from uniformly segmented inputs (Average Segments) and local features from patch-based inputs of the same point cloud. These two types of features were processed through parallel autoencoders, fused, and then fed into a shared decoder for upsampling. This dual-input design improved feature completeness and cross-scale consistency, especially in sparse and noisy regions. Our framework was applied to several state-of-the-art autoencoder-based networks and validated on standard datasets. Experimental results demonstrated consistent improvements in geometric fidelity and robustness. In addition, saliency maps confirmed that parallel global–local learning significantly enhanced the interpretability and performance of point cloud upsampling.

Highlights:
• ReLPU: a new framework with parallel encoders for local and global features.
• Gradient contributions explain local–global feature roles in point cloud upsampling.
• Experiments on PU1K and ABC show ReLPU outperforms prior state-of-the-art models.
ISSN: 1077-3142
DOI: 10.1016/j.cviu.2025.104467
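
The dual-input design described in the summary (parallel encoders for a globally segmented input and a local patch of the same cloud, feature fusion, and a shared decoder) can be illustrated with a minimal sketch. This is not the authors' released code: the module names (PointEncoder, DualInputUpsampler), layer sizes, and the simple concatenation-based fusion are assumptions chosen only to show the data flow the abstract describes.

# Hypothetical PyTorch sketch of a dual-input (global + local) point cloud upsampler.
# All architectural details here are illustrative assumptions, not the ReLPU implementation.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Shared-MLP encoder: (B, N, 3) points -> (B, C) feature via max-pooling."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, out_dim, 1),
        )

    def forward(self, pts):                    # pts: (B, N, 3)
        feat = self.mlp(pts.transpose(1, 2))   # (B, C, N)
        return feat.max(dim=2).values          # permutation-invariant pooling -> (B, C)

class DualInputUpsampler(nn.Module):
    """Fuses a global code and a local code, then decodes r*N upsampled points."""
    def __init__(self, n_points=256, ratio=4, feat_dim=256):
        super().__init__()
        self.global_enc = PointEncoder(feat_dim)   # sees a uniformly segmented input
        self.local_enc = PointEncoder(feat_dim)    # sees a patch of the same cloud
        self.decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_points * ratio * 3),
        )
        self.n_out = n_points * ratio

    def forward(self, segment_pts, patch_pts):
        g = self.global_enc(segment_pts)           # global structural code
        l = self.local_enc(patch_pts)              # local detail code
        fused = torch.cat([g, l], dim=1)           # fusion by concatenation (assumed)
        return self.decoder(fused).view(-1, self.n_out, 3)

# Usage with random stand-in data: two inputs of 256 points each, 4x upsampling.
model = DualInputUpsampler()
dense = model(torch.rand(2, 256, 3), torch.rand(2, 256, 3))
print(dense.shape)  # torch.Size([2, 1024, 3])

The point of the sketch is the parallel-encoder topology: both encoders see the same underlying shape at different scopes, and only their fused code reaches the shared decoder, which is the mechanism the abstract credits for improved feature completeness and cross-scale consistency.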