Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics, Vol. 27, No. 3, pp. 2015-2027
Main Authors: Zhang, Dongbo; Lu, Xuequan; Qin, Hong; He, Ying
Format: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2021

Summary: Point cloud filtering is a fundamental problem in geometry modeling and processing. Despite significant advancements in recent years, existing methods still suffer from two issues: 1) they are either designed without sharp-feature preservation or are less robust in preserving features; and 2) they usually have many parameters and require tedious parameter tuning. In this article, we propose a novel deep learning approach that automatically and robustly filters point clouds by removing noise while preserving sharp features. Our point-wise learning architecture consists of an encoder and a decoder. The encoder directly takes points (a point and its neighbors) as input and learns a latent representation vector, which the decoder then maps to a displacement vector that relates the noisy point to its ground-truth position. The trained neural network can automatically generate a set of clean points from a noisy input. Extensive experiments show that our approach outperforms state-of-the-art deep learning techniques in terms of both visual quality and quantitative error metrics. The source code and dataset can be found at https://github.com/dongbo-BUAA-VR/Pointfilter.
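
To make the described pipeline concrete, below is a minimal PyTorch sketch of the encoder-decoder idea from the abstract: a shared per-point encoder with symmetric pooling produces a latent vector for a noisy point's neighborhood, and an MLP decoder regresses a displacement that moves the point toward the clean surface. The class name, layer sizes, and pooling choice here are illustrative assumptions, not the authors' exact network (see the linked repository for that).

```python
import torch
import torch.nn as nn

class PointfilterSketch(nn.Module):
    # Hypothetical sketch, not the paper's exact architecture.
    def __init__(self, latent_dim=256):
        super().__init__()
        # Shared per-point MLP applied to every neighbor (PointNet-style encoder).
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Decoder: regress a 3D displacement from the pooled latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Tanh(),  # bounded displacement
        )

    def forward(self, patch):
        # patch: (B, k, 3) neighbors of a noisy point, expressed in the
        # point's local frame (the noisy point itself sits at the origin).
        feat = self.encoder(patch.transpose(1, 2))  # (B, latent_dim, k)
        latent = feat.max(dim=2).values             # order-invariant pooling
        return self.decoder(latent)                 # (B, 3) displacement

# Usage: filter each noisy point by adding its predicted displacement.
net = PointfilterSketch()
noisy_pts = torch.rand(8, 3)                                # dummy query points
patches = noisy_pts[:, None, :] + 0.05 * torch.randn(8, 64, 3)
centered = patches - noisy_pts[:, None, :]                  # move to local frame
denoised = noisy_pts + net(centered)                        # (8, 3) clean estimate
```

During training, the predicted displacement would be supervised against the ground-truth position (the paper additionally uses a feature-preserving loss); that training objective is omitted from this sketch.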
ISSN: 1077-2626, 1941-0506
DOI: 10.1109/TVCG.2020.3027069