Masked Generative Extractor for Synergistic Representation and 3D Generation of Point Clouds


Bibliographic Details
Published in: arXiv.org
Main Authors: Zeng, Hongliang; Zhang, Ping; Li, Fang; Wang, Jiahua; Ye, Tingyu; Guo, Pengteng
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 15.08.2024

Summary: Representation and generative learning, as reconstruction-based methods, have demonstrated their potential for mutual reinforcement across various domains. In the field of point cloud processing, although existing studies have adopted training strategies from generative models to enhance representational capabilities, these methods are limited by their inability to genuinely generate 3D shapes. To explore the benefits of deeply integrating 3D representation learning and generative learning, we propose an innovative framework called Point-MGE. Specifically, this framework first utilizes a vector-quantized variational autoencoder to reconstruct a neural field representation of 3D shapes, thereby learning discrete semantic features of point patches. Subsequently, we design a sliding masking ratio to smooth the transition from representation learning to generative learning. Moreover, our method demonstrates strong generalization capability in learning high-capacity models, achieving new state-of-the-art performance across multiple downstream tasks. In shape classification, Point-MGE achieved an accuracy of 94.2% (+1.0%) on the ModelNet40 dataset and 92.9% (+5.5%) on the ScanObjectNN dataset. Experimental results also confirmed that Point-MGE can generate high-quality 3D shapes in both unconditional and conditional settings.
ISSN: 2331-8422
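
Note: The sliding masking ratio described in the abstract can be illustrated with a small sketch. The snippet below is a minimal, assumed implementation in Python, not the paper's actual code: the start/end ratios, the linear interpolation, and the function names are illustrative assumptions, since the record gives no details of the schedule. It linearly slides the fraction of masked point-patch tokens from a partial-masking, representation-learning regime toward the near-total masking typical of generative modeling.

    import numpy as np

    def sliding_mask_ratio(step, total_steps, start_ratio=0.6, end_ratio=1.0):
        # Hypothetical schedule: linearly slide the masking ratio over training
        # (the exact schedule used by Point-MGE is not given in this record).
        t = min(max(step / max(total_steps, 1), 0.0), 1.0)
        return start_ratio + t * (end_ratio - start_ratio)

    def sample_mask(num_patches, ratio, rng):
        # Randomly mark `ratio` of the discrete point-patch tokens as masked.
        num_masked = int(round(ratio * num_patches))
        mask = np.zeros(num_patches, dtype=bool)
        idx = rng.choice(num_patches, size=num_masked, replace=False)
        mask[idx] = True
        return mask

    # Example: 64 point patches; the ratio slides from 60% to 100% masking.
    rng = np.random.default_rng(0)
    for step in (0, 5000, 10000):
        r = sliding_mask_ratio(step, total_steps=10000)
        print(step, round(r, 2), int(sample_mask(64, r, rng).sum()))

Under these assumptions, early training resembles masked representation learning (a fixed fraction of patch tokens hidden), while late training approaches full-mask prediction, i.e. generation of all tokens from scratch.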