The shape variational autoencoder: A deep generative model of part‐segmented 3D objects

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 36, No. 5, pp. 1-12
Main Authors: Nash, C.; Williams, C. K. I.
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.08.2017
More Information
Summary: We introduce a generative model of part‐segmented 3D objects: the shape variational auto‐encoder (ShapeVAE). The ShapeVAE describes a joint distribution over the existence of object parts, the locations of a dense set of surface points, and the surface normals associated with these points. Our model uses a deep encoder‐decoder architecture that leverages the part‐decomposability of 3D objects to embed high‐dimensional shape representations and sample novel instances. Given an input collection of part‐segmented objects with dense point correspondences, the ShapeVAE can synthesize novel, realistic shapes, and through conditional inference it can impute missing parts or surface normals. In addition, by generating both points and surface normals, our model allows powerful surface‐reconstruction methods to be used for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape‐completion and test‐set log‐likelihood tasks and demonstrate that the model performs favourably against strong baselines. We show qualitatively that the ShapeVAE produces plausible shape samples and captures a semantically meaningful shape embedding. We also show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.
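
The summary describes a VAE-style encoder‐decoder over surface points, surface normals, and part‐existence indicators. The sketch below is a minimal, generic illustration of that idea in PyTorch, not the authors' architecture: the flat MLP encoder/decoder, the dimensions (n_points, n_parts, latent_dim), and the fixed-variance Gaussian and Bernoulli likelihoods are all assumptions made for illustration, and the paper's part‐decomposed structure is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeVAESketch(nn.Module):
    """Illustrative VAE over flattened surface points, normals, and part-existence bits.

    Hypothetical layout: n_points surface points per shape (3 coordinates + 3 normal
    components each, flattened) plus n_parts binary existence indicators.
    This is a generic sketch, not the ShapeVAE of Nash & Williams.
    """
    def __init__(self, n_points=1000, n_parts=6, latent_dim=64, hidden=512):
        super().__init__()
        self.x_dim = n_points * 6                     # xyz + normal per point, flattened
        self.n_parts = n_parts
        in_dim = self.x_dim + n_parts
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.enc_mu = nn.Linear(hidden, latent_dim)
        self.enc_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.dec_x = nn.Linear(hidden, self.x_dim)    # points + normals (Gaussian mean)
        self.dec_e = nn.Linear(hidden, n_parts)       # part-existence logits (Bernoulli)

    def encode(self, x, e):
        h = self.enc(torch.cat([x, e], dim=-1))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick: z = mu + sigma * eps
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.dec(z)
        return self.dec_x(h), self.dec_e(h)

    def loss(self, x, e):
        # Negative evidence lower bound: reconstruction terms + KL to the prior
        mu, logvar = self.encode(x, e)
        z = self.reparameterize(mu, logvar)
        x_hat, e_logits = self.decode(z)
        recon_x = F.mse_loss(x_hat, x, reduction='sum')                              # Gaussian likelihood (fixed variance)
        recon_e = F.binary_cross_entropy_with_logits(e_logits, e, reduction='sum')   # Bernoulli likelihood
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())                 # KL(q(z|x,e) || N(0, I))
        return recon_x + recon_e + kl


# Hypothetical usage: compute the loss on a fake batch, then sample novel shapes from the prior.
model = ShapeVAESketch()
x = torch.randn(8, model.x_dim)                                 # fake flattened points + normals
e = torch.bernoulli(torch.full((8, model.n_parts), 0.8))        # fake part-existence indicators
loss = model.loss(x, e)
loss.backward()

with torch.no_grad():
    z = torch.randn(4, 64)                                      # latents from the standard-normal prior
    points_normals, part_logits = model.decode(z)
    part_exists = torch.sigmoid(part_logits) > 0.5
```

Conditional imputation of missing parts or normals, as described in the summary, would additionally require inference over the unobserved dimensions and is not shown in this sketch.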
ISSN: 0167-7055, 1467-8659
DOI: 10.1111/cgf.13240