Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 28.02.2020 |
Summary: Recent work has shown the ability to learn generative models for 3D shapes from only unstructured 2D images. However, training such models requires differentiating through the rasterization step of the rendering process, so past work has focused on developing bespoke rendering models that smooth over this non-differentiable step in various ways. Such models are therefore unable to take advantage of the photo-realistic, fully featured industrial renderers built by the gaming and graphics industries. In this paper we introduce the first scalable training technique for 3D generative models from 2D data that uses an off-the-shelf non-differentiable renderer. To account for the non-differentiability, we introduce a proxy neural renderer trained to match the output of the non-differentiable renderer. We further propose discriminator output matching to ensure that the neural renderer learns to smooth over the rasterization appropriately. We evaluate our model on images rendered from the generated 3D shapes and show that it consistently learns to generate better shapes than existing models when trained exclusively on unstructured 2D images.
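
The first idea in the abstract, a differentiable proxy renderer trained to match a black-box renderer, can be illustrated with a short sketch. This is a minimal illustration under assumed conventions (voxelized shapes, a toy fully connected network); `ProxyRenderer`, `external_render`, and all resolutions are hypothetical placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ProxyRenderer(nn.Module):
    """Differentiable stand-in trained to mimic a black-box renderer."""
    def __init__(self, voxel_res=32, image_res=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(voxel_res ** 3, 1024), nn.ReLU(),
            nn.Linear(1024, image_res * image_res), nn.Sigmoid(),
        )
        self.image_res = image_res

    def forward(self, voxels):
        img = self.net(voxels)
        return img.view(-1, 1, self.image_res, self.image_res)

def proxy_matching_loss(proxy, voxels, external_render):
    """L2 match between the proxy's output and the black-box render.

    `external_render` (hypothetical) wraps the off-the-shelf renderer and
    returns images as tensors; its rasterization step is never
    differentiated through.
    """
    with torch.no_grad():
        target = external_render(voxels)           # non-differentiable path
    return ((proxy(voxels) - target) ** 2).mean()  # gradients reach proxy only
```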
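Discriminator output matching can be sketched on top of this. One plausible reading of the abstract is that, instead of matching pixels alone, the proxy is trained so that the GAN discriminator scores its output the same as the true render, which pushes the proxy to smooth rasterization in a way that preserves the training signal. This is a hedged sketch reusing the hypothetical names above; `disc` is any image discriminator, and the exact loss form is an assumption.

```python
def discriminator_output_matching_loss(disc, proxy, voxels, external_render):
    """Match the discriminator's score on the proxy render to its score
    on the true (black-box) render, rather than matching raw pixels."""
    with torch.no_grad():
        true_score = disc(external_render(voxels))  # black-box path, no gradients
    proxy_score = disc(proxy(voxels))               # differentiable path
    return ((proxy_score - true_score) ** 2).mean()
```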
DOI: 10.48550/arxiv.2002.12674