A Layer-Based Sequential Framework for Scene Generation with GANs
Format | Journal Article |
---|---|
Language | English |
Published | 02.02.2019 |
Summary: | The visual world we sense, interpret, and interact with every day is a
complex composition of interleaved physical entities. Generating vivid scenes
of comparable complexity with computers is therefore a very challenging task.
In this work, we present a scene generation framework based on Generative
Adversarial Networks (GANs) that composes a scene sequentially, breaking the
underlying problem down into smaller ones. Unlike existing approaches, our
framework offers explicit control over the elements of a scene through separate
background and foreground generators. Starting from an initially generated
background, foreground objects then populate the scene one by one in a
sequential manner. Through quantitative and qualitative experiments on a subset
of the MS-COCO dataset, we show that our proposed framework not only produces
more diverse images but also copes better with affine transformations and
occlusion artifacts of foreground objects than its counterparts. |
DOI: | 10.48550/arxiv.1902.00671 |