Learning Controllable Face Generator from Disjoint Datasets

Bibliographic Details
Published in Computer Analysis of Images and Patterns, pp. 209-223
Main Authors Li, Jing; Wong, Yongkang; Sim, Terence
Format Book Chapter
Language English
Published Cham: Springer International Publishing
Series Lecture Notes in Computer Science

More Information
Summary: Recently, GANs have become popular for synthesizing photorealistic facial images with desired facial attributes. However, crucial to the success of such networks is the availability of large-scale datasets that are fully-attributed, i.e., datasets in which the Cartesian product of all attribute values is present, as otherwise the learning becomes skewed. Such fully-attributed datasets are impractically expensive to collect. Many existing datasets are only partially-attributed, and do not have any subjects in common. It thus becomes important to be able to jointly learn from such datasets. In this paper, we propose a GAN-based facial image generator that can be trained on partially-attributed disjoint datasets. The key idea is to use a smaller, fully-attributed dataset to bridge the learning. Our generator (i) provides independent control of multiple attributes, and (ii) renders photorealistic facial images with target attributes.
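
Note: The following is a minimal illustrative sketch (not the authors' code) of the idea described in the summary: one attribute-conditioned generator is trained on two partially-attributed, disjoint datasets (each labelled for a different attribute) together with a small fully-attributed "bridge" set, using masked attribute losses so that only labelled attributes contribute. All names, network sizes, and loss choices are assumptions made for illustration.

# Illustrative PyTorch sketch, assuming an auxiliary-classifier-style setup;
# the paper's actual architecture and losses may differ.
import torch
import torch.nn as nn

Z_DIM, N_ATTRS, IMG_DIM = 64, 2, 32 * 32 * 3

class Generator(nn.Module):
    """Maps (noise, attribute vector) -> flattened image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_ATTRS, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z, attrs):
        return self.net(torch.cat([z, attrs], dim=1))

class Discriminator(nn.Module):
    """Real/fake score plus per-attribute predictions."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU())
        self.adv_head = nn.Linear(256, 1)
        self.attr_head = nn.Linear(256, N_ATTRS)
    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.attr_head(h)

def masked_attr_loss(pred, target, mask):
    """BCE over only the attributes that are actually labelled in a batch."""
    loss = nn.functional.binary_cross_entropy_with_logits(pred, target, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

def toy_batch(n, known):
    """Stand-in for a real DataLoader: random images, labels, and label mask."""
    imgs = torch.rand(n, IMG_DIM) * 2 - 1
    attrs = torch.randint(0, 2, (n, N_ATTRS)).float()
    mask = torch.zeros(n, N_ATTRS)
    mask[:, known] = 1.0          # only these attribute columns are labelled
    return imgs, attrs, mask

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.functional.binary_cross_entropy_with_logits

for step in range(3):  # toy loop; a real run would iterate over the actual datasets
    batches = [toy_batch(8, known=[0]),      # dataset A: attribute 0 labelled only
               toy_batch(8, known=[1]),      # dataset B: attribute 1 labelled only
               toy_batch(4, known=[0, 1])]   # small fully-attributed bridge set
    # Discriminator update: real/fake loss plus masked attribute classification.
    opt_d.zero_grad()
    d_loss = 0.0
    for imgs, attrs, mask in batches:
        adv_real, attr_real = D(imgs)
        z = torch.randn(imgs.size(0), Z_DIM)
        adv_fake, _ = D(G(z, attrs).detach())
        d_loss = d_loss + bce(adv_real, torch.ones_like(adv_real)) \
                        + bce(adv_fake, torch.zeros_like(adv_fake)) \
                        + masked_attr_loss(attr_real, attrs, mask)
    d_loss.backward(); opt_d.step()
    # Generator update: fool D and realise any requested attribute combination.
    opt_g.zero_grad()
    z = torch.randn(16, Z_DIM)
    want = torch.randint(0, 2, (16, N_ATTRS)).float()
    adv, attr = D(G(z, want))
    g_loss = bce(adv, torch.ones_like(adv)) + masked_attr_loss(attr, want, torch.ones_like(want))
    g_loss.backward(); opt_g.step()

The masked loss is what lets the two disjoint datasets share one generator, while the small bridge batch supplies examples where both attributes are labelled at once, anchoring their joint behaviour.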
Bibliography: This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Strategic Capability Research Centres Funding Initiative.
ISBN: 9783030298876; 3030298876
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-29888-3_17