Pose Manipulation with Identity Preservation
| Published in | International journal of computers, communications & control, Vol. 15, no. 2 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Oradea: Agora University of Oradea, 01.04.2020 |
| Summary | This paper describes a new model that generates images in novel poses, e.g. by altering facial expression and orientation, from just a few instances of a human subject. Unlike previous approaches, which require large datasets of a specific person for training, our approach can start from a scarce set of images, even a single image. To this end, we introduce the Character Adaptive Identity Normalization GAN (CainGAN), which uses spatial characteristic features extracted by an embedder and combined across source images. The identity information is propagated throughout the network by applying conditional normalization. After extensive adversarial training, CainGAN receives face images of a certain individual and produces new ones while preserving the person's identity. Experimental results show that the quality of generated images scales with the size of the input set used during inference. Furthermore, quantitative measurements indicate that CainGAN outperforms other methods when training data is limited. |
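The abstract states that identity information is propagated through the network "by applying conditional normalization", but this record does not include the paper's exact formulation. As a general illustration only: conditional normalization typically means normalizing each feature channel, then re-scaling and re-shifting it with affine parameters predicted from a conditioning vector (here, the identity embedding). The sketch below assumes this common AdaIN-style variant; all names, shapes, and projection matrices are illustrative assumptions, not CainGAN's actual layers.

```python
import numpy as np

def conditional_instance_norm(x, identity_emb, W_gamma, b_gamma,
                              W_beta, b_beta, eps=1e-5):
    """Generic conditional (instance) normalization sketch.

    x:            feature maps, shape (C, H, W)
    identity_emb: identity embedding vector, shape (D,)
    W_gamma, W_beta: assumed (C, D) projections; b_gamma, b_beta: (C,) biases
    """
    # Instance normalization: per-channel statistics over spatial dims
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)

    # The identity embedding conditions the affine parameters,
    # injecting identity information into every normalized layer
    gamma = (W_gamma @ identity_emb + b_gamma)[:, None, None]
    beta = (W_beta @ identity_emb + b_beta)[:, None, None]
    return gamma * x_norm + beta
```

Repeating such a layer throughout a generator is one standard way to make every stage of synthesis depend on the conditioning signal, which matches the abstract's description of identity propagation at a high level.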
| ISSN | 1841-9836; 1841-9844 |
| DOI | 10.15837/ijccc.2020.2.3862 |