Emotional facial expression transfer from a single image via generative adversarial nets

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 29, No. 3-4
Main Authors: Qiao, Fengchun; Yao, Naiming; Jiao, Zirui; Li, Zhihao; Chen, Hui; Wang, Hongan
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.05.2018

Summary: Facial expression transfer from a single image is a challenging task and has drawn sustained attention in the fields of computer vision and computer graphics. Recently, generative adversarial nets (GANs) have provided a new approach to facial expression transfer from a single image toward target facial expressions. However, it is still difficult to obtain a sequence of smoothly changing facial expressions. We present a novel GAN-based method for generating emotional facial expression animations given a single image and several facial landmarks for the in-between stages. In particular, landmarks of other subjects are incorporated into a GAN model to control the generated facial expression from a latent space. With the trained model, high-quality face images and a smoothly changing facial expression sequence can be obtained effectively, as shown qualitatively and quantitatively in our experiments on the Multi-PIE and CK+ data sets.
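
The abstract describes conditioning a GAN on facial landmarks so that the generated expression can be controlled, with interpolated landmarks yielding the in-between animation frames. The paper's architecture is not reproduced in this record; the following is only a minimal sketch under assumed choices (PyTorch, 68 two-dimensional landmarks, 128x128 RGB images, an encoder-decoder generator with the landmark embedding broadcast over the feature map). All layer sizes, names, and the interpolation-based usage are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a landmark-conditioned generator for expression transfer.
# Assumptions (not from the paper): 68 2-D landmarks, 128x128 RGB input,
# and a simple encoder-decoder architecture.
import torch
import torch.nn as nn

N_LANDMARKS = 68  # assumed landmark count (dlib-style annotation)

class LandmarkConditionedGenerator(nn.Module):
    """Maps a source face image plus target landmarks to a face image
    showing the target expression (illustrative architecture only)."""
    def __init__(self, img_channels=3, lm_dim=2 * N_LANDMARKS, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, feat, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat * 2, feat * 4, 4, 2, 1), nn.ReLU(inplace=True),
        )
        # Landmarks are embedded and broadcast over the spatial feature map.
        self.lm_embed = nn.Linear(lm_dim, feat * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 8, feat * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, img_channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, img, landmarks):
        z = self.encoder(img)                          # (B, 4f, H/8, W/8)
        c = self.lm_embed(landmarks)                   # (B, 4f)
        c = c[:, :, None, None].expand(-1, -1, z.size(2), z.size(3))
        return self.decoder(torch.cat([z, c], dim=1))  # (B, 3, H, W)

# Usage: interpolating landmarks between the neutral and target expression
# gives the smoothly changing in-between frames described in the abstract.
if __name__ == "__main__":
    G = LandmarkConditionedGenerator()
    src = torch.randn(1, 3, 128, 128)
    lm_neutral = torch.rand(1, 2 * N_LANDMARKS)
    lm_target = torch.rand(1, 2 * N_LANDMARKS)
    frames = [G(src, (1 - t) * lm_neutral + t * lm_target)
              for t in torch.linspace(0, 1, steps=5)]
    print([f.shape for f in frames])
```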
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.1819