HR-Net: a landmark based high realistic face reenactment network

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 11, p. 1
Main Authors: Ren, Qiuyu; Lu, Zhiying; Wu, Haopeng; Zhang, Jianfeng; Dong, Zijian
Format: Journal Article
Language: English
Published: New York, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2023
Summary: In the past, GAN-based face reenactment methods concentrated mostly on transferring the facial expressions and poses of the source. However, the generated results were susceptible to blurring in fine facial details, such as teeth and hair, and their backgrounds were not guaranteed to be consistent with the manipulated images in terms of light and shadow. Because of these issues, the generated results could be distinguished as fakes. In this paper, we proposed a landmark-based method named HR-Net, which can render the source's facial expressions and poses on any identity while simultaneously generating realistic facial details. First, a lightweight landmark identity conversion (LIC) module was designed to address the identity-leakage problem; it represents facial expressions and poses with only 68 2D landmarks. On this basis, a boundary-guided face reenactment (BFR) module was presented that learns only the background of the reference images, so the results generated by BFR remain consistent with the reference images' light and shadow. Moreover, a novel local perceptual loss function was introduced to help the BFR module generate more realistic details. Extensive experiments demonstrated that our method achieved state-of-the-art performance.
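The abstract does not give implementation details of the local perceptual loss. Below is a minimal PyTorch sketch of one plausible reading, in which landmark-defined facial regions (the mouth and eyes in the standard 68-point layout) are cropped from the generated and reference images and compared in VGG-16 feature space. The class name LocalPerceptualLoss, the chosen regions, the feature layer, and the equal region weighting are illustrative assumptions, not the paper's published configuration.

# Hypothetical sketch of a "local perceptual loss": crop landmark-defined facial
# regions from the generated and reference images and compare VGG-16 features.
# Region choices, layer cut-off, and weighting are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class LocalPerceptualLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG-16 truncated after relu3_3 as the feature extractor
        # (inputs are assumed to be already normalized for VGG).
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    @staticmethod
    def _crop(img, landmarks, idx, size=64, margin=8):
        # Crop a square patch around the selected landmark subset and resize it.
        pts = landmarks[:, idx, :]                              # (B, k, 2), pixel coords
        x0 = (pts[..., 0].min(dim=1).values - margin).clamp(min=0).long()
        y0 = (pts[..., 1].min(dim=1).values - margin).clamp(min=0).long()
        patches = []
        for b in range(img.size(0)):
            h, w = img.shape[-2:]
            x, y = int(x0[b]), int(y0[b])
            patch = img[b:b + 1, :, y:min(y + size, h), x:min(x + size, w)]
            patches.append(F.interpolate(patch, size=(size, size),
                                         mode="bilinear", align_corners=False))
        return torch.cat(patches, dim=0)

    def forward(self, fake, real, landmarks):
        # Assumed region index sets in the 68-point layout: mouth, left eye, right eye.
        regions = [list(range(48, 68)), list(range(36, 42)), list(range(42, 48))]
        loss = 0.0
        for idx in regions:
            f_feat = self.features(self._crop(fake, landmarks, idx))
            r_feat = self.features(self._crop(real, landmarks, idx))
            loss = loss + F.l1_loss(f_feat, r_feat)
        return loss / len(regions)

In this reading, the loss is averaged over the regions and would typically be added to the generator's global objectives with a scalar weight; that weight and any additional regions (e.g. hair) are left open, since the abstract only states that the loss targets local detail realism.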
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3268062