RSFace: subject agnostic face swapping with expression high fidelity
Published in | The Visual Computer, Vol. 39, No. 11, pp. 5497–5511 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.11.2023 (Springer Nature B.V.) |
Summary: | Face swapping has shown remarkable progress with the flourishing development of deep learning. In particular, the emergence of subject-agnostic methods has broadened the range of applications of face swapping, and high-fidelity implementations have improved the naturalness of generated faces. However, some high-fidelity face swapping methods still suffer from expression distortion. In this work, we propose an extended Adaptive Embedding Integration Network (AEI-Net) that improves the synthesis of swapped faces in the wild. First, we add a face reenactment module to synchronize the expressions of the input faces and reduce the influence of irrelevant attributes on the synthesis results. Second, we train AEI-Net with a new attribute matching loss to improve the consistency between the generated results and the target facial expressions. Finally, extensive experiments on faces in the wild demonstrate that our method restores expression and posture better than previous methods while preserving identity. |
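The abstract's attribute matching loss can be pictured as a multi-level feature distance. Below is a minimal sketch under assumptions: it averages the mean-squared distance between attribute embeddings of the swapped face and the target face at several feature levels. The function name, the L2 form, and the averaging are illustrative choices, not the paper's exact definition.

```python
import numpy as np

def attribute_matching_loss(attrs_swapped, attrs_target):
    """Hypothetical attribute matching loss (illustrative only, not
    the paper's definition): averages the per-level mean-squared
    distance between attribute embeddings of the swapped face and
    the target face."""
    assert len(attrs_swapped) == len(attrs_target)
    per_level = [
        0.5 * np.mean((a_s - a_t) ** 2)  # L2 distance at one feature level
        for a_s, a_t in zip(attrs_swapped, attrs_target)
    ]
    # Average over levels so the loss scale is independent of depth
    return float(np.mean(per_level))
```

Matching attribute embeddings rather than raw pixels is what lets such a loss penalize expression drift while leaving identity features free to change.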
ISSN: | 0178-2789; 1432-2315 |
DOI: | 10.1007/s00371-022-02675-z |