AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
Format | Journal Article
Language | English
Published | 26.03.2024
Summary: In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusion model, coupled with a motion module, to convert the landmark sequence into photorealistic and temporally consistent portrait animation. Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality, thereby offering an enhanced perceptual experience. Moreover, our methodology exhibits considerable potential in terms of flexibility and controllability, which can be effectively applied in areas such as facial motion editing or face reenactment. We release code and model weights at https://github.com/scutzzj/AniPortrait
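The two-stage pipeline described in the summary can be sketched as follows. This is a minimal illustration of the data flow only, not the authors' implementation: the function names, the 68-point landmark layout, and the stand-in linear projection and frame-tiling are all assumptions made for clarity; the real system uses learned audio-to-mesh networks and a diffusion model with a motion module.

```python
import numpy as np

def audio_to_landmarks(audio_features: np.ndarray) -> np.ndarray:
    """Stage 1 (sketch): map per-frame audio features to a 3D intermediate
    representation, then project it to 2D facial landmarks. A random linear
    map stands in for the learned audio-to-mesh network."""
    n_frames, feat_dim = audio_features.shape
    rng = np.random.default_rng(0)
    w = rng.standard_normal((feat_dim, 68 * 3))          # hypothetical weights
    mesh3d = (audio_features @ w).reshape(n_frames, 68, 3)
    return mesh3d[..., :2]                               # orthographic 3D -> 2D

def landmarks_to_frames(landmarks2d: np.ndarray,
                        reference_image: np.ndarray) -> np.ndarray:
    """Stage 2 (sketch): the paper's diffusion model + motion module would
    render each landmark frame into a photorealistic image; tiling the
    reference portrait here only shows the interface shapes involved."""
    n_frames = landmarks2d.shape[0]
    return np.repeat(reference_image[None], n_frames, axis=0)

audio = np.ones((16, 80))              # 16 frames of 80-dim audio features
ref = np.zeros((64, 64, 3))            # reference portrait image
lmk = audio_to_landmarks(audio)        # shape (16, 68, 2)
video = landmarks_to_frames(lmk, ref)  # shape (16, 64, 64, 3)
print(lmk.shape, video.shape)
```

The key design point the sketch preserves is the decoupling: audio only determines a compact landmark sequence, so the rendering stage can be swapped or conditioned independently (e.g. for facial motion editing or face reenactment).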
DOI | 10.48550/arxiv.2403.17694