CV-VAE: A Compatible Video VAE for Latent Generative Video Models
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 30.05.2024 |
Subjects | |
Summary: Spatio-temporal compression of videos, using networks such as Variational Autoencoders (VAEs), plays a crucial role in OpenAI's SORA and numerous other video generative models. For instance, many LLM-like video models learn the distribution of discrete tokens derived from 3D VAEs within the VQVAE framework, while most diffusion-based video models capture the distribution of continuous latents extracted by 2D VAEs without quantization; temporal compression is then realized simply by uniform frame sampling, which results in unsmooth motion between consecutive frames. The research community currently lacks a commonly used continuous video (3D) VAE for latent diffusion-based video models. Moreover, since current diffusion-based approaches are often built on pre-trained text-to-image (T2I) models, directly training a video VAE without considering compatibility with existing T2I models creates a latent space gap between them, and bridging this gap requires huge computational resources for training even when the T2I models are used as initialization. To address this issue, we propose a method for training a video VAE for latent video models, namely CV-VAE, whose latent space is compatible with that of a given image VAE, e.g., the image VAE of Stable Diffusion (SD). The compatibility is achieved by a novel latent space regularization, which formulates a regularization loss using the image VAE. Benefiting from this compatibility, video models can be trained seamlessly from pre-trained T2I or video models in a truly spatio-temporally compressed latent space, rather than one obtained by simply sampling video frames at equal intervals. With our CV-VAE, existing video models can generate four times more frames with minimal finetuning. Extensive experiments demonstrate the effectiveness of the proposed video VAE.
DOI: 10.48550/arxiv.2405.20279
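As an illustration of the latent-space regularization described in the summary, the sketch below shows one way a trainable 3D video VAE's latents could be aligned with those of a frozen, pre-trained 2D image VAE (such as Stable Diffusion's). The `video_vae` and `image_vae` objects, their `encode` interfaces, the frame-alignment logic, and the plain MSE alignment loss are all assumptions made for illustration; the paper's actual regularization may differ.

```python
import torch
import torch.nn.functional as F


def latent_regularization_loss(video_vae, image_vae, video):
    """Hypothetical sketch of a latent-space regularization.

    The trainable 3D video VAE's latents are pushed toward the latents that a
    frozen, pre-trained 2D image VAE (e.g. Stable Diffusion's) produces for the
    corresponding frames. `video_vae.encode` and `image_vae.encode` are assumed
    interfaces, not the paper's actual API.

    video: (B, C, T, H, W) pixel tensor in [-1, 1].
    """
    # Latents from the trainable video VAE: (B, c, T', H/8, W/8).
    z_video = video_vae.encode(video)

    # Pick the frames that align with the temporally compressed latents and
    # encode them independently with the frozen image VAE.
    b, _, t, _, _ = video.shape
    stride = max(t // z_video.shape[2], 1)             # temporal compression factor
    frames = video[:, :, ::stride]                     # (B, C, T', H, W)
    frames_2d = frames.permute(0, 2, 1, 3, 4).flatten(0, 1)  # (B*T', C, H, W)
    with torch.no_grad():
        z_image = image_vae.encode(frames_2d)          # (B*T', c, H/8, W/8)
    z_image = z_image.unflatten(0, (b, -1)).permute(0, 2, 1, 3, 4)

    # Align the two latent spaces; a plain MSE is used here for illustration.
    return F.mse_loss(z_video, z_image[:, :, : z_video.shape[2]])
```

In this reading, "compatibility" means that a latent produced by the video VAE for any single frame lands close to where the image VAE would have placed it, which is what lets pre-trained T2I or video diffusion models be finetuned in the compressed 3D latent space with minimal additional training.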