Video Pre-trained Transformer: A Multimodal Mixture of Pre-trained Experts
Main Authors |  |
---|---|
Format | Journal Article |
Language | English |
Published | 24.03.2023 |
Subjects |  |
Summary: We present the Video Pre-trained Transformer (VPT). VPT uses four
state-of-the-art (SOTA) encoder models from prior work to convert a video into
a sequence of compact embeddings. Our backbone, based on a reference
Flan-T5-11B architecture, learns a universal representation of the video that
is a non-linear sum of the encoder models' outputs. It learns using an
autoregressive causal language modeling loss by predicting the words spoken in
YouTube videos. Finally, we evaluate on standard downstream benchmarks by
training fully connected prediction heads for each task. To the best of our
knowledge, this is the first use of multiple frozen SOTA models as encoders in
an "embedding -> backbone -> prediction head" design pattern; all prior work
has trained its own joint encoder models. Additionally, we include more
modalities than the current SOTA, Merlot Reserve, by adding explicit Scene
Graph information. For these two reasons, we believe VPT could combine the
world's best open-source models to achieve SOTA performance. Initial
experiments demonstrate that the model is learning appropriately, but more
experimentation and compute are necessary, and already in progress, to realize
our loftier goals. Alongside this work, we build on the YT-20M dataset,
reproducing it and adding 25,000 personally selected YouTube videos to its
corpus. All code and model checkpoints are open sourced under a standard MIT
license.
DOI: 10.48550/arxiv.2304.10505
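
The "embedding -> backbone -> prediction head" pattern described in the summary
can be sketched in a few lines of PyTorch. Everything below is illustrative
only: the tiny stand-in encoders, the 512-dimensional backbone, the vocabulary
size, and the names (`FrozenEncoder`, `VPTSketch`, `pretrain_loss`,
`downstream_logits`) are assumptions, not the released implementation, which
uses four frozen SOTA encoders, a Flan-T5-11B reference backbone, and fully
connected prediction heads per downstream task.

```python
# Minimal sketch of the pattern described in the abstract; all module names,
# dimensions, and the small Transformer stand-ins are assumptions.
import torch
import torch.nn as nn

D_BACKBONE = 512      # assumed hidden size for the sketch (Flan-T5-11B is far larger)
VOCAB_SIZE = 32128    # assumed tokenizer vocabulary size


class FrozenEncoder(nn.Module):
    """Stand-in for one frozen pre-trained encoder (e.g. vision, audio, scene graph)."""

    def __init__(self, out_dim: int):
        super().__init__()
        self.net = nn.Linear(64, out_dim)  # placeholder for a real SOTA encoder
        for p in self.parameters():        # frozen: no gradients flow into the encoder
            p.requires_grad = False

    def forward(self, x):                  # x: (batch, frames, 64) raw modality features
        return self.net(x)                 # (batch, frames, out_dim)


class VPTSketch(nn.Module):
    """Embedding -> backbone -> prediction head, with a causal LM pre-training loss."""

    def __init__(self, encoder_dims=(256, 256, 256, 256)):
        super().__init__()
        self.encoders = nn.ModuleList(FrozenEncoder(d) for d in encoder_dims)
        self.projections = nn.ModuleList(nn.Linear(d, D_BACKBONE) for d in encoder_dims)
        # Small Transformer as a stand-in for the Flan-T5-11B reference backbone.
        layer = nn.TransformerEncoderLayer(D_BACKBONE, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_BACKBONE)
        self.lm_head = nn.Linear(D_BACKBONE, VOCAB_SIZE)
        self.task_head = nn.Linear(D_BACKBONE, 2)  # e.g. a binary downstream task

    def _multimodal_prefix(self, modality_inputs):
        # Each frozen encoder turns its modality into compact embeddings, which are
        # projected into the backbone's embedding space and concatenated in time.
        parts = [proj(enc(x)) for enc, proj, x in
                 zip(self.encoders, self.projections, modality_inputs)]
        return torch.cat(parts, dim=1)

    def pretrain_loss(self, modality_inputs, spoken_tokens):
        prefix = self._multimodal_prefix(modality_inputs)
        tok = self.token_emb(spoken_tokens[:, :-1])        # teacher-forced inputs
        seq = torch.cat([prefix, tok], dim=1)
        n = seq.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        hidden = self.backbone(seq, mask=causal)
        logits = self.lm_head(hidden[:, prefix.size(1):])  # predict the next spoken word
        return nn.functional.cross_entropy(
            logits.reshape(-1, VOCAB_SIZE), spoken_tokens[:, 1:].reshape(-1))

    def downstream_logits(self, modality_inputs):
        # Fully connected prediction head on the pooled backbone representation.
        hidden = self.backbone(self._multimodal_prefix(modality_inputs))
        return self.task_head(hidden.mean(dim=1))


# Toy forward/backward pass on random data, just to show the wiring.
model = VPTSketch()
inputs = [torch.randn(2, 8, 64) for _ in range(4)]   # four modalities, 8 "frames" each
tokens = torch.randint(0, VOCAB_SIZE, (2, 16))       # spoken-word token ids
loss = model.pretrain_loss(inputs, tokens)
loss.backward()                                      # frozen encoders receive no gradients
print(float(loss))
```

The design choice the sketch mirrors is that only the projections, backbone,
and heads are trainable; the encoders stay frozen, so the backbone learns to
fuse their outputs rather than re-learn each modality from scratch.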