A CLIP-Enhanced Method for Video-Language Understanding
Format | Journal Article |
Language | English |
Published | 13.10.2021 |
Summary: This technical report summarizes our method for the Video-And-Language Understanding Evaluation (VALUE) challenge (https://value-benchmark.github.io/challenge_2021.html). We propose a CLIP-Enhanced method to incorporate image-text pretrained knowledge into downstream video-text tasks. Combined with several other improved designs, our method outperforms the state of the art by $2.4\%$ ($57.58$ to $60.00$) in Meta-Ave score on the VALUE benchmark.
DOI: 10.48550/arxiv.2110.07137