A CLIP-Enhanced Method for Video-Language Understanding

Bibliographic Details
Main Authors: Li, Guohao; He, Feng; Feng, Zhifan
Format: Journal Article
Language: English
Published: 13.10.2021

Summary: This technical report summarizes our method for the Video-And-Language Understanding Evaluation (VALUE) challenge (https://value-benchmark.github.io/challenge_2021.html). We propose a CLIP-Enhanced method to incorporate image-text pretrained knowledge into downstream video-text tasks. Combined with several other improved designs, our method outperforms the state of the art by 2.4% (57.58 to 60.00) in Meta-Ave score on the VALUE benchmark.
DOI: 10.48550/arxiv.2110.07137