TCLR: Temporal contrastive learning for video representation

Bibliographic Details
Published in: Computer Vision and Image Understanding, Vol. 219, p. 103406
Main Authors: Dave, Ishan; Gupta, Rohit; Rizve, Mamshad Nayeem; Shah, Mubarak
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.06.2022
Summary: Contrastive learning has nearly closed the gap between supervised and self-supervised learning of image representations, and has also been explored for videos. However, prior work on contrastive learning for video data has not explored the effect of explicitly encouraging the features to be distinct across the temporal dimension. We develop a new temporal contrastive learning framework consisting of two novel losses to improve upon existing contrastive self-supervised video representation learning methods. The local–local temporal contrastive loss adds the task of discriminating between non-overlapping clips from the same video, whereas the global–local temporal contrastive loss aims to discriminate between timesteps of the feature map of an input clip in order to increase the temporal diversity of the learned features. Our proposed temporal contrastive learning framework achieves significant improvement over state-of-the-art results in various downstream video understanding tasks such as action recognition, limited-label action classification, and nearest-neighbor video retrieval on multiple video datasets and backbones. We also demonstrate significant improvement in fine-grained action classification for visually similar classes. With the commonly used 3D ResNet-18 architecture pretrained on UCF101, we achieve 82.4% top-1 accuracy on UCF101 action classification (+5.1% over the previous best), 52.9% on HMDB51 (+5.4%), and 56.2% Top-1 Recall on UCF101 nearest-neighbor video retrieval (+11.7%). Code released at https://github.com/DAVEISHAN/TCLR.
Highlights:
• TCLR is a contrastive learning framework for video understanding tasks.
• Explicitly enforces within-instance temporal feature variation without pretext tasks.
• Proposes novel local–local and global–local temporal contrastive losses.
• Significantly outperforms state-of-the-art pre-training on video understanding tasks.
• Uses a fine-grained action classification task for evaluating learned representations.
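As a reading aid, the local–local loss described in the summary amounts to an NT-Xent-style contrastive objective computed across the non-overlapping clips of a single video. Below is a minimal PyTorch sketch of that idea, not the authors' released implementation (see the linked repository for that); the function name, the tensor names z1/z2, and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def local_local_loss(z1, z2, temperature=0.1):
    # Hypothetical sketch of a local-local temporal contrastive loss.
    # z1, z2: (N, D) embeddings of the N non-overlapping clips of one
    # video under two different augmentations.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                   # (2N, D)
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    # The positive for clip i in one view is clip i in the other view;
    # all remaining clips of the same video act as negatives, which is
    # what pushes the learned features apart along the temporal axis.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In use, z1 and z2 would come from encoding two differently augmented views of the same video (e.g., z1 = encoder(view1_clips) for a hypothetical clip encoder), and the loss would be averaged over the videos in a batch.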
ISSN: 1077-3142; 1090-235X
DOI: 10.1016/j.cviu.2022.103406