Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition
Main Authors |  |
Format | Journal Article |
Language | English |
Published | 09.03.2022 |
Subjects |  |
Online Access | Get full text |
Summary | Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to the source data during adaptation, whereas in many real-world applications the subjects and scenes in the source video domain should be irrelevant to those in the target video domain. With the increasing emphasis on data privacy, methods that require source data access raise serious privacy concerns. To cope with this concern, a more practical scenario is formulated as Source-Free Video-based Domain Adaptation (SFVDA). Although a few methods exist for Source-Free Domain Adaptation (SFDA) on image data, they yield degraded performance in SFVDA due to the multi-modal nature of videos and the presence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) to address SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall temporal features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks. |
DOI | 10.48550/arxiv.2203.04559 |
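
The summary above describes ATCoN at a high level: local temporal features are aggregated by attending to them according to prediction confidence, and two consistency objectives (feature consistency and source prediction consistency) are enforced across them. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the tensor shapes, the use of the maximum softmax probability as the confidence measure, and the MSE/KL loss forms are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def atcon_style_step(local_feats, classifier):
    """Confidence-attended aggregation of local temporal features with the two
    consistency objectives named in the summary (hypothetical shapes/losses).

    local_feats: (B, K, D) -- K local temporal features per clip
    classifier:  frozen source classifier head mapping D -> C logits
    """
    B, K, D = local_feats.shape
    local_logits = classifier(local_feats.reshape(B * K, D)).reshape(B, K, -1)
    log_probs = F.log_softmax(local_logits, dim=-1)        # (B, K, C)
    probs = log_probs.exp()

    # Attention weights from prediction confidence (here: max class probability).
    confidence = probs.max(dim=-1).values                  # (B, K)
    attn = F.softmax(confidence, dim=-1).unsqueeze(-1)     # (B, K, 1)

    # Overall temporal feature: confidence-weighted sum of local features.
    overall_feat = (attn * local_feats).sum(dim=1)         # (B, D)
    overall_probs = F.softmax(classifier(overall_feat), dim=-1)  # (B, C)

    # Feature consistency: local features should agree with the overall feature.
    feat_cons = F.mse_loss(
        local_feats, overall_feat.unsqueeze(1).expand_as(local_feats))

    # Source prediction consistency: local predictions should agree with the
    # prediction made from the overall feature (KL divergence is an assumption).
    pred_cons = F.kl_div(
        log_probs, overall_probs.unsqueeze(1).expand_as(probs),
        reduction="batchmean")

    return overall_feat, feat_cons + pred_cons

# Toy usage with random features and a random linear "source" head.
B, K, D, C = 4, 5, 256, 12
head = nn.Linear(D, C)
_, loss = atcon_style_step(torch.randn(B, K, D), head)
loss.backward()
```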