Semantic Self-adaptation: Enhancing Generalization with a Single Sample
Transactions on Machine Learning Research (TMLR) 2023
Format: Journal Article
Language: English
Published: 10.08.2022
Summary: The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation. Previous studies relied on the assumption of a static model, i.e., once the training process is complete, model parameters remain fixed at test time. In this work, we challenge this premise with a self-adaptive approach for semantic segmentation that adjusts the inference process to each input sample. Self-adaptation operates on two levels. First, it fine-tunes the parameters of convolutional layers to the input image using consistency regularization. Second, in Batch Normalization layers, self-adaptation interpolates between the training distribution and the reference distribution derived from a single test sample. Despite both techniques being well known in the literature, their combination sets new state-of-the-art accuracy on synthetic-to-real generalization benchmarks. Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time for improving deep network generalization to out-of-domain data. Our code and pre-trained models are available at https://github.com/visinf/self-adaptive.
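The consistency-regularization step described in the summary can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' released implementation (see the repository above): the horizontal-flip augmentation, the step count, the learning rate, and the helper name `adapt_to_sample` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def adapt_to_sample(model: torch.nn.Module, x: torch.Tensor,
                    steps: int = 3, lr: float = 1e-4) -> torch.nn.Module:
    """Fine-tune only the convolutional parameters of `model` on a single
    test image x of shape [1, 3, H, W], using a consistency loss between
    the image and a horizontally flipped view. The step count, learning
    rate, and flip augmentation are illustrative assumptions."""
    conv_params = [p for m in model.modules()
                   if isinstance(m, torch.nn.Conv2d)
                   for p in m.parameters()]
    optimizer = torch.optim.Adam(conv_params, lr=lr)
    model.eval()  # keep BatchNorm on its (interpolated) running statistics
    for _ in range(steps):
        flipped = torch.flip(x, dims=[3])               # second view of x
        logits = model(x)                               # [1, K, H, W]
        logits_flipped = torch.flip(model(flipped), dims=[3])  # align views
        # Consistency regularization: both views should predict the same
        # per-pixel class distribution; the unflipped view acts as target.
        loss = F.kl_div(F.log_softmax(logits_flipped, dim=1),
                        F.softmax(logits, dim=1).detach(),
                        reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```

The design point is that only convolutional parameters receive gradient updates; normalization layers are handled separately by the statistics interpolation sketched next.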
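Likewise, a minimal sketch of the Batch Normalization interpolation, assuming a PyTorch model with `nn.BatchNorm2d` layers; the mixing coefficient `alpha` and the helper name `interpolate_bn_stats` are hypothetical, and the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def interpolate_bn_stats(model: nn.Module, x: torch.Tensor,
                         alpha: float = 0.1) -> None:
    """Blend each BatchNorm2d layer's stored training statistics with
    statistics computed from a single test sample x of shape [1, C, H, W].
    alpha = 0 keeps the training statistics, alpha = 1 uses only the
    test sample's statistics; `alpha` is a hypothetical knob."""
    feats, hooks = {}, []
    # Capture the input of every BatchNorm2d layer in one forward pass.
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            hooks.append(module.register_forward_pre_hook(
                lambda mod, inp, key=name: feats.__setitem__(key, inp[0])))
    model.eval()
    model(x)
    for h in hooks:
        h.remove()
    # Move the running statistics toward the per-sample statistics.
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            f = feats[name]
            sample_mean = f.mean(dim=(0, 2, 3))
            sample_var = f.var(dim=(0, 2, 3), unbiased=False)
            module.running_mean.mul_(1.0 - alpha).add_(alpha * sample_mean)
            module.running_var.mul_(1.0 - alpha).add_(alpha * sample_var)
```

Setting `alpha = 0` recovers the purely static model, so the interpolation degrades gracefully when the single test sample is uninformative.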
ISSN: 2835-8856
DOI: 10.48550/arxiv.2208.05788