Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 30.05.2023 |
Summary | Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) 2024, Valencia, Spain. PMLR: Volume 238. Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further illustrate that if spurious features have a small enough noise-to-signal ratio, the network's output on the majority of examples is almost exclusively determined by the spurious features, leading to poor worst-group test accuracy. Finally, we propose SPARE, which identifies spurious correlations early in training and utilizes importance sampling to alleviate their effect. Empirically, we demonstrate that SPARE outperforms state-of-the-art methods by up to 21.1% in worst-group accuracy, while being up to 12x faster. We also show that SPARE is a highly effective but lightweight method to discover spurious correlations. |
DOI | 10.48550/arxiv.2305.18761 |
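The abstract only names the two ingredients of SPARE (separating examples by the model's output early in training, then importance sampling), so the sketch below is a loose illustration rather than the paper's algorithm: the k-means clustering, the two-cluster setting, and the inverse-cluster-size weights are assumptions introduced here.

```python
# Illustrative sketch of the two steps the abstract describes: separate
# examples by the model's output early in training, then importance-sample
# against the over-represented group. The choice of k-means, the number of
# clusters, and the inverse-cluster-size weights are assumptions made for
# this example; the abstract does not specify these details.
import numpy as np
from sklearn.cluster import KMeans

def spare_like_weights(early_outputs, n_clusters=2, seed=0):
    """Cluster early-training model outputs and return per-example
    sampling weights inversely proportional to cluster size."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(early_outputs)
    sizes = np.bincount(labels, minlength=n_clusters)
    # Per the abstract, the majority of examples are driven by the spurious
    # feature early in training, so the larger cluster gets smaller weights.
    weights = 1.0 / sizes[labels]
    return weights / weights.sum()

# Toy usage: 90 "majority" and 10 "minority" examples whose early outputs
# are separable, mirroring the separability result stated in the abstract.
rng = np.random.default_rng(0)
early_outputs = np.vstack([
    rng.normal(0.9, 0.05, size=(90, 2)),  # spurious-feature-aligned group
    rng.normal(0.1, 0.05, size=(10, 2)),  # minority group
])
weights = spare_like_weights(early_outputs)
batch = rng.choice(len(weights), size=32, p=weights)  # importance-sampled batch
```

With inverse-cluster-size weights, minority-group examples are sampled far more often than their raw frequency, which is one plausible way to realize the "importance sampling to alleviate their effect" step named in the abstract.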