Unsupervised Concept Discovery Mitigates Spurious Correlations

Bibliographic Details
Main Authors: Arefin, Md Rifat; Zhang, Yan; Baratin, Aristide; Locatello, Francesco; Rish, Irina; Liu, Dianbo; Kawaguchi, Kenji
Format: Journal Article
Language: English
Published: 20.02.2024

Summary: ICML 2024. Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases. Addressing this challenge typically involves methods that rely on prior knowledge and group annotations to remove spurious correlations, which may not be readily available in many applications. In this paper, we establish a novel connection between unsupervised object-centric learning and the mitigation of spurious correlations. Instead of directly inferring subgroups with varying correlations with labels, our approach focuses on discovering concepts: discrete ideas that are shared across input samples. Leveraging existing object-centric representation learning, we introduce CoBalT: a concept balancing technique that effectively mitigates spurious correlations without requiring human labeling of subgroups. Evaluation across benchmark datasets for sub-population shifts demonstrates superior or competitive performance compared to state-of-the-art baselines, without the need for group annotation. Code is available at https://github.com/rarefin/CoBalT.
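The summary describes concept balancing only at a high level. The sketch below is a hypothetical illustration of the general idea it points to (cluster learned representations into discrete "concepts", then weight training samples inversely to concept frequency so that dominant, potentially spurious concepts are down-weighted). It is not the authors' CoBalT implementation, which is available at the linked repository; the function name, parameters, and toy data are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans


def concept_balanced_sample_weights(features, n_concepts=8, seed=0):
    """Illustrative sketch: cluster representations into 'concepts' and
    weight each sample inversely to its concept's frequency, so rare
    concepts are sampled more often than dominant ones."""
    concepts = KMeans(n_clusters=n_concepts, random_state=seed, n_init=10).fit_predict(features)
    counts = np.bincount(concepts, minlength=n_concepts)
    weights = 1.0 / counts[concepts]          # inverse-frequency weighting
    return concepts, weights / weights.sum()  # normalize to a sampling distribution


# Toy usage: 1000 samples with 32-dim features (e.g., pooled object-centric embeddings).
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 32))
concepts, w = concept_balanced_sample_weights(feats)
balanced_idx = rng.choice(len(feats), size=len(feats), replace=True, p=w)
```

In practice the per-sample weights would feed a weighted sampler in the training loop, so each mini-batch is approximately balanced across discovered concepts rather than across annotated subgroups.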
DOI: 10.48550/arxiv.2402.13368