Conditional entropy minimization principle for learning domain invariant representation features

Bibliographic Details
Main Authors: Nguyen, Thuan; Lyu, Boyang; Ishwar, Prakash; Scheutz, Matthias; Aeron, Shuchin
Format: Journal Article
Language: English
Published: 25.01.2022

Summary: Invariance-principle-based methods such as Invariant Risk Minimization (IRM) have recently emerged as promising approaches for Domain Generalization (DG). Despite promising theory, such approaches fail in common classification tasks because true invariant features become mixed with spurious invariant features. To address this, we propose a framework based on the conditional entropy minimization (CEM) principle to filter out the spurious invariant features, leading to a new algorithm with better generalization capability. We show that the proposed approach is closely related to the well-known Information Bottleneck (IB) framework and prove that, under certain assumptions, entropy minimization can exactly recover the true invariant features. Our approach achieves competitive classification accuracy compared to recent theoretically principled state-of-the-art alternatives across several DG datasets.
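
The record contains only the abstract, with no implementation details. As a rough, hypothetical illustration of how an entropy-based filtering term could be attached to a standard classification loss, the PyTorch sketch below penalizes an estimate of the conditional entropy H(Z|Y) under a diagonal-Gaussian assumption on the class-conditional features; the names cem_penalty and lambda_cem are invented for this example and are not taken from the paper.

    import math
    import torch

    def cem_penalty(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Rough estimate of H(Z | Y): average differential entropy of a
        diagonal Gaussian fitted to the features of each class in the batch."""
        entropies = []
        for y in labels.unique():
            z = features[labels == y]
            if z.shape[0] < 2:
                continue  # need at least two samples to estimate a variance
            var = z.var(dim=0).clamp_min(1e-6)
            # Differential entropy of N(mu, diag(var)): 0.5 * sum_d log(2*pi*e*var_d)
            entropies.append(0.5 * torch.log(2 * math.pi * math.e * var).sum())
        if not entropies:
            return features.new_zeros(())
        return torch.stack(entropies).mean()

    # Hypothetical usage: shrink within-class feature entropy alongside the task loss.
    # z = encoder(x); logits = classifier(z)
    # loss = torch.nn.functional.cross_entropy(logits, y) + lambda_cem * cem_penalty(z, y)

Minimizing this term drives the within-class feature variance down, which is one simple way to realize the compression effect the abstract attributes to entropy minimization; it is a sketch under a Gaussian assumption, not the paper's algorithm.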
DOI: 10.48550/arxiv.2201.10460