Avoiding Discrimination through Causal Reasoning

Bibliographic Details
Main Authors: Kilbertus, Niki; Rojas-Carulla, Mateo; Parascandolo, Giambattista; Hardt, Moritz; Janzing, Dominik; Schölkopf, Bernhard
Format: Journal Article
Language: English
Published: 08.06.2017

Summary: Advances in Neural Information Processing Systems 30, 2017, pp. 656--666. Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: they depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.
DOI: 10.48550/arxiv.1706.02744