Counterfactual Fairness in Text Classification through Robustness

Bibliographic Details
Published in: arXiv.org
Main Authors: Garg, Sahaj; Perot, Vincent; Limtiaco, Nicole; Taly, Ankur; Chi, Ed H.; Beutel, Alex
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 13.02.2019

More Information
Summary: In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that "Some people are gay" is toxic while "Some people are straight" is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.
ISSN: 2331-8422
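
The metric and training approaches named in the summary can be sketched concretely. The following is a minimal, illustrative sketch and not the authors' implementation: it assumes a binary toxicity classifier "model" that maps a list of tokens to a single logit (a 0-dim PyTorch tensor), a float label tensor of the same shape, and a hypothetical list of identity terms; the token-substitution scheme and all names are assumptions, not taken from the paper. ctf_gap corresponds to the counterfactual token fairness gap (the change in prediction when an identity term is swapped), and clp_loss to counterfactual logit pairing (the classification loss plus a penalty on the logit difference between an example and its counterfactuals, weighted by clp_weight).

import torch
import torch.nn.functional as F

def counterfactual_variants(tokens, identity_terms):
    """Yield copies of `tokens` with each identity term replaced by every other
    identity term. Illustrative substitution scheme only, not the paper's."""
    for i, tok in enumerate(tokens):
        if tok in identity_terms:
            for alt in identity_terms:
                if alt != tok:
                    yield tokens[:i] + [alt] + tokens[i + 1:]

def ctf_gap(model, tokens, identity_terms):
    """Counterfactual token fairness gap: the largest absolute difference between
    the predicted toxicity probability of an example and that of any counterfactual."""
    with torch.no_grad():
        p = torch.sigmoid(model(tokens))
        gaps = [abs((p - torch.sigmoid(model(cf))).item())
                for cf in counterfactual_variants(tokens, identity_terms)]
    return max(gaps, default=0.0)

def clp_loss(model, tokens, label, identity_terms, clp_weight=1.0):
    """Counterfactual logit pairing: binary cross-entropy on the original example
    plus a penalty on the logit gap to each counterfactual (label is a 0-dim float tensor)."""
    logit = model(tokens)
    loss = F.binary_cross_entropy_with_logits(logit, label)
    for cf in counterfactual_variants(tokens, identity_terms):
        loss = loss + clp_weight * torch.abs(logit - model(cf))
    return loss

Counterfactual augmentation would reuse the same counterfactual_variants helper, adding the generated variants to the training data with the original example's label rather than penalizing logit differences.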