How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness


Bibliographic Details
Published in: International Journal of Intelligent Systems, Vol. 36, No. 4, pp. 1619–1643
Main Authors: Valdivia, Ana; Sánchez-Monedero, Javier; Casillas, Jorge
Format: Journal Article
Language: English
Published: New York: Hindawi Limited, 01.04.2021

Summary: Fair machine learning has focused on developing equitable algorithms that address discrimination. Yet many of these fairness-aware approaches aim to obtain a single solution to the problem, which leads to a poor understanding of the statistical limits of bias mitigation interventions. In this study, a novel methodology is presented to explore the tradeoff, in terms of a Pareto front, between accuracy and fairness. To this end, we propose a multiobjective framework that seeks to optimize both measures. The experimental framework focuses on logistic regression and decision tree classifiers, since they are well known in the machine learning community. We conclude experimentally that our method can optimize classifiers by making them fairer at a small cost in classification accuracy. We believe that our contribution will help stakeholders of sociotechnical systems assess how far they can go in being both fair and accurate, thus supporting enhanced decision making where machine learning is used.
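The accuracy-fairness Pareto front described in the abstract can be illustrated with a minimal sketch. The toy scores, labels, protected attribute, the choice of demographic parity difference as the fairness measure, and the threshold-sweep search are all illustrative assumptions, not the paper's actual multiobjective optimization method:

```python
# Illustrative sketch (not the paper's method): trace an accuracy-vs-fairness
# tradeoff by sweeping a decision threshold over classifier scores, then keep
# only the Pareto-optimal (non-dominated) points.

def accuracy(y_true, y_pred):
    # Fraction of correct predictions.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def dp_difference(y_pred, group):
    # Demographic parity difference: |P(pred=1 | g=0) - P(pred=1 | g=1)|.
    # Lower is fairer under this (assumed) metric.
    rates = []
    for g in (0, 1):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

def pareto_front(points):
    # A point (acc, unf) is dominated if another point has accuracy >= acc
    # and unfairness <= unf, with at least one strict inequality.
    front = []
    for acc, unf in points:
        dominated = any(
            a >= acc and u <= unf and (a > acc or u < unf)
            for a, u in points
        )
        if not dominated:
            front.append((acc, unf))
    return sorted(set(front))

# Toy data: classifier scores, true labels, binary protected attribute.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.7, 0.2, 0.9]
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

points = []
for thr in sorted(set(scores)):
    y_pred = [1 if s >= thr else 0 for s in scores]
    points.append((accuracy(y_true, y_pred), dp_difference(y_pred, group)))

front = pareto_front(points)
```

The resulting `front` contains only the thresholds for which no other threshold is simultaneously at least as accurate and at least as fair, which is the shape of the tradeoff the paper proposes to expose to stakeholders.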
ISSN:0884-8173
1098-111X
DOI:10.1002/int.22354