ALGORITHMIC DISCRIMINATION AND INPUT ACCOUNTABILITY UNDER THE CIVIL RIGHTS ACTS

Bibliographic Details
Published in: Berkeley Technology Law Journal, Vol. 36, No. 2, p. 675
Main Authors: Bartlett, Robert; Morse, Adair; Wallace, Nancy; Stanton, Richard
Format: Journal Article
Language: English
Published: Berkeley: University of California, Boalt Hall School of Law, 01.10.2021

Summary: The disproportionate burden of COVID-19 among communities of color and a necessary renewed attention to racial inequalities have lent new urgency to concerns that algorithmic decision-making can lead to unintentional discrimination against members of historically marginalized groups. These concerns are being expressed through Congressional subpoenas, regulatory investigations, and an increasing number of algorithmic accountability bills pending in both state legislatures and Congress. To date, however, prominent efforts to define algorithmic accountability have tended to focus on output-oriented policies that may facilitate illegitimate discrimination or involve fairness corrections unlikely to be legally valid. Worse still, other approaches focus merely on a model's predictive accuracy, an approach at odds with long-standing U.S. anti-discrimination law. We provide a workable definition of algorithmic accountability that is rooted in case law addressing statistical discrimination in the context of Title VII of the Civil Rights Act of 1964. Using instruction from the burden-shifting framework codified to implement Title VII, we formulate a simple statistical test to apply to the design and review of the inputs used in any algorithmic decision-making process. Application of the test, which we label the Input Accountability Test, constitutes a legally viable, deployable tool that can prevent an algorithmic model from systematically penalizing members of protected groups who are otherwise qualified in a legitimate target characteristic of interest.
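
Illustration: the abstract describes an input-level screen rather than an output correction. A minimal sketch of one way such a screen could be operationalized appears below, assuming the test asks whether a candidate input still tracks protected-class membership after the legitimate target characteristic is accounted for. The column roles, the regression-based approach, and the 0.05 significance threshold are illustrative assumptions, not details taken from the article.

    # Sketch of an input-screening check in the spirit of the abstract's
    # Input Accountability Test (illustrative, not the authors' specification).
    import numpy as np
    import statsmodels.api as sm

    def input_accountability_check(candidate, target, protected, alpha=0.05):
        """Return True if `candidate` passes the sketch screen described above.

        candidate : 1-D array, the proposed model input (e.g., a credit attribute)
        target    : 1-D array, the legitimate target characteristic (e.g., default risk)
        protected : 1-D array of 0/1 protected-class indicators
        alpha     : illustrative significance level for the screen
        """
        candidate = np.asarray(candidate, dtype=float)
        target = np.asarray(target, dtype=float)
        protected = np.asarray(protected, dtype=float)

        # Step 1: remove the variation in the candidate input that is
        # explained by the legitimate target characteristic.
        resid = sm.OLS(candidate, sm.add_constant(target)).fit().resid

        # Step 2: test whether the leftover variation still tracks
        # protected-group membership.
        probe = sm.OLS(resid, sm.add_constant(protected)).fit()
        p_value = probe.pvalues[1]

        # The input fails the screen if its target-unrelated component is
        # significantly associated with protected status.
        return p_value >= alpha

Under these assumptions, an input that predicts the target only through its correlation with protected status would fail the screen, while an input whose predictive power survives after conditioning on the target would pass; the article itself should be consulted for the actual formulation of the test.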
ISSN: 1086-3818; 2380-4742
DOI: 10.15779/Z381N7XN5B