Artificial Intelligence, Intersectionality, and the Future of Public Health
Published in: American Journal of Public Health (1971), Vol. 111, No. 1, pp. 98-100
Main authors: Bauer, G. R.; Lizotte, D. J.
Format: Journal Article
Language: English
Published: American Public Health Association, United States, 01.01.2021
Summary: Artificial intelligence (AI) encompasses a broad collection of algorithms that increasingly affect public health both positively and negatively through applications in health promotion, health care, criminal justice, finance, social networks, employment, and other social determinants of health. Although fairness, accountability, transparency, and ethics (FATE) have been recognized in the AI research community as principles for evaluating algorithms, an intersectional approach is needed to ensure that negative impacts of AI on marginalized groups are understood and avoided and that AI reaches its full potential to support public health. Emerging from Black feminist legal and sociological scholarship, intersectionality makes explicit the shaping of experiences by social power in specific ways for those at different intersections of social identities or positions. The potential for bias in AI algorithms is illustrated by high-profile examples, such as Microsoft's short-lived racist, antisemitic, and misogynistic chatbot Tay and the face depixelizer application that "reconstructed" a high-resolution facial image of a White Barack Obama. When individuals and organizations in positions of power use AI applications for decision-making, they can directly affect social determinants of health for individuals subject to that power. For example, recidivism prediction systems are used to inform judicial decision-making on bail, parole, and sentencing, and Amazon's abandoned resume review system penalized applicants whose resumes contained the word "women's." Underlying reasons for biases are often complex and technical, but because AI applications "learn" from data produced in biased societies, they are shaped by both information biases and societal biases. The observed reproduction and intensification of societal biases is therefore unsurprising. Algorithmic bias against a particular group can exist even if that group's social identity or position is not provided to the algorithm directly, because AI methods readily identify latent constructs reflected in combinations of other variables. Moreover, algorithmic bias may apply not only across single social identities or positions (e.g., race, gender) but across their intersections. For example, image recognition applications identify gender particularly poorly for dark-skinned women.
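The summary's two technical claims, that bias can arise even when a protected attribute is never given to the algorithm (because correlated "proxy" variables carry the same information), and that bias should be audited at intersections of attributes rather than one attribute at a time, can be made concrete with a small sketch. The Python example below is a minimal illustration on synthetic data; the proxy features, coefficients, and outcome are hypothetical and invented for demonstration, and none of the variables or results come from the editorial itself.

```python
# Minimal sketch (synthetic data, hypothetical features): a classifier trained
# WITHOUT race or gender as inputs can still show diverging error rates across
# race x gender intersections, because correlated proxy features encode them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attributes (never passed to the model).
race = rng.integers(0, 2, n)
gender = rng.integers(0, 2, n)

# Hypothetical proxy features correlated with the protected attributes
# (think neighborhood or occupational history), plus one unrelated feature.
proxy1 = race + 0.5 * gender + rng.normal(0, 1, n)
proxy2 = gender - 0.5 * race + rng.normal(0, 1, n)
other = rng.normal(0, 1, n)
X = np.column_stack([proxy1, proxy2, other])

# Simulated outcome whose distribution differs by group only through the proxies.
logits = 1.2 * proxy1 - 0.8 * proxy2 + 0.5 * other - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Intersectional audit: false positive rate in every race x gender cell,
# not just for race or gender marginally.
for r in (0, 1):
    for g in (0, 1):
        cell = (race == r) & (gender == g) & (y == 0)
        fpr = pred[cell].mean() if cell.any() else float("nan")
        print(f"race={r} gender={g}  FPR={fpr:.3f}")
```

Reporting each race x gender cell separately, rather than race and gender marginally, is what makes a disparity concentrated at an intersection visible, mirroring the editorial's example of image recognition performing worst for dark-skinned women.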
Contributors: G. R. Bauer conceptualized the editorial and wrote the first draft. D. J. Lizotte revised it for substantive content. Both authors edited the editorial and approved the final version.
ISSN: 0090-0036; 1541-0048
DOI: 10.2105/AJPH.2020.306006