Think Your Artificial Intelligence Software Is Fair? Think Again

Bibliographic Details
Published in: IEEE Software, Vol. 36, No. 4, pp. 76-80
Main Authors: Bellamy, Rachel K.E., Dey, Kuntal, Hind, Michael, Hoffman, Samuel C., Houde, Stephanie, Kannan, Kalapriya, Lohia, Pranay, Mehta, Sameep, Mojsilovic, Aleksandra, Nagar, Seema, Ramamurthy, Karthikeyan Natesan, Richards, John, Saha, Diptikalyan, Sattigeri, Prasanna, Singh, Moninder, Varshney, Kush R., Zhang, Yunfeng
Format: Journal Article
Language: English
Published: Los Alamitos: IEEE / IEEE Computer Society, 01.07.2019
Summary: Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in ways similar to humans and often in different ways. While fair model-assisted decision making involves more than the application of unbiased models (consideration of the application context, the specifics of the decisions being made, the resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias in machine-learning software is important and possible, but difficult and too often ignored.
ISSN: 0740-7459
eISSN: 1937-4194
DOI: 10.1109/MS.2019.2908514