To Err is (Not) Human: Examining Beliefs about Errors Made by Artificial Intelligence

Bibliographic Details

Published in: Advances in Consumer Research, Vol. 50, pp. 406-407
Main Authors: Escoe, Brianna; Vanbergen, Noah; Irmak, Caglar
Format: Conference Proceeding
Language: English
Published: Urbana: Association for Consumer Research, 2022
Subjects: Age; Algorithms; Artificial intelligence; Consumer behavior; Consumers; Decision making; Emergency medical care; Marketing; Preferences; Risk aversion; Surgeons
ISSN: 0098-9258

Abstract

Algorithm aversion research largely demonstrates algorithm aversion in tasks related to human intelligence. We offer a deeper understanding by investigating lay beliefs about AI per se: we show that consumers believe AI commits fewer total errors than humans but is more likely to commit a severe error.

Companies are increasingly relying on artificial intelligence (AI) in various aspects of their operations. Yet a great deal of work has demonstrated that consumers are hesitant to adopt AI, a phenomenon referred to as "algorithm aversion" (Jussupow, Benbasat, and Heinzl 2020). Algorithm aversion has been found to occur in domains where uniquely human capabilities (e.g., moral judgment, accounting for uniqueness, competence at making subjective judgments) are relevant (Bigman and Gray 2018; Granulo, Fuchs, and Puntoni 2020; Longoni and Cian 2020). While this work successfully identifies the contexts in which algorithm aversion is likely to be observed, we know less about its psychological underpinnings, or about why algorithm aversion is observed in domains where uniquely human skills are not relevant (see Dietvorst, Simmons, and Massey 2014).

We propose that to understand the psychological processes driving algorithm aversion, we must understand what beliefs consumers hold about AI. Specifically, we propose that over many interactions with computers, consumers learn that AI's incorrect responses are not consistent or systematic in magnitude. This implies that consumers should believe AI is incapable of differentiating between errors, such that a response that is greatly incorrect (a severe error) is just as likely as one that is slightly incorrect (a minor error). Due to this belief, consumers expect AI, as compared to a human, to be more likely to make a severe error, and are therefore reluctant to adopt it. We demonstrate the existence of this lay theory and its impact on consumer preferences in four studies.

In studies 1a and 1b, we provide initial evidence of people's lay beliefs about the likelihood of relatively minor versus severe errors when a task is performed by AI versus a human. Study 1a investigates lay beliefs in a medical context, and study 1b replicates its results in the context of a driverless vehicle. In both studies, participants were shown two line graphs depicting the performance of two service providers (a human vs. a robot surgeon in study 1a; a human driver vs. a driverless car in study 1b). One line (labeled "Surgeon 1" or "Driver 1") was steeper, illustrating more errors overall but predominantly minor ones. The other line (labeled "Surgeon 2" or "Driver 2") was flatter, illustrating fewer errors overall but similar occurrences of major and minor errors. In both studies, the majority reported that the flatter line was more representative of AI (χ²(1) = 9.33, p = .002 and χ²(1) = 8.00, p = .005).
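To make these stimuli concrete, the following minimal Python sketch (ours, for illustration; the error counts are hypothetical and not taken from the paper) contrasts the two profiles the lay theory implies: a "human" profile with many errors concentrated at minor severities, and an "AI" profile with fewer errors spread evenly across severities.

```python
import numpy as np

# Severity scale: 1 = slightly incorrect ... 10 = wildly incorrect.
severity = np.arange(1, 11)

# Hypothetical "human" profile (the steeper line): more errors overall,
# with the mass concentrated at minor severities.
human_errors = 100 * np.exp(-0.5 * (severity - 1))

# Hypothetical "AI" profile (the flatter line): fewer errors overall,
# spread roughly uniformly across severities.
ai_errors = np.full(severity.shape, 8.0)

for name, errors in [("human", human_errors), ("AI", ai_errors)]:
    total = errors.sum()
    p_severe = errors[severity >= 7].sum() / total  # share of errors that are severe
    print(f"{name:>5}: total errors = {total:6.1f}, P(severe | error) = {p_severe:.3f}")
```

Under these assumed numbers, the AI makes far fewer errors in total, yet conditional on erring it is roughly an order of magnitude more likely than the human to err severely, which is exactly the pattern the flatter line conveys.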
In study 2, participants were assigned to one of two conditions in which they were told they would be having an expensive or an inexpensive item delivered. We predicted that people would be more willing to adopt AI in less risky circumstances, because the difference between AI's and a human's error-avoidance tendencies is less relevant if highly consequential errors are implausible. In line with our predictions, participants displayed greater algorithm aversion when the package was expensive (vs. inexpensive; M = 5.35 vs. M = 4.65; t = 2.01, p = .047).

In study 3, we further demonstrate the importance of error likelihood and type in the decision to adopt AI by manipulating error consequentiality within a medical domain. Algorithm aversion should be displayed only when severe errors are possible. Study 3 therefore used a 2 (error type: minor vs. severe) × 2 (medical service provider type: human vs. AI) between-subjects design. Participants were told to imagine that they were suffering from acute stomach pain, had a fever, and had gone to the emergency room. In the minor error condition, participants were told they needed an abdominal X-ray and that only minor errors were possible. In the severe error condition, participants were told they needed emergency surgery to remove their appendix and that minor, moderate, and severe errors were possible. As expected, an interaction emerged between error type and medical service provider type (F(1, 296) = 36.5, p < .001). Algorithm aversion was displayed in the severe error condition, where people were more likely to undergo the procedure if it was performed by a human rather than by AI (p < .001). This effect was attenuated in the minor error condition, where there was no preference between provider types (p = .406).

In study 4, we show that differences in risk aversion impact willingness to adopt AI when a severe error is implicit, but not when it is explicit. The design was a 2 (tram operator type: human vs. AI) × 2 (error severity: high vs. low) × continuous (age: lower vs. higher) mixed design. Age was chosen as a proxy for risk aversion because it is easily obtained and because younger (vs. older) consumers have been shown to underestimate their risk of serious consequences in the context of driving (Delhomme, Verlhiac, and Martha 2009). We expected that when severe errors were made explicit, age would not affect willingness to ride a human- (vs. AI-) operated tram, and all consumers would prefer a tram operated by a human. However, when only minor errors are made explicit and severe errors remain implicit, we expected older (but not younger) consumers to display algorithm aversion. Results revealed a three-way interaction between tram operator type, error severity, and age (B = -0.7, p = .013). In the high error severity condition, we found no interaction between tram operator type and age (F < 1). In contrast, in the low-severity condition, we found a significant interaction between tram operator type and age (B = -0.08, p < .001), such that older (younger) consumers were significantly less (more) likely to ride the tram when it was operated by AI.

Together, our studies reveal how a novel lay belief about AI shapes consumers' willingness to adopt AI across different contexts, furthering our understanding of when and why consumers display algorithm aversion.
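For concreteness, the focal test of study 3 can be illustrated with a short simulation. The sketch below (ours; the sample size and effect sizes are assumptions, not the authors' data or analysis code) generates a 2 (error type) × 2 (provider) between-subjects dataset in which willingness drops for AI only when severe errors are possible, then tests the interaction with a type-II ANOVA, one standard way to obtain an F-test of the form F(1, 296).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 300  # an F(1, 296) denominator is consistent with ~300 participants in a 2 x 2 design

error_type = rng.choice(["minor", "severe"], size=n)
provider = rng.choice(["human", "AI"], size=n)

# Assumed data-generating process mirroring the reported pattern:
# willingness to undergo the procedure drops for AI only when severe errors are possible.
willingness = (
    5.5
    - 1.5 * ((error_type == "severe") & (provider == "AI"))
    + rng.normal(0, 1, size=n)
)

df = pd.DataFrame({"willingness": willingness,
                   "error_type": error_type,
                   "provider": provider})

model = smf.ols("willingness ~ error_type * provider", data=df).fit()
print(anova_lm(model, typ=2))  # the error_type:provider row is the focal interaction test
```

With four cells and n = 300, the residual degrees of freedom are n - 4 = 296, so the interaction row of the ANOVA table corresponds to the kind of F(1, 296) statistic reported above.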