The transparency dilemma: How AI disclosure erodes trust

Bibliographic Details
Published in: Organizational Behavior and Human Decision Processes, Vol. 188, p. 104405
Main Authors: Schilke, Oliver; Reimann, Martin
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2025
Summary:

• AI disclosure erodes trust in the AI user.
• Legitimacy perceptions explain trust erosion from AI disclosure.
• Framing AI disclosure in different ways, knowing about AI usage prior to disclosure, or making AI disclosure mandatory or voluntary does not prevent trust erosion.
• The AI disclosure effect does not equate to mere algorithm aversion; it raises attention and produces doubt.
• The negative trust impact is stronger when AI usage is exposed (rather than self-disclosed).
• Positive technology attitudes and perceiving AI as accurate lessen but do not mute the AI disclosure effect.

As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation.
ISSN: 0749-5978
DOI: 10.1016/j.obhdp.2025.104405