Deceptive AI dehumanizes: The ethics of misattributed intelligence in the design of Generative AI interfaces

Bibliographic Details
Published in: Proceedings (IEEE Symposium on Visual Languages and Human-Centric Computing), pp. 96-108
Main Author: Burgess, Michael
Format: Conference Proceeding
Language: English
Published: IEEE, 02.09.2024
Summary: Designers of artificially intelligent systems create interfaces which induce a misperception of intelligence in their users: users impart capacities to the machine that it lacks, or fail to attribute relevant capacities to themselves. I call this a phenomenology of misattributed intelligence. The design methods by which they do this I call "mystification": techniques that deprive users of an explanatory phenomenological position with respect to the manner of operation of the machine and their own capacities. In this paper I evidence the claims that users are vulnerable to: (1) misattributing capacities of intelligence to interactive generative AI; (2) mistaking their own capacities and role in this interaction; (3) severely misattributing capacities in anthropomorphic interfaces; and evidence (4) harms arising therefrom, which include self-dehumanization. To do this I provide a novel analysis of 'instrumental' vs. 'phenomenological' goals of AI design; a novel critique of existing design practice reaching into the anthropomorphism and dehumanization literature; and conduct pilot studies (n = 240, n = 213) to develop a survey of, and find connections between, misattribution, design practices, use of interactive AI, and users' self-perception and perception of others. Evidenced hypotheses are that: (1) misattribution is strongly predictive of AI-induced dehumanization (p < 0.001); and that (2) modern generative AI design practices make this misattribution worse (p < 0.004). These results should inform researchers (HCC, HCI, XAI, IAI), and responsive practitioners, of a new class of design goals and problems.
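The abstract reports that misattribution scores predict dehumanization scores at p < 0.001. As an illustration only, the sketch below shows one generic way such a predictive hypothesis could be tested on survey data with a simple linear regression; the variable names, scales, and synthetic data are assumptions of this sketch, not the paper's actual instrument or analysis.

```python
# Illustrative sketch only: a generic significance test of the kind the
# abstract describes (misattribution predicting dehumanization). Column
# names, Likert scales, and synthetic data here are hypothetical; the
# paper's actual survey instrument and analysis may differ.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

n = 240  # first pilot-study sample size reported in the abstract
# Hypothetical 1-7 Likert-style composite scores per participant.
misattribution = rng.uniform(1, 7, size=n)
dehumanization = 0.6 * misattribution + rng.normal(0, 1, size=n)

# Ordinary least-squares fit: does misattribution predict dehumanization?
result = linregress(misattribution, dehumanization)
print(f"slope={result.slope:.3f}, r={result.rvalue:.3f}, p={result.pvalue:.3g}")

# A p-value below the chosen alpha (e.g. 0.001, as the abstract reports)
# would indicate the predictive relationship is unlikely under the null.
```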
ISSN: 1943-6106
DOI: 10.1109/VL/HCC60511.2024.00021