DDoD: Dual Denial of Decision Attacks on Human-AI Teams

Bibliographic Details
Published in: IEEE Pervasive Computing, Vol. 22, No. 1, pp. 1-8
Main Authors: Tag, Benjamin; van Berkel, Niels; Verma, Sunny; Zhao, Benjamin Zi Hao; Berkovsky, Shlomo; Kaafar, Dali; Kostakos, Vassilis; Ohrimenko, Olga
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
ISSN: 1536-1268, 1558-2590
DOI: 10.1109/MPRV.2022.3218773

Summary: Artificial intelligence (AI) systems have been increasingly used to make decision-making processes faster, more accurate, and more efficient. However, such systems are also at constant risk of being attacked. While the majority of attacks targeting AI-based applications aim to manipulate classifiers or training data and alter the output of an AI model, recently proposed sponge attacks against AI models aim to impede the classifier's execution by consuming substantial resources. In this work, we propose dual denial of decision (DDoD) attacks against collaborative human-AI teams. We discuss how such attacks aim to deplete both computational and human resources, and significantly impair decision-making capabilities. We describe DDoD attacks on human and computational resources and present potential risk scenarios in a series of exemplary domains.
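
The abstract refers to sponge attacks, which exhaust a model's resources by feeding it inputs that maximize its inference cost. The sketch below is a loose, hypothetical illustration of that resource-consumption idea only, not the paper's DDoD method: a naive random search for the input that produces the highest measured latency against a stand-in model. The model, input dimensionality, and search budget are placeholders chosen for the example.

```python
# Toy illustration of the sponge-attack idea: search for inputs that
# maximize a model's inference latency. The "model" is a dummy whose
# cost grows with the number of active input entries, standing in for
# data-dependent inference cost; it is NOT the paper's attack.
import time
import numpy as np


def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for a deployed classifier; replace with a real model call."""
    active = x[np.abs(x) > 0.5]          # entries that trigger extra work
    if active.size == 0:
        return np.zeros(1)
    # Workload scales roughly quadratically with the number of active entries.
    return np.tanh(np.outer(active, active)).sum(axis=0)


def find_sponge_input(trials: int = 200, dim: int = 1024, seed: int = 0):
    """Random search for the input with the highest measured inference time."""
    rng = np.random.default_rng(seed)
    worst_x, worst_t = None, 0.0
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0, size=dim)
        start = time.perf_counter()
        predict(x)
        elapsed = time.perf_counter() - start
        if elapsed > worst_t:
            worst_x, worst_t = x, elapsed
    return worst_x, worst_t


if __name__ == "__main__":
    _, t = find_sponge_input()
    print(f"slowest input found took {t * 1e3:.2f} ms per query")
```

In the DDoD setting described in the abstract, such resource-heavy queries would additionally be chosen to generate outputs that demand human review, so that both computational and human capacity are consumed.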