Cross-Shaped Adversarial Patch Attack

Recent studies have shown that deep learning-based classifiers are vulnerable to malicious inputs, i.e., adversarial examples. A practical approach is to construct a perceptible but localized perturbation, called a patch, that causes well-trained models to misclassify. However, most existing patch-based...
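Since the abstract is truncated, the snippet below is only a generic sketch of the idea it describes: a localized patch, here restricted to a cross-shaped region to match the title, optimized to make a classifier misclassify. It assumes a PyTorch classifier `model`, an input `image` tensor of shape (1, 3, H, W) with pixel values in [0, 1], and its integer class `label`; the helper names `cross_mask`, `apply_patch`, and `optimize_patch` are hypothetical and do not reproduce the authors' method.

import torch
import torch.nn.functional as F

def cross_mask(height, width, arm_len, arm_width, center):
    # Binary mask shaped like a plus sign centered at `center` = (row, col):
    # two perpendicular bars of length 2*arm_len+1 and thickness arm_width.
    cy, cx = center
    w = arm_width // 2
    mask = torch.zeros(1, 1, height, width)
    mask[:, :, cy - w : cy + w + 1, cx - arm_len : cx + arm_len + 1] = 1.0  # horizontal bar
    mask[:, :, cy - arm_len : cy + arm_len + 1, cx - w : cx + w + 1] = 1.0  # vertical bar
    return mask

def apply_patch(image, patch, mask):
    # Overlay patch pixels only where the mask is 1; the rest of the image is untouched.
    return image * (1.0 - mask) + patch * mask

def optimize_patch(model, image, label, mask, steps=200, lr=0.05):
    # Untargeted attack: adjust the patch pixels by gradient ascent on the classification loss.
    patch = torch.rand_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _ in range(steps):
        adv = apply_patch(image, patch.clamp(0.0, 1.0), mask)
        loss = -F.cross_entropy(model(adv), label)  # negated so the optimizer increases the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return apply_patch(image, patch.detach().clamp(0.0, 1.0), mask)

A hypothetical call would be mask = cross_mask(224, 224, arm_len=40, arm_width=8, center=(112, 112)) followed by adv_image = optimize_patch(classifier, image, label, mask); only the pixels inside the cross are ever modified, which is what distinguishes a patch attack from a global perturbation.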

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 4, pp. 2289-2303
Main Authors: Ran, Yu; Wang, Weijia; Li, Mingjie; Li, Lin-Cheng; Wang, Yuan-Gen; Li, Jin
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2024