Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability

Bibliographic Details
Published in: 2021 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 159–178
Main Authors: Aghakhani, Hojjat; Meng, Dongyu; Wang, Yu-Xiang; Kruegel, Christopher; Vigna, Giovanni
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2021
Summary: A recent source of concern for the security of neural networks is the emergence of clean-label dataset poisoning attacks, wherein correctly labeled poison samples are injected into the training dataset. While these poison samples look legitimate to the human observer, they contain malicious characteristics that trigger a targeted misclassification during inference. We propose a scalable and transferable clean-label poisoning attack against transfer learning, which creates poison images whose center in the feature space lies close to the target image. Our attack, Bullseye Polytope, improves the attack success rate of the current state-of-the-art by 26.75% in end-to-end transfer learning, while increasing attack speed by a factor of 12. We further extend Bullseye Polytope to a more practical attack model by including multiple images of the same object (e.g., from different angles) when crafting the poison samples. We demonstrate that this extension improves attack transferability by over 16% to unseen images (of the same object) without using extra poison samples.
DOI: 10.1109/EuroSP51992.2021.00021
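The core idea in the summary (crafting poisons whose feature-space *center* sits on the target image) can be sketched as a simple optimization. The toy below is not the paper's implementation: it substitutes a hypothetical random linear map for the victim's deep feature extractor, and all dimensions, step sizes, and the `bullseye_loss` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a network's feature extractor: a fixed random
# linear map phi(x) = W @ x. The actual attack uses deep features from
# (surrogate) victim networks; this toy just keeps the objective visible.
d_in, d_feat, k = 32, 16, 5            # input dim, feature dim, # of poisons
W = rng.normal(size=(d_feat, d_in))

def phi(x):
    return W @ x

target = rng.normal(size=d_in)         # the target image, flattened
poisons = rng.normal(size=(k, d_in))   # base images to perturb into poisons

def bullseye_loss(poisons):
    """Squared distance between the poisons' feature-space center and the target."""
    center = phi(poisons.T).mean(axis=1)
    return float(np.sum((center - phi(target)) ** 2))

# Plain gradient descent on the Bullseye objective. Every poison receives
# the same gradient update because only the *mean* feature enters the loss.
lr = 0.02
for _ in range(2000):
    resid = (W @ poisons.T).mean(axis=1) - W @ target
    grad = (2.0 / k) * (W.T @ resid)
    poisons -= lr * grad               # the real attack additionally bounds the
                                       # perturbation so poisons stay clean-label

print(bullseye_loss(poisons))          # near zero: the center sits on the target
```

In the real setting the perturbation is constrained (so each poison still looks like its original, correctly labeled image), and the loss is averaged over an ensemble of substitute networks to improve transferability; the multi-image extension in the abstract replaces the single target feature with features from several views of the target object.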