Multi-head Uncertainty Inference for Adversarial Attack Detection


Saved in:
Bibliographic Details
Published in arXiv.org
Main Authors Yang, Yuqi, Yang, Songyun, Xie, Jiyang, Si, Zhongwei, Guo, Kai, Zhang, Ke, Liang, Kongming
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 20.12.2022
Subjects
Online Access Get full text

More Information
Summary: Deep neural networks (DNNs) are sensitive and susceptible to tiny perturbations from adversarial attacks, which cause erroneous predictions. Various methods, including adversarial defense and uncertainty inference (UI), have been developed in recent years to defend against adversarial attacks. In this paper, we propose a multi-head uncertainty inference (MH-UI) framework for detecting adversarial attack examples. We adopt a multi-head architecture with multiple prediction heads (i.e., classifiers) to obtain predictions from different depths of the DNN and introduce shallow information into the UI. With independent heads at different depths, the normalized predictions are assumed to follow the same Dirichlet distribution, whose parameters we estimate by moment matching. Cognitive uncertainty introduced by adversarial attacks is reflected and amplified in this distribution. Experimental results show that the proposed MH-UI framework outperforms the referenced UI methods in the adversarial attack detection task under different settings.
ISSN:2331-8422
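
As a rough illustration of the moment-matching step described in the summary, the sketch below estimates Dirichlet parameters from the normalized predictions of several heads and derives a simple uncertainty score from the total concentration. The function names, the score, and the example head outputs are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def dirichlet_moment_match(head_probs):
    """Moment-matching estimate of Dirichlet parameters from per-head predictions.

    head_probs: (num_heads, num_classes) array; each row is one head's
    normalized (softmax) prediction, treated as a draw from Dirichlet(alpha).
    """
    p = np.asarray(head_probs, dtype=np.float64)
    m1 = p.mean(axis=0)         # first moments E[p_c]
    m2 = (p ** 2).mean(axis=0)  # second moments E[p_c^2]
    # For a Dirichlet, (E[p_c] - E[p_c^2]) / (E[p_c^2] - E[p_c]^2) equals the
    # precision alpha_0 for every class c; average the per-class estimates.
    eps = 1e-12
    alpha0 = ((m1 - m2) / np.maximum(m2 - m1 ** 2, eps)).mean()
    return alpha0 * m1          # alpha_c = alpha_0 * E[p_c]

def uncertainty_score(alpha):
    # Low total concentration (precision) means the heads disagree,
    # which is read here as a sign of a possible adversarial input.
    return 1.0 / alpha.sum()

# Hypothetical predictions from four heads over three classes:
heads = [[0.70, 0.20, 0.10],
         [0.60, 0.30, 0.10],
         [0.20, 0.50, 0.30],  # a shallow head that disagrees
         [0.65, 0.25, 0.10]]
alpha = dirichlet_moment_match(heads)
print("alpha:", alpha, "uncertainty:", uncertainty_score(alpha))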