Optimization for Robustness Evaluation beyond \(\ell_p\) Metrics

Bibliographic Details
Published in: arXiv.org
Main Authors: Liang, Hengyue; Liang, Buyun; Cui, Ying; Mitchell, Tim; Sun, Ju
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.11.2022

Summary: Empirical evaluation of deep learning models against adversarial attacks entails solving nontrivial constrained optimization problems. Popular algorithms for solving these constrained problems rely on projected gradient descent (PGD) and require careful tuning of multiple hyperparameters. Moreover, PGD can only handle \(\ell_1\), \(\ell_2\), and \(\ell_\infty\) attack models due to the use of analytical projectors. In this paper, we introduce a novel algorithmic framework that blends the general-purpose constrained-optimization solver PyGRANSO with constraint folding, abbreviated PWCF, to add reliability and generality to robustness evaluation. PWCF 1) finds good-quality solutions without the need for delicate hyperparameter tuning, and 2) can handle general attack models, e.g., general \(\ell_p\) (\(p \geq 0\)) and perceptual attacks, which are inaccessible to PGD-based algorithms.
ISSN: 2331-8422
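
As a point of reference for the summary above, the sketch below illustrates the kind of PGD-based attack that the paper contrasts PWCF against: a minimal \(\ell_\infty\) PGD loop in PyTorch, in which projection onto the feasible set reduces to an analytical element-wise clamp. It is this reliance on analytical projectors that restricts PGD to \(\ell_1\), \(\ell_2\), and \(\ell_\infty\) attack models. The classifier model, the inputs x, the labels y, and the step-size settings are generic placeholders rather than anything taken from the paper, and the sketch is not the authors' PWCF implementation.

```python
# Minimal l_inf PGD attack sketch (illustrative only, not the paper's PWCF method).
# `model` is assumed to be any PyTorch classifier returning logits; `x` and `y` are
# an input batch in the [0, 1] image range and its ground-truth labels.
import torch
import torch.nn.functional as F

def pgd_linf_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize the classification loss subject to ||x_adv - x||_inf <= eps via PGD."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the loss, then the analytical projection onto the
        # l_inf ball: element-wise clamping of the perturbation.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid image range
    return x_adv.detach()
```

For general \(\ell_p\) or perceptual distances, no such closed-form projector is available; that is the gap the summary describes PWCF as addressing with a general-purpose constrained-optimization solver.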