Exploiting Frequency Spectrum of Adversarial Images for General Robustness


Bibliographic Details
Published in: arXiv.org
Main Authors: Tan, Chun Yang; Kawamoto, Kazuhiko; Kera, Hiroshi
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 15.05.2023

Summary: In recent years, there has been growing concern over the vulnerability of convolutional neural networks (CNNs) to image perturbations. However, achieving general robustness against different types of perturbations remains challenging, as enhancing robustness to some perturbations (e.g., adversarial perturbations) may degrade robustness to others (e.g., common corruptions). In this paper, we demonstrate that adversarial training with an emphasis on phase components significantly improves model performance on clean, adversarial, and common corruption accuracies. We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between clean and adversarial images to generate two novel training images: adversarial amplitude and adversarial phase images. These images act as substitutes for adversarial images and can be implemented in various adversarial training setups. Through extensive experiments, we demonstrate that our method enables CNNs to gain general robustness against different types of perturbations and yields uniform performance across all types of common corruptions.
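The swap operation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard Fourier decomposition of an image into amplitude and phase spectra, and the function name and interface are hypothetical.

```python
import numpy as np

def adversarial_amplitude_swap(clean, adv):
    """Illustrative sketch of Adversarial Amplitude Swap (hypothetical helper,
    not the paper's code). Swaps the amplitude spectra of a clean image and
    its adversarial counterpart, producing two training images:
      - "adversarial amplitude" image: adversarial amplitude + clean phase
      - "adversarial phase" image:     clean amplitude + adversarial phase
    """
    # Decompose each image into amplitude and phase via the 2-D FFT.
    f_clean = np.fft.fft2(clean, axes=(0, 1))
    f_adv = np.fft.fft2(adv, axes=(0, 1))
    amp_clean, phase_clean = np.abs(f_clean), np.angle(f_clean)
    amp_adv, phase_adv = np.abs(f_adv), np.angle(f_adv)

    # Recombine: amplitude from one image, phase from the other,
    # then invert the FFT and keep the real part.
    adv_amp_img = np.real(
        np.fft.ifft2(amp_adv * np.exp(1j * phase_clean), axes=(0, 1)))
    adv_phase_img = np.real(
        np.fft.ifft2(amp_clean * np.exp(1j * phase_adv), axes=(0, 1)))
    return adv_amp_img, adv_phase_img
```

In a training loop, these two images would replace the raw adversarial example as the augmented input; the abstract's finding is that emphasizing the phase component (the adversarial phase image) is what drives general robustness.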
ISSN:2331-8422