Rethinking Training Schedules For Verifiably Robust Networks



Bibliographic Details
Published in: 2021 IEEE International Conference on Image Processing (ICIP), pp. 464 - 468
Main Authors: Go, Hyojun; Byun, Junyoung; Kim, Changick
Format: Conference Proceeding
Language: English
Published: IEEE, 19.09.2021

More Information
Summary: New and stronger adversarial attacks can threaten existing defenses. This possibility highlights the importance of certified defense methods that train deep neural networks with verifiable robustness guarantees. A range of certified defense methods has been proposed to this end, among which Interval Bound Propagation (IBP) and CROWN-IBP have been demonstrated to be the most effective. However, we observe that CROWN-IBP and IBP suffer from Low Epsilon Overfitting (LEO), a problem arising from their training schedule, which gradually increases the input perturbation bound. We show that LEO can yield poor results even for a simple linear classifier, and we provide further experimental evidence of LEO under conditions that aggravate it. Based on these observations, we propose a new training strategy, BatchMix, which mixes various input perturbation bounds within a mini-batch to alleviate the LEO problem. Experimental results on the MNIST and CIFAR-10 datasets show that BatchMix improves the performance of both IBP and CROWN-IBP by mitigating LEO.
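To make the two ingredients of the summary concrete, the sketch below shows (a) standard interval bound propagation through a single linear layer, and (b) a BatchMix-style step in which each example in the mini-batch receives its own perturbation bound rather than one schedule-driven epsilon for the whole batch. The function names (`ibp_linear`, `batchmix_eps`) and the uniform sampling of epsilons are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def ibp_linear(x_lo, x_hi, W, b):
    """Propagate interval bounds [x_lo, x_hi] through y = x @ W.T + b (IBP).

    The output interval is computed from the interval center and radius:
    the center passes through the layer normally, while the radius is
    scaled by the elementwise absolute value of the weights.
    """
    mid = (x_lo + x_hi) / 2          # interval centers
    rad = (x_hi - x_lo) / 2          # interval radii (non-negative)
    y_mid = mid @ W.T + b
    y_rad = rad @ np.abs(W).T        # radius can only grow through |W|
    return y_mid - y_rad, y_mid + y_rad

def batchmix_eps(batch_size, eps_max, rng):
    """Hypothetical BatchMix-style sampling: draw a separate perturbation
    bound in [0, eps_max] for every example in the mini-batch, instead of
    using a single epsilon dictated by the training schedule."""
    return rng.uniform(0.0, eps_max, size=(batch_size, 1))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # a mini-batch of 4 inputs
eps = batchmix_eps(4, 0.1, rng)      # per-example perturbation bounds
W = rng.normal(size=(2, 3))
b = np.zeros(2)
lo, hi = ibp_linear(x - eps, x + eps, W, b)
assert np.all(lo <= hi)              # intervals remain valid
```

A certified training loss would then be computed from the worst-case logits implied by `(lo, hi)`; mixing epsilons per example is what, per the summary, keeps the network from overfitting to the small perturbation bounds seen early in the schedule.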
ISSN: 2381-8549
DOI: 10.1109/ICIP42928.2021.9506540