Towards an Intrinsic Definition of Robustness for a Classifier

Bibliographic Details
Published in: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4015 - 4019
Main Authors: Giraudon, Théo; Gripon, Vincent; Löwe, Matthias; Vermet, Franck
Format: Conference Proceeding
Language: English
Published: IEEE, 06.06.2021

More Information
Summary: Finding good measures of robustness - i.e. the ability to correctly classify corrupted input signals - of a trained classifier is an important question for sensitive practical applications. In this paper, we point out that averaging the radius of robustness of samples in a validation set is a statistically weak measure. We propose instead to weight the importance of samples depending on their difficulty. We motivate the proposed score by a theoretical case study using logistic regression. We also empirically demonstrate the ability of the proposed score to measure robustness of classifiers with little dependence on the choice of samples in more complex settings, including deep convolutional neural networks and real datasets.
ISSN: 2379-190X
DOI: 10.1109/ICASSP39728.2021.9414573
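
The summary above contrasts a plain average of per-sample robustness radii with a difficulty-weighted score, but this record does not spell out the paper's weighting scheme. The following is only a minimal sketch, assuming a linear (logistic-regression-style) classifier whose robustness radius is the L2 distance to the decision boundary, and using a hypothetical difficulty weight (here, inversely related to that distance) purely for illustration:

```python
import numpy as np

def robustness_radius_linear(x, w, b):
    """Distance from sample x to the decision boundary w.x + b = 0 of a
    linear (e.g. logistic-regression) classifier: the radius of the largest
    L2 perturbation that cannot flip the predicted class."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def mean_robustness(X, w, b):
    """Plain average of per-sample radii over a validation set
    (the measure the abstract describes as statistically weak)."""
    return float(np.mean([robustness_radius_linear(x, w, b) for x in X]))

def weighted_robustness(X, w, b, weights):
    """Weighted average of per-sample radii; `weights` is a hypothetical
    per-sample importance reflecting sample difficulty, not the paper's
    exact scheme."""
    radii = np.array([robustness_radius_linear(x, w, b) for x in X])
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, radii) / weights.sum())

# Toy usage: weight harder samples (those closer to the boundary) more heavily.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    w, b = np.array([1.0, -2.0]), 0.5       # parameters of a trained linear model
    radii = np.array([robustness_radius_linear(x, w, b) for x in X])
    weights = 1.0 / (radii + 1e-6)          # illustrative difficulty weights
    print(mean_robustness(X, w, b), weighted_robustness(X, w, b, weights))
```

The point of the contrast is only that the weighted score depends less on which validation samples happen to be drawn; the actual weighting and its analysis are given in the paper itself.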