Calibrating Deep Neural Networks using Explicit Regularisation and Dynamic Data Pruning

Bibliographic Details
Published in: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 1541-1549
Main Authors: Patra, Rishabh; Hebbalaguppe, Ramya; Dash, Tirtharaj; Shroff, Gautam; Vig, Lovekesh
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2023

Summary: Deep neural networks (DNNs) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate overconfident predictions by pushing down the confidence of the winning class while increasing the confidence of the remaining classes across all test samples. However, from a deployment perspective, an ideal model should (i) generate well-calibrated predictions for high-confidence samples (predicted probability, say, > 0.95) and (ii) generate a higher proportion of legitimate high-confidence samples. To this end, we propose a novel regularization technique that can be used with classification losses, leading to state-of-the-art calibrated predictions at test time. From a deployment standpoint in safety-critical applications, only high-confidence samples from a well-calibrated model are of interest, as the remaining samples must undergo manual inspection. Reducing the predictive confidence of these potentially high-confidence samples is a downside of existing calibration approaches. We mitigate this by proposing a dynamic train-time data pruning strategy that prunes low-confidence samples every few epochs, yielding an increase in confident yet calibrated samples. We demonstrate state-of-the-art calibration performance across image classification benchmarks, reducing training time without much compromise in accuracy. We provide insights into why our dynamic pruning strategy, which prunes low-confidence training samples, leads to an increase in high-confidence samples at test time.
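The abstract only sketches the dynamic pruning idea, so the following is a minimal PyTorch illustration of the train-time loop it describes: every few epochs, the surviving training samples are scored by max-softmax confidence and the least confident ones are dropped. The pruning period (PRUNE_EVERY), the fraction pruned per round (PRUNE_FRACTION), the plain cross-entropy loss, and the toy model and data are all illustrative assumptions; the abstract does not specify the exact schedule or the form of the proposed regularizer.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, Subset

torch.manual_seed(0)

# Synthetic stand-in for an image-classification dataset.
X = torch.randn(1000, 20)
y = torch.randint(0, 5, (1000,))
dataset = TensorDataset(X, y)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Plain cross-entropy as a stand-in; the paper pairs its (unspecified here)
# regularizer with a standard classification loss.
criterion = nn.CrossEntropyLoss()

PRUNE_EVERY = 5       # assumed pruning period ("every few epochs")
PRUNE_FRACTION = 0.1  # assumed: drop the 10% least-confident samples per round

active_indices = list(range(len(dataset)))

for epoch in range(20):
    loader = DataLoader(Subset(dataset, active_indices),
                        batch_size=64, shuffle=True)
    model.train()
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

    # Every PRUNE_EVERY epochs, score the surviving samples by their
    # max-softmax confidence and keep only the most confident ones.
    if (epoch + 1) % PRUNE_EVERY == 0:
        model.eval()
        with torch.no_grad():
            probs = torch.softmax(model(X[active_indices]), dim=1)
            conf = probs.max(dim=1).values
        k = max(1, int(len(active_indices) * (1 - PRUNE_FRACTION)))
        keep_pos = conf.topk(k).indices
        active_indices = [active_indices[i] for i in keep_pos.tolist()]
        print(f"epoch {epoch + 1}: {len(active_indices)} training samples remain")
```

Because later epochs iterate over a shrinking subset, this loop also reflects the claimed reduction in training time; how the pruning interacts with the paper's regularizer to increase high-confidence test samples is the subject of the insights mentioned above.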
ISSN: 2642-9381
DOI: 10.1109/WACV56688.2023.00159