Quantile Regularization: Towards Implicit Calibration of Regression Models
Format | Journal Article |
---|---|
Language | English |
Published | 28.02.2020 |
Summary: | Recent works have shown that most deep learning models are often
poorly calibrated, i.e., they may produce overconfident predictions that are
wrong. It is therefore desirable to have models that produce reliable
predictive uncertainty estimates. Several approaches have been proposed
recently to calibrate classification models, but there is relatively little
work on calibrating regression models. We present a method for calibrating
regression models based on a novel quantile regularizer, defined as the
cumulative KL divergence between two CDFs. Unlike most existing approaches
for calibrating regression models, which rely on post-hoc processing of the
model's output and require an additional dataset, our method is trainable in
an end-to-end fashion without requiring an additional dataset. The proposed
regularizer can be used with any training objective for regression. We also
show that post-hoc calibration methods such as Isotonic Calibration sometimes
compound miscalibration, whereas our method provides consistently better
calibration. We provide empirical results demonstrating that the proposed
quantile regularizer significantly improves calibration for regression models
trained using approaches such as Dropout VI and Deep Ensembles. |
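To make the idea concrete: a regression model is quantile-calibrated when its probability integral transform (PIT) values, i.e., the predictive CDF evaluated at the observed targets, are uniformly distributed. The sketch below is a hypothetical illustration only, assuming Gaussian predictive distributions, and it uses a simple squared-deviation proxy for non-uniformity rather than the paper's exact cumulative KL divergence between two CDFs; the function names `gaussian_cdf` and `quantile_penalty` are invented here, not from the paper.

```python
import numpy as np
from math import erf

def gaussian_cdf(y, mu, sigma):
    # CDF of the predictive distribution N(mu, sigma^2), evaluated at y.
    return 0.5 * (1.0 + erf((y - mu) / (sigma * np.sqrt(2.0))))

def quantile_penalty(y_true, mu, sigma):
    """Penalty that is near zero when the model is quantile-calibrated.

    For a calibrated model the PIT values F(y_i) are Uniform(0, 1), so
    their sorted values should track the uniform quantiles i / (n + 1).
    NOTE: this squared-deviation proxy is a stand-in illustration, not
    the paper's cumulative KL divergence.
    """
    pit = np.sort([gaussian_cdf(y, m, s) for y, m, s in zip(y_true, mu, sigma)])
    n = len(pit)
    uniform_q = np.arange(1, n + 1) / (n + 1)
    return float(np.mean((pit - uniform_q) ** 2))

# Usage: an overconfident model (sigma too small) pushes PIT values toward
# 0 and 1, inflating the penalty relative to a well-calibrated model.
rng = np.random.default_rng(0)
mu = rng.normal(size=500)
y = mu + rng.normal(size=500)          # true noise scale is 1.0
calibrated = quantile_penalty(y, mu, np.ones(500))
overconfident = quantile_penalty(y, mu, 0.2 * np.ones(500))
```

Since such a penalty is differentiable in the predictive parameters (up to the sort), it can in principle be added to any regression training objective, which is the sense in which the abstract describes the regularizer as end-to-end trainable.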
---|---|
DOI: | 10.48550/arxiv.2002.12860 |