Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy
Main Authors | LeVine, Will; Pikus, Benjamin; Phillips, Jacob; Norman, Berk; Gil, Fernando Amat; Hendryx, Sean |
---|---|
Format | Journal Article |
Language | English |
Published | 22.01.2024 |
Subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online Access | https://arxiv.org/abs/2401.12129 |
Abstract | As deep neural networks become adopted in high-stakes domains, it is crucial
to be able to identify when inference inputs are Out-of-Distribution (OOD) so
that users can be alerted of likely drops in performance and calibration
despite high confidence. Among many others, existing methods use the following
two scores to do so without training on any OOD examples a priori: a learned
temperature and an energy score. In this paper we introduce Ablated Learned
Temperature Energy (or "AbeT" for short), a method which combines these prior
methods in novel ways with effective modifications. Due to these contributions,
AbeT lowers the False Positive Rate at $95\%$ True Positive Rate (FPR@95) by
$35.39\%$ in classification (averaged across all ID and OOD datasets measured)
compared to state of the art without training networks in multiple stages or
requiring hyperparameters or test-time backward passes. We additionally provide
empirical insights as to how our model learns to distinguish between
In-Distribution (ID) and OOD samples while only being explicitly trained on ID
samples via exposure to misclassified ID examples at training time. Lastly, we
show the efficacy of our method in identifying predicted bounding boxes and
pixels corresponding to OOD objects in object detection and semantic
segmentation, respectively - with an AUROC increase of $5.15\%$ in object
detection and both a decrease in FPR@95 of $41.48\%$ and an increase in AUPRC
of $34.20\%$ on average in semantic segmentation compared to previous state of
the art. |
---|---|
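The abstract builds on two existing signals: a learned temperature and an energy score. As a rough illustration of the energy-score side only (a minimal NumPy sketch of the standard energy OOD score, not the authors' AbeT formulation; the function name is illustrative, and the temperature `T` is treated here as a fixed scalar rather than learned):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy-based OOD score E(x) = -T * logsumexp(logits / T).

    In-distribution inputs (confident, peaked logits) tend to get
    lower (more negative) energy than OOD inputs. T is the
    temperature; AbeT learns it, but this sketch takes it as given.
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max(axis=-1, keepdims=True)          # subtract max for numerical stability
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -T * lse

# A peaked (confident) logit vector scores lower than a flat one:
confident = energy_score([10.0, 0.0, 0.0])   # close to -10
uncertain = energy_score([0.0, 0.0, 0.0])    # exactly -ln(3)
```

Thresholding this score (or its negation, depending on sign convention) is what turns a trained classifier into an OOD detector without any OOD training examples.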
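The headline metric in the abstract, FPR@95, is the fraction of OOD inputs mistakenly accepted as in-distribution at the score threshold where 95% of ID inputs are accepted. A minimal sketch of how it is typically computed (the function name and the "higher score = more ID" convention are assumptions for illustration):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """False-positive rate on OOD samples at the threshold that
    accepts 95% of in-distribution samples (TPR = 95%).

    Assumed convention: higher score = more likely in-distribution,
    and a sample is "accepted" when its score meets the threshold.
    """
    threshold = np.percentile(id_scores, 5)   # 95% of ID scores lie at or above this
    return float(np.mean(np.asarray(ood_scores) >= threshold))

# Perfectly separated scores give an FPR@95 of 0.0:
fpr = fpr_at_95_tpr(id_scores=[0.9, 0.8, 0.95, 0.85],
                    ood_scores=[0.1, 0.2, 0.3])
```

Lower is better for FPR@95, which is why the reported 35.39% reduction in classification is the paper's central claim.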
Author | Norman, Berk; Phillips, Jacob; Gil, Fernando Amat; LeVine, Will; Pikus, Benjamin; Hendryx, Sean |
ContentType | Journal Article |
Copyright | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
DOI | 10.48550/arxiv.2401.12129 |
DatabaseName | arXiv Computer Science arXiv.org |
ExternalDocumentID | 2401_12129 |
IngestDate | Wed Jan 24 12:13:54 EST 2024 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
OpenAccessLink | https://arxiv.org/abs/2401.12129 |
PublicationCentury | 2000 |
PublicationDate | 2024-01-22 |
PublicationDecade | 2020 |
PublicationYear | 2024 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Computer Vision and Pattern Recognition Computer Science - Learning |
Title | Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy |
URI | https://arxiv.org/abs/2401.12129 |
linkProvider | Cornell University |