Uncertainty for deep image classifiers on out-of-distribution data

1 Jan 2021  ·  Tiago Salvador, Alexander Iannantuono, Adam M. Oberman

In addition to achieving high accuracy, in many applications it is important to estimate the probability that a model prediction is correct. Predictive uncertainty is particularly important on out-of-distribution (OOD) data, where accuracy degrades. However, models are typically overconfident, and model calibration on OOD data remains a challenge. In this paper we propose a simple post hoc calibration method that significantly improves on benchmark results [Ovadia et al., 2019] across a wide range of corrupted data. Our method uses outlier exposure to properly calibrate the model probabilities.
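No code accompanies this page, and the abstract does not spell out the calibration procedure. As a rough, hypothetical sketch of what post hoc calibration with an outlier term can look like, the Python snippet below fits a single softmax temperature by minimizing negative log-likelihood on held-out in-distribution logits, plus a cross-entropy-to-uniform penalty on outlier logits standing in for outlier exposure. Every name here (fit_temperature, ood_weight, the uniform-target penalty) is an illustrative assumption, not the authors' method.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax, softmax

def fit_temperature(val_logits, val_labels, ood_logits=None, ood_weight=0.5):
    """Fit a single temperature T > 0 on held-out logits.

    Minimizes NLL on in-distribution validation data, plus an optional
    cross-entropy-to-uniform penalty on outlier logits (a schematic
    stand-in for outlier exposure; not necessarily the paper's objective).
    """
    def objective(log_t):
        t = np.exp(log_t)  # parametrize T = exp(log_t) so T stays positive
        logp = log_softmax(val_logits / t, axis=1)
        loss = -logp[np.arange(len(val_labels)), val_labels].mean()
        if ood_logits is not None:
            # cross-entropy against the uniform target: -(1/K) * sum_k log p_k
            ood_logp = log_softmax(ood_logits / t, axis=1)
            loss += ood_weight * (-ood_logp.mean(axis=1)).mean()
        return loss

    res = minimize_scalar(objective, bounds=(-3.0, 3.0), method="bounded")
    return np.exp(res.x)

# Toy demo: informative but deliberately overconfident logits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
logits = rng.normal(size=(500, 10))
logits[np.arange(500), labels] += 1.0   # mildly informative signal
logits *= 4.0                           # inflate scale -> overconfident
fake_ood = rng.normal(size=(200, 10)) * 4.0  # stand-in outlier logits

T = fit_temperature(logits, labels, ood_logits=fake_ood)
calibrated = softmax(logits / T, axis=1)  # temperature-scaled probabilities
print(f"fitted temperature: {T:.2f}")
```

Fitting a single scalar after training leaves the classifier's predictions (and hence its accuracy) unchanged while only reshaping its confidence, which is the usual appeal of post hoc approaches.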
