What I Read: The Case for Bayesian Deep Learning

https://cims.nyu.edu/~andrewgw/caseforbdl/

The Case for Bayesian Deep Learning
Andrew Gordon Wilson
January 11, 2020


“…ignoring epistemic uncertainty is a key reason that standard neural network training is miscalibrated. By erroneously assuming that the model… is completely determined by a finite dataset, the predictive distribution becomes overconfident… ignoring epistemic uncertainty also leads to worse accuracy in point predictions, because we are now ignoring all the other compelling explanations for the data.”
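The quote's core claim — that committing to a single model yields an overconfident predictive distribution, while averaging over other compelling explanations softens it — can be illustrated with a toy sketch. This is not Wilson's method, just a minimal Bayesian-model-averaging illustration I constructed: the three logit vectors stand in for three equally plausible models (e.g., samples from a posterior over weights), each confidently favoring a different class.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from three equally plausible "models"
# (stand-ins for posterior samples; each is confident in a different class).
logits = [np.array([4.0, 0.0, 0.0]),
          np.array([0.0, 4.0, 0.0]),
          np.array([0.0, 0.0, 4.0])]

# Committing to one explanation of the data: a sharp, overconfident prediction.
single = softmax(logits[0])

# Bayesian model averaging: average the predictive distributions
# of all compelling explanations.
bma = np.mean([softmax(z) for z in logits], axis=0)

print(single.max())  # ~0.96: one model, overconfident
print(bma.max())     # 1/3 here by symmetry: averaging softens confidence
```

In this contrived symmetric case the averaged prediction is uniform, making the calibration effect stark; with real posteriors the averaging is weighted by how well each explanation fits the data, which is what also improves point-prediction accuracy in the quote's argument.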