
> The BMA [Bayesian Model Average, or total probability] represents epistemic uncertainty — that is, uncertainty over which setting of weights (hypothesis) is correct, given limited data

It's easy to come up with model families where a given data set has a high total probability because it has high probability in every model in the family, so the total probability on its own cannot function as a general measure of epistemic uncertainty.
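To make that concrete, here's a minimal sketch (a toy family made up for illustration; NumPy/SciPy assumed) where every member of the family assigns the datum high probability, so the total probability comes out high regardless of how unresolved the choice among members is:

    import numpy as np
    from scipy.stats import norm

    x = 0.0                              # a single observed datum
    thetas = np.array([-0.1, 0.0, 0.1])  # toy family: N(theta, 1)
    prior = np.full(3, 1.0 / 3.0)        # uniform prior over the family

    # total probability (BMA) of the datum: sum_theta p(x|theta) p(theta)
    likelihoods = norm.pdf(x, loc=thetas, scale=1.0)
    total_prob = float(np.sum(likelihoods * prior))

    print(likelihoods)  # ~[0.397, 0.399, 0.397]: high under every member
    print(total_prob)   # ~0.398: high because the family happens to be
                        # centered on the datum, not because we have
                        # resolved which hypothesis is correct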

Unless you're already modeling a neural net, it's extremely unlikely that the model family represented by your Bayesian Deep Learning system includes anything representing the actual data-generating process. It's not just me saying this; Gelman & Shalizi point this out in their "Philosophy and the Practice of Bayesian Statistics":

> ...it is hard to claim that the prior distributions used in applied work represent statisticians’ states of knowledge and belief before examining their data, if only because most statisticians do not believe their models are true, so their prior degree of belief in all of ϴ is not 1 but 0. The prior distribution is more like a regularization device, akin to the penalization terms added to the sum of squared errors when doing ridge regression and the lasso (Hastie, Tibshirani, & Friedman, 2009) or spline smoothing (Wahba, 1990)

(Although they're only talking about the prior here, it applies equally well to the total probability, which is just the prior expectation of the likelihood of the data, p(D) = E_{θ∼p(θ)}[p(D|θ)].)
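You can check that identity numerically; a minimal sketch (conjugate toy model chosen so the closed form is known; NumPy/SciPy assumed):

    import numpy as np
    from scipy.stats import norm

    # Monte Carlo check that the total probability is the prior mean of
    # the likelihood: p(x) = E_{theta ~ prior}[ p(x | theta) ]
    rng = np.random.default_rng(0)
    x = 0.0
    theta = rng.normal(0.0, 1.0, size=200_000)  # prior: theta ~ N(0, 1)
    mc_estimate = norm.pdf(x, loc=theta, scale=1.0).mean()

    # closed form: integrating theta out gives x ~ N(0, sqrt(2))
    exact = norm.pdf(x, loc=0.0, scale=np.sqrt(2.0))
    print(mc_estimate, exact)  # both ~0.2821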

It makes some sense to talk about Bayesian methods quantifying epistemic uncertainty when you have some good reason to believe that some portion of your model family accurately captures the data-generating process, or at least the parts of that process you care about and that are relevant to the predictions you want to make. But that's almost never the case for non-parametric methods.




It seems like what you're saying is: Bayesian methods may be used as models of causal processes, or as mere mechanisms to generate predictions.

And in the former case, models will be parameterized by meaningful causal variables & their effect-strength; in the latter case, parameters have no explanatory role.

And finally: only in the explanatory case can a Bayesian model be interpreted epistemically.

I think I agree with this -- the relevant epistemic interpretation of a model is how well it fits the world, NOT the data! Data is the means by which models are selected. So if a model is not explanatory (i.e., about the world), there is no sense in which it "fits", and thus no epistemic interpretation.


Yes, without some kind of semantics for the model, Bayesian methods are essentially an elaborate form of regularization.
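That reading can be made literal: MAP estimation under a Gaussian prior on the weights is algebraically identical to ridge regression, with penalty lambda = sigma^2 / tau^2. A minimal sketch (all names and numbers hypothetical; NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=50)

    sigma2, tau2 = 0.25, 1.0  # noise variance, prior variance on weights
    lam = sigma2 / tau2       # the induced ridge penalty

    # MAP / ridge closed form: (X'X + lam*I)^{-1} X'y
    w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print(w_map)              # ~[1.0, -2.0, 0.5], shrunk toward zero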



