
The Truth about Scientific Models - MindGods
https://www.scientificamerican.com/article/the-truth-about-scientific-models/
======
btrettel
Sabine's argument is basically that many predictions are conditional, and the
conditions often aren't met in practice, therefore we shouldn't judge
scientific models based on predictions. Well, the conditions _are_ met
_sometimes_ , and those are the times when the models can be fairly validated
or not. Also, you can write "unit tests" that examine submodels with fewer
conditions/assumptions. Getting all the conditions/assumptions right for the
"integration test" is of course hard.

If you can show that your model makes many blind predictions that end up
being correct, that's a good model! I don't think Sabine denies that, but she
sees predictions as impractical. Her proposed alternative doesn't seem any
better to me as it stands, though.

In my experience, physicists tend to prefer "explanatory power" or other
qualitative metrics to more objective ways to determine whether a model is
validated. In contrast, my experience is that engineers and statisticians tend
to prefer objective measures. Explanatory power is a useful concept, but at
present it's rather vague. I did recommend the AIC (Akaike information
criterion) as an objective measure close to what Sabine argues for on her
blog, but no one responded:

[http://backreaction.blogspot.com/2020/05/predictions-are-
ove...](http://backreaction.blogspot.com/2020/05/predictions-are-
overrated.html?showComment=1588873188265#c6900263718570345454)

