I only have a master's in engineering physics, but working with statistical modeling and AI I've come to appreciate the "all models are wrong, but some are useful" mindset, and I've started applying it to physical "laws" as well: I no longer see them as divine truths waiting to be discovered, but as models of the world that will always be wrong but are sometimes useful.
From that point of view, what's happening in physics today is no surprise, but it is a bit depressing: we've probably passed the level of complexity at which models are useful and are now adding details that make them less so. I guess you can see it as a form of overfitting, like when less scrupulous AI researchers use the test set for validation.
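As a concrete illustration of the test-set-for-validation pitfall mentioned above (a minimal hypothetical sketch, not anyone's actual experiment): if you select among candidate models by their score on the test set, pure noise can be made to look like signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise binary labels: no model can genuinely beat 50% accuracy.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
X_test, y_test = X[100:], y[100:]

# "Tune" by picking, per feature, the trivial classifier that scores
# best on the *test* set -- the leakage the comment describes.
best = max(
    np.mean((X_test[:, j] > 0).astype(int) == y_test)
    for j in range(20)
)
print(f"best 'test' accuracy after leakage: {best:.2f}")  # reliably above 0.5
```

The reported accuracy is optimistically biased purely because the test set was used to choose the model; a fresh held-out set would pull it back toward chance.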
I understand that her criticism goes beyond the models that are useful; it addresses the problem that there is a whole industry (wasting money and human resources) devoted to inventing and promoting models that do not close any urgent 'gap' and/or are not even practically testable.
Rejoice in the fact that this is only one specific area of physics, and that physics in general is making faster progress than it ever has before, particularly in quantum optics and quantum foundations.
Interesting analogy with overfitting, though I would say this is more of an underfitting problem: we have enough 'training data' to know where our current model fails (the inconsistencies the author mentions, such as dark matter). Our inductive biases must therefore be too strong or incorrect, and relaxing them to increase expressivity would be the typical ML approach here.
The author's point that we must not add complexity to solve "non-problems" is very consistent with the ML analogy, though again the aim would mainly be to avoid baking in too much of the wrong inductive bias and underfitting further.
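To make the underfitting/overfitting contrast in these comments concrete, here is a minimal sketch (my own toy example, not from the thread) using polynomial fits to noisy sine data: a too-rigid model fails on everything, while a too-flexible one chases the training noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a sine wave stand in for "experimental data".
x_train = rng.uniform(0, 2 * np.pi, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 30)
x_test = rng.uniform(0, 2 * np.pi, 30)
y_test = np.sin(x_test) + rng.normal(0, 0.1, 30)

def fit_errors(degree):
    # Least-squares polynomial fit; degree controls model flexibility.
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

# degree 1: inductive bias too strong (underfit), bad on both sets;
# degree 5: about right; degree 15: flexible enough to fit training
# noise (overfit), typically worse on held-out data.
results = {d: fit_errors(d) for d in (1, 5, 15)}
for d, (tr, te) in results.items():
    print(f"degree {d:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

In the physics analogy, "relaxing the inductive bias" corresponds to moving from degree 1 toward degree 5, while "adding complexity to solve non-problems" risks the degree-15 regime.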
Overfitted models do serve a purpose here, though. Particle physics has reached a point where you can't build models and theories on direct observation, so instead the community throws out a huge number of falsifiable, overfitted models with the expectation that all but one or two will eventually be ruled out experimentally. It's certainly a time-consuming, expensive, non-ideal approach, but a valid one. The author's argument seems to be that this approach has become too expensive and slow, and that it's time to revisit the original assumptions and develop a new one.
Many of these models aren't really falsifiable though - or at least, not falsifiable with any technology we can even conceive of today.
Supersymmetry has been ruled out twice already, but with a different choice of parameters it is being proposed again; in fact, you can choose those parameters such that you would need the whole energy output of the Sun, or more, to rule them out. String theory and its few predictions are even worse in this regard.
Models of black hole evaporation would not only require extreme measurement precision, but also a good few billion years to collect any sort of data to confirm or deny.
Personally, I will note that I am hopeful some new insights into the measurement problem will come about from work on quantum computing, which is also eminently practical work.
She proposes to develop theories in more promising areas, ones that have explicit conflicts between different theories (quantum gravity, quantum measurement) or conflicts between theories and observation (dark matter).
The biggest problem with something like supersymmetry is that, contrary to what the previous poster was saying, it is not a falsifiable theory. It has free parameters that can be tuned so that the supersymmetric partners of the known particles have more or less any mass that hasn't been ruled out yet (maybe they can't be planet-sized, but they can certainly have masses that would require an accelerator the size of a solar system to rule out).
How does this work in practice? Do you force everyone to work on a theory they don't believe in or aren't interested in? Do we think that will actually yield useful results?
Scientists are individually working on theories/ideas that they find most interesting.
The only way this sounds practically enforceable is if funding agencies were funding only a single theory and refusing to fund people doing research in other areas. But I don't believe that is the case.
I used this quote in the heading of my dissertation on uncertainty in deep learning, and I think it holds across many fields simultaneously. In ML research we celebrate these enormous models that do everything (GATO, Flamingo, CoCa, etc.) because it feels like we're getting close to something real or universal. I imagine particle physicists feel something similar about extensions of the Standard Model (SUSY(?), quantum gravity).
So in the sense that people get excited about science (see AI lately), I think these models are useful, even if they are pretty mis-specified in the grand scheme of things. I can't speak to Sabine's specific frustrations, but it sounds a little bit like Gary Marcus's concerns about neural networks. In my day-to-day I definitely value a useful model over an exact one.
I rather think that we're past our ability to make useful yet comprehensible (to us) models in physics. The only avenue is to make useful and incomprehensible models. Using machine learning for that is one way. String theory might be on the boundary where it's borderline comprehensible (for a few select people), but at the cost of being only borderline useful (maybe).
This could well be the case, though I don't know how one could falsify it without a superhuman intelligence, so it is rather unscientific.
I think the less controversial stance is that we're running out of road on what we can test with Earth-bound particle accelerators, and there are still open problems. That's a problem with what we can verify rather than what we can conceive, which could be an impasse to a grand unifying theory.
Thought experiments are mostly interesting, hardly ever useful. Take the one about the speed of light: what practical use can we make of it? It led to nuclear bombs, which he didn't like.
"All models are wrong but some models are useful" tells us just to go find useful models, and that the beautiful truth is not important. I don't think it is unimportant.
Re the first paragraph: it's good you think that, because that is literally what we teach and should believe. People who believe otherwise are mistaken. Even so-called fundamental physics is almost always studied within an approximation.
The moment you say "photon" you're making an approximation (usually monochromatic, a plane wave without sources, in the infrared region, and so on). Physicists who think otherwise simply do not understand what they are doing.