
The standard model is far from a 'theory of everything'. To name but a few problems:

* gravity
* massive neutrinos
* dark matter
* dark energy

It is also a highly parameterised model tuned to fit the data.

The biggest concern is whether we can realistically probe the failings of the standard model with a collider at the ~TeV scale. If we cannot, then the standard model may be the best model of particle physics we will ever achieve.




"Highly parameterized" meaning O(20) free parameters. It matches thousands upon thousands of detailed precision data points.


Pedantically, that's not how O notation works.

But yeah, I agree that the "highly parameterized" complaint is mostly a matter of fashion, and the number of parameters is really not a good reason to try to replace the Standard Model. (There are many good reasons, but this isn't one of them.)

Also, I have yet to see any alternative proposal with fewer parameters.


There is a philosophical discussion to be had about whether 19 physical parameters is "a lot", and another discussion about fine tuning. However, I was primarily referring to the artificial parameters that arise from doing real calculations (renormalisation scale, mass factorisation scale, PDFs, etc.). These plague pretty much all perturbative QCD calculations, and then particle physicists play games like varying them by factors of 1/2 and 2 to get something that looks like error bars...
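To make that last game concrete, here is a minimal sketch of scale variation (my own illustration, not anything from the article or thread): a prediction truncated at O(alpha_s), in this case the leading QCD correction to the e+e- R-ratio with one-loop running, evaluated at mu = Q, Q/2 and 2Q. The hard scale Q = 200 GeV and the reference inputs are just placeholder values.

    import math

    ALPHA_S_MZ = 0.118   # strong coupling at the Z mass (reference value)
    MZ = 91.19           # Z mass in GeV
    NF = 5               # active quark flavours

    def alpha_s_1loop(mu):
        """One-loop running of alpha_s from the reference point (MZ, ALPHA_S_MZ)."""
        b0 = (33 - 2 * NF) / (12 * math.pi)
        return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(mu**2 / MZ**2))

    def r_correction(mu):
        """Leading-order QCD correction factor to the e+e- R-ratio, 1 + alpha_s(mu)/pi.
        The residual mu dependence is purely an artefact of truncating the series."""
        return 1 + alpha_s_1loop(mu) / math.pi

    Q = 200.0  # hypothetical hard scale in GeV
    central = r_correction(Q)
    band = [r_correction(Q / 2), r_correction(2 * Q)]
    print(f"central (mu = Q):      {central:.5f}")
    print(f"scale band (Q/2..2Q):  {min(band):.5f} .. {max(band):.5f}")

The spread shrinks as higher orders are included, which is about the only real justification for the factor-of-two convention.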


The number of SM parameters is not a lot given the reach of the model, which covers literally every physical phenomenon ever observed on Earth in detail, except gravity. Thousands of independent experiments, and observational data on a scale so absurdly large it's hard to state plainly. Any philosopher who wants to claim nineteen parameters is large is out of their mind!
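For reference, one conventional counting of the nineteen (neutrino masses and mixings would add more, and the Higgs-sector pair can be chosen differently):

\begin{aligned}
 &\;9 && \text{charged fermion masses (6 quarks, 3 charged leptons)}\\
+&\;4 && \text{CKM parameters (3 mixing angles, 1 CP phase)}\\
+&\;3 && \text{gauge couplings } (g_1,\, g_2,\, g_3)\\
+&\;2 && \text{Higgs-sector parameters } (v,\, m_H)\\
+&\;1 && \text{QCD vacuum angle } \bar\theta\\
=&\;19 &&
\end{aligned}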

Fine tuning, I agree, is a philosophical issue. I'm a physicist, and I don't buy it. Why does everything have to be perturbatively pleasant? Nobody promised us that.

The issue of artificial parameters is a red herring, I think. Properly computed, of course, well-defined observables are renormalization scale independent. You might have to pick a scheme/scale to do the calculation, but whatever scale dependence remains is an indication of some perturbative truncation. The continuum limit of LQCD, for example, produces real observables with no renormalization scale dependence. Hell, renormalization is not even mysterious in a computational approach.
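As an equation (standard RG bookkeeping, not specific to this thread): an observable computed to all orders is independent of the renormalization scale, and truncating the series at order n leaves a residual dependence that is formally one order higher,

\[
\mu^{2}\,\frac{\mathrm{d}}{\mathrm{d}\mu^{2}}\,O(Q) \;=\; 0,
\qquad
\mu^{2}\,\frac{\mathrm{d}}{\mathrm{d}\mu^{2}}\,O^{(n)}(Q,\mu) \;=\; \mathcal{O}\!\left(\alpha_s^{\,n+1}\right),
\]

which is why the residual variation is read as a (crude) estimate of the missing higher orders.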


> The standard model is far from a 'theory of everything'. To name but a few problems...

You missed a bit of detail: Reality, and The Hard Problem of Consciousness.

Granted, this is often not a popular topic of discussion (if not ~taboo), but it's actually rather important imho.

The best thing I've ever come across that illustrates the gap between how materialists think about reality and how (some) "non-materialists" do (in this case the Tibetan Buddhist Alan Wallace) is the video below. Seeing the way two highly competent but very different thinkers approach the problem space is enlightening, although it might require some background in both domains to appreciate (so that Alan's case doesn't come across as "woo woo").

The Nature of Reality: A Dialogue Between a Buddhist Scholar and a Theoretical Physicist (Sean Carroll)

https://youtu.be/pLbSlC0Pucw



