
Can Foundational Physics Be Saved? - telotortium
https://www.overcomingbias.com/2018/12/can-foundational-physics-be-saved.html
======
Maro
One thing I wish would happen: create a nice, aesthetic, smooth way for a
smart student to go from the two-slit experiment to QED/QCD [in 2-3 years],
while having clarity about what the input parameters of the model are (= needs
to be measured), what is predicted by the model (= this comes out and is
measurable), where an approximation is being made, what the current spacetime
model is assumed to be, etc. Also, a mathematician should be able to look at
this "journey" and confirm that it's not too handwavy.

Given that there's "not much to do", it's weird that there isn't a 1% group
somewhere out there who thinks this is worthwhile.

Context: I studied particle/astro physics 15 years ago and then "dodged the
bullet," as the OP nicely puts it. I felt the way we were taught modern
physics was quite poor/handwavy, especially once the crossing was made into
QFT territory, and it led to a lot of misunderstanding/confusion, which I
still see in comment threads today, including between academics (!). Also,
when I speak to mathematicians about this, they deeply disapprove of the way
physics is taught/run today in this respect, and can routinely point to
misunderstandings/confusions that hinder progress.

A good example (but probably too mathy) is the work of Tamas Matolcsi:
`Spacetime without reference frames` and `Ordinary thermodynamics`.

[https://www.amazon.com/s/ref=dp_byline_sr_book_1?ie=UTF8&tex...](https://www.amazon.com/s/ref=dp_byline_sr_book_1?ie=UTF8&text=Tamas+Matolcsi&search-alias=books&field-author=Tamas+Matolcsi&sort=relevancerank)

~~~
TheOtherHobbes
It's handwavy because no one really knows what they're doing. Even Feynman
said that no one understands QM, and things have hardly improved since then.
Some details have been filled in, but there's a lot of hostility to the idea
that science should even be thinking about quantum fundamentals, because it's
such a philosophical tar pit.

Modern QFT is like being given a laptop with an Excel spreadsheet that does
some clever things, without having any idea what a processor is, what memory
is, how a hard drive works, or why the case is that funny shape.

The kinder way to put this is to say there's a lot of educated guessing going
on. But the fundamental problem - reconciling GR and QFT - can't be solved
without a completely new mental model. And academic research isn't funded in
ways that reward the generation of creative new models.

It's become a bit of a cargo-culty pursuit. You're rewarded if you know the
words to the songs that everyone else is singing, but if you try to invent a
new genre you'll probably be told it's career suicide and Don't Go There.

~~~
XorNot
No, the problem is that any new model needs to be more than "interesting" -
plenty of those are kicking around. What it needs to do is generate a testable
prediction: explain something we currently can't explain and, ideally, point
the way to something we should expect to see if it's correct, in a convincing
way.

The entire argument _against_ string theory is, in many ways, founded on the
fact that, as a group of hypotheses, none of them predict anything that would
rule them in or out within a practically reachable detection limit.

------
walrus1066
'During experiments, the LHC creates about a billion proton-proton collisions
per second. … The events are filtered in real time and discarded unless an
algorithm marks them as interesting. From a billion events, this “trigger
mechanism” keeps only one hundred to two hundred selected ones. … That CERN
has spent the last ten years deleting data that hold the key to new
fundamental physics is what I would call the nightmare scenario.'

I don't know what the author is proposing. We don't have enough storage to
persist _all_ collision events; that would require zettabytes of disk space.
The detectors are bottlenecked to storing a few hundred events a second, so
they need to filter out the vast majority of the 40 million collision events
per second that occur in the LHC.

Even then, I recall around 1% or so of the stored events were saved via a
'minimum bias' trigger, one that doesn't apply any filter criteria. This was
mainly for calibration purposes and cross-checking simulation data. So we
still have petabytes of collision events that didn't have any selection
criteria applied.
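A quick back-of-envelope check of why persisting everything is infeasible. This is a sketch with round numbers from this thread; the ~1 MB raw event size is an assumption (based on the CMS figure quoted elsewhere here), not an official CERN number:

```python
# Rough sanity check of the storage claim above (round numbers, not
# official figures).
EVENTS_PER_SEC = 1e9           # ~a billion collisions per second
BYTES_PER_EVENT = 1e6          # ~1 MB per raw event (assumed)
SECONDS_PER_YEAR = 3.156e7

bytes_per_sec = EVENTS_PER_SEC * BYTES_PER_EVENT   # 1e15 B/s = 1 PB/s
bytes_per_year = bytes_per_sec * SECONDS_PER_YEAR  # ~3.2e22 B

print(f"{bytes_per_sec / 1e15:.1f} PB/s")          # 1.0 PB/s
print(f"{bytes_per_year / 1e21:.0f} ZB/year")      # 32 ZB/year
```

So storing every raw event really would run to zettabytes per year, which is why a trigger is unavoidable.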

~~~
magicalhippo
I found this[1] talk about LHCb results and future directions illuminating.
The speaker explains the trigger setup during the first few minutes, and later
explains how they're searching for new physics.

For run 2 of the LHC, they used 50,000 CPU cores for their software triggers,
after the hardware trigger had reduced the 40 MHz input rate down to 1 MHz.
The final output of the software triggers is 12.5 kHz, which is persisted to
disk. Keep in mind this is just for the LHCb detector.

For run 3, they're planning to remove the hardware trigger entirely, running
the software triggers directly on the 40 MHz signal. This would allow them to
reprogram the triggers during the run, in case some new interesting theory
comes along whose signal, for some reason, their current triggers wouldn't
identify.

[1]: [http://pirsa.org/16010060](http://pirsa.org/16010060)
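The rejection factors implied by those rates can be checked directly (a sketch using only the approximate run-2 numbers quoted above):

```python
# LHCb run-2 trigger rates as quoted in the talk above (approximate).
input_rate_hz = 40e6    # bunch-crossing rate into the hardware trigger
hw_out_hz = 1e6         # hardware trigger output rate
sw_out_hz = 12.5e3      # software trigger output rate, persisted to disk

hw_rejection = input_rate_hz / hw_out_hz  # hardware keeps ~1 in 40
sw_rejection = hw_out_hz / sw_out_hz      # software keeps ~1 in 80
overall = input_rate_hz / sw_out_hz       # overall ~1 in 3200

print(f"hardware: 1 in {hw_rejection:.0f}")  # 1 in 40
print(f"software: 1 in {sw_rejection:.0f}")  # 1 in 80
print(f"overall:  1 in {overall:.0f}")       # 1 in 3200
```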

~~~
walrus1066
Cool stuff.

The computation side of the LHC is really impressive. For a full software
trigger, you have 25 nanoseconds per bunch crossing on average in which to
load all the raw collision data, reconstruct hundreds of particle tracks,
calculate their momenta, join them up to figure out their decay vertices,
etc., and then decide whether to store the event.

I recall LHCb could afford higher trigger rates than CMS/ATLAS because an
LHCb event is smaller (~100 kB vs ~1 MB for CMS): the detector only covers
300 milliradians from the collision axis, in one direction, whereas CMS and
ATLAS have full coverage.
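Those event sizes translate directly into disk bandwidth. A sketch using the 12.5 kHz LHCb software-trigger output rate quoted upthread, with the approximate event sizes above:

```python
lhcb_rate_hz = 12.5e3        # LHCb software-trigger output rate (run 2)
lhcb_event_bytes = 100e3     # ~100 kB per LHCb event
cms_event_bytes = 1e6        # ~1 MB per CMS event

lhcb_bw = lhcb_rate_hz * lhcb_event_bytes
print(f"LHCb to disk: {lhcb_bw / 1e9:.2f} GB/s")  # 1.25 GB/s

# The same trigger rate with CMS-sized events would need 10x the bandwidth,
# which is one reason the full-coverage detectors keep fewer events:
print(f"at 1 MB/event: {lhcb_rate_hz * cms_event_bytes / 1e9:.1f} GB/s")  # 12.5 GB/s
```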

------
JoshCalbet
I'm sorry, I stopped reading when I found:

> From a billion events, this “trigger mechanism” keeps only one hundred to
> two hundred selected ones. … That CERN has spent the last ten years deleting
> data that hold the key to new fundamental physics is what I would call the
> nightmare scenario.

These words show the opinion comes from someone who doesn't know or understand
the technology behind these systems, or the computing power and capabilities
at CERN. Of course they have limitations, but they are the kind of limitations
that keep pushing technology forward. Putting "trigger mechanism" in quotes
suggests how strange those concepts are to the author.

The trigger system is a finely tuned filter that allows the electronics to
work at all; otherwise they would be overloaded and crash, receiving a rate of
data that simply cannot be handled. How the trigger is set depends on the
specific physical process being studied and is supported by theory and
simulation. The scope that can be tested is also limited, so the selection is
highly scrutinized and reviewed.

~~~
antt
You do realize you just said what the author said but in a mocking tone?

Their point is exactly that the trigger systems are controlled by algorithms
based on current theories, which so far have shown nothing for all their
efforts.

I haven't been in the field since the LHC started operating, but from memory
it was indeed billions of events per second, and storage could only keep up
with hundreds.

~~~
abeld
Every experiment will be based on some sort of theory: without that, you can't
design the equipment, and you wouldn't be able to interpret the data that is
generated.

If by "so far have shown nothing for all their effort" you mean that no new
results were found, that isn't due to the current theories being bad: in fact,
it is due to them being too good: they describe the results too well and thus
there is not enough difference between the current theories and current
results that would require a new theory.

~~~
antt
If we threw out 99% of data based on current theories, we wouldn't have
figured out that there was a problem with heliocentrism until the 20th
century.

~~~
ubernostrum
The counterpoint is that if we were more eager to ditch theories as soon as
they fail to explain absolutely everything in the universe, we'd have tossed
Newton's laws in the early 1800s because they didn't accurately predict the
orbit of Uranus.

(It turned out they did; nobody at the time knew to account for the
perturbations caused by the as-yet-undiscovered Neptune.)

And depending on how far you want to take this, the neutrino probably would've
been dismissed, too, before eventually being detected.

~~~
antt
Which is nonsensical since we are talking about collecting data, not changing
theories because of the data.

"Oh look, Uranus is doing that odd things again. Doesn't fit in with Newtonian
mechanics or my current pet theory so better throw it out."

------
letitgo12345
This kind of glosses over some of the very real progress made in the last 30
years -- the AdS/CFT correspondence, the solar neutrino measurements that led
to the discovery of neutrino oscillation, the AMPS thought experiment
regarding black holes, etc.

LIGO with gravitational-wave astronomy and the Event Horizon Telescope with
very long baseline interferometry are opening up new ways to observe the
universe.

Yes, the data is not as abundant as it was a few decades ago, but that's the
nature of the game. Our current models work very well at describing accessible
energies, so this is going to take longer and require more and more ingenuity.
I don't think the problem here is lack of motivation for getting good answers
-- on the contrary, anyone who can discover something major is going to have a
lot of fame and credit come to them.

------
archgoon
Sabine Hossenfelder also recently commented on the market proposal that Robin
Hanson made. Interestingly, over the past decade she's come to feel it's a
much better idea than she originally thought.

> But what if scientists could make larger gains by betting smartly than they
> could make by promoting their own research? “Who would bet against their
> career?” I asked Robin when we spoke last week.
>
> “You did,” he pointed out.

[http://backreaction.blogspot.com/2018/12/dont-ask-what-scien...](http://backreaction.blogspot.com/2018/12/dont-ask-what-science-can-do-for-you.html)

------
rjf72
There's one correlation I find really interesting here. The time frame in
which physics started to "dry up," as the article puts it, corresponds very
strongly with the transition away from a strong connection between
experimental and theoretical physics. Instead, theoretical physics began to
more actively build upon itself, with extensive use of model-driven tests.

The biggest issue with using models as a tool in science is that they start
to become unfalsifiable. To avoid the hornet's nest of modern models, consider
geocentrism - the belief that the Earth was uniquely at the center of the
solar system, universe, and everything. In times before telescopes this belief
was justified by models. If you assume it is true, then you get some really
bizarre behavior from the planets that now orbit the Earth. In particular some
planets will suddenly stop and start moving the other way, most planets will
travel in 'swirly' patterns, and so on. But when you have a model none of this
matters. Planets need to go backwards? Sure, why not. They travel in swirlies?
Sure, why not.

So you get these increasingly convoluted and complex theories, but in spite of
how irrational they seem - they are supported by what we see. But at some
point you're going to reach a dead end when the model becomes so intractable
that it becomes impossible to jury-rig yet another observation into it. And
it's only at that point that we start to scratch our head and wonder what's
going on. And finding the problem there can be inconceivably difficult because
it can be something far more fundamental than you'd ever look for. For
instance in a geocentric universe you might search for why planets travel in
swirlies. Yet you're at a much higher level than the actual problem - which is
that they don't actually travel in swirlies. And in this toy example things
are much better than they might be in our reality. There you're only a couple
of 'fundamentals' separated from the real problem. With our rapid pace of
publication and 'stair stepping', models advance and build upon themselves
exponentially more rapidly.

Like a single cog breaking in a clock, all it takes is a single falsehood
assumed as truth in a model to begin to undermine the entire phenomenally
complex system.

------
fouc
Don't miss the author's (Bee's) comment:

[https://www.overcomingbias.com/2018/12/can-foundational-phys...](https://www.overcomingbias.com/2018/12/can-foundational-physics-be-saved.html#comment-4240946455)

>But I have on my blog discussed what I think should be done, eg here:

>[http://backreaction.blogspot.com/2017/03/academia-is-fucked-...](http://backreaction.blogspot.com/2017/03/academia-is-fucked-up-so-why-isnt.html)

>Which is a project I have partly realized, see here

>[http://scimeter.org](http://scimeter.org)

>And in case that isn't enough, I have a 15 page proposal here:

>[https://fias.uni-frankfurt.de/~hossi/Physics/PartB2_SciMeter...](https://fias.uni-frankfurt.de/~hossi/Physics/PartB2_SciMeter.pdf)

------
rumcajz
If the LHC is producing data that confirm our theories, then all the better,
no? Maybe we should instead focus on different, under-researched areas of
physics? Or maybe on different sciences?

~~~
fantispug
Absolutely. There are some known anomalies in particle physics (e.g. neutrino
oscillations) but for the most part it is insanely robust.

What's really hard is moving from these first principles to modelling
real-world phenomena; there are huge problems like protein folding,
low-temperature superconductors, and even just predicting the properties of
new materials and chemicals.

It's still important for some people to work on fundamental physics, but I
suspect there is a lot more opportunity in mere phenomenology.

------
AlexCoventry
I expect informative new physics data will come out of the failure of quantum
computing devices. Actual quantum supremacy would be a nice consolation prize,
though.

~~~
garmaine
What do you mean? There are working devices today.

~~~
AlexCoventry
There's no clear path from today's devices to quantum supremacy.

~~~
garmaine
Actually there are. But in any case, it's not the same as "the failure of
quantum computing devices." Maybe you meant market failure?

~~~
AlexCoventry
No, I mean failure to achieve the anticipated speed up.

~~~
garmaine
That's like saying quicksort will fail to achieve O(n log n) sort times. The
processes underlying quantum computers don't exist in isolation. It's not some
unique property of an untested physical theory (like string theory or quantum
gravity) that might end up being totally wrong. It's the same physical
processes you can test in any college physics laboratory (even your garage,
if you're willing to put in some work), and which underlie basic chemistry and
materials science. The computer or phone you're reading this on works by the
same underlying physical properties that quantum computers are built on.

The only reason we don't have quantum supremacy today is that it requires
atomic precision in construction, something we currently lack the ability to
do, but which nothing prevents us from achieving in principle.

We will get there, eventually.

~~~
AlexCoventry
It's exactly that necessary precision which makes me skeptical. It's a
different regime than we've been able to test, so far.

Anyway, I'm not saying it's impossible it'll work out that way, just that I
think it's unlikely, and the less interesting possibility. I definitely think
the attempts to make it work are worthwhile.

------
meroes
I was under the impression we aren't expecting to see any new physics at CERN
anymore anyway, as its energies are too low to probe it. Aren't physicists
kind of waiting for the next scale of colliders?

------
baxtr
Reminds me of the book “Constructing Quarks”

------
wmnwmn
It's funny how nonphysicists (this author) and non-productive physicists
(Hossenfelder) beat this drum most loudly. How about we do this: let the
people who are obviously smartest make their own decision about what is most
promising to work on, and have enough modesty to realize that their decision
is better informed than our efforts to advise.

~~~
pdonis
_> non-productive physicists (Hossenfelder)_

Why do you think Hossenfelder is a non-productive physicist?

 _> How about we do this: let the people who are obviously smartest make their
own decision about what is most promising to work on_

We've been doing that all along in physics, and it doesn't seem to be working
out.

~~~
wallace_f
It's remarkable how rotten the state of things is in academia, while everyone
beats around the bush without outright saying it.

Some of the best physicists in the world right now are likely caught up in
just providing for themselves. Why can't someone in Africa, or wherever, get a
degree in physics from Harvard? Why do they need anyone's permission to have
access to this? Why can't they at least have access to the course material and
testing, so they can openly compete? Who is afraid of the competition? There's
no ethical justification for that. The world already spends trillions in
public and private money. This is just one criticism, and I'm not alone in
classifying academia as rotten; Feynman said the same. So few people have the
bravery to stand up to an entire socioeconomic complex, even when it means
people will die and projects will fail dramatically: see the NASA Challenger
groupthink disaster (which Feynman also criticized).

~~~
pdonis
_> Why cant someone in Africa, or wherever, get a degree in physics from
Harvard?_

I would put this somewhat differently: why should you need a degree in physics
from Harvard to do physics? What value does that credential actually add?

Note, btw, that my own alma mater, MIT, has all of its course materials
(lectures, problem sets, selected solutions) available online for free:

[https://ocw.mit.edu/index.htm](https://ocw.mit.edu/index.htm)

~~~
wallace_f
Yeah, good point. But some kind of signalling is helpful, no?

And OCW is a great project, but not _all_ of the course material is online.
Also, telling an employer or researcher "I took some OCW courses" isn't a very
good signal. A much better signal would be, "I passed such-and-such
examinations with such-and-such scores." Open competition, I think, is
important.

~~~
pdonis
_> some kind of signalling is helpful, no?_

I think calling it "signaling" highlights an important (and troubling) point.
Employers and researchers are trying to predict future performance; degrees
are supposed to be a measure of one's potential for future performance, and
the quality of the institution that granted the degree is supposed to factor
into that measure. But over time, institutions have an incentive to reduce
rigor and quality in order to cut costs, while still taking advantage of the
full perceived value of the degrees they grant based on their past rigor and
quality (for example, when charging tuition). I think the common tendency to
regard degrees as a form of "signaling" is a tacit recognition that this goes
on.

 _> A much better signal would be, "I passed such and such examinations with
such and such scores."_

I agree that this would be a much better predictor of potential for future
performance, _if_ the institutions grading the examinations and providing the
scores were completely unconnected with the institutions that constructed the
examinations. (And of course the examinations used for this would have to be
different from the ones available over the Internet to everyone.)

------
nyc111
We also need to suspect that no state will spend immense amounts of cash to
build an expensive machine to chase supposed particles unless it can also
develop and test militarily useful technologies, such as high-vacuum
technology. CERN's website explicitly states that they are not about military
technology, but as per a recent post on HN we know that CERN officials can lie
when it suits them. I'm not saying it's good or bad that they do military
research, if they do (after all, the internet came out of DARPA), but I'm
saying that the particle search may just be a cover and not their main
purpose.

~~~
T-A
Here's a list of CERN's member states, observers, and non-members with
cooperation agreements:

[https://home.cern/about/who-we-are/our-governance/member-sta...](https://home.cern/about/who-we-are/our-governance/member-states)

It's basically every developed country on the planet, including the US,
Russia, and China. And if you had ever visited CERN, you would know that
anyone can go pretty much anywhere; the strongest deterrent you're likely to
encounter is a sign warning about possible radiation exposure.

It's hard to imagine a worse place to try doing military research.

~~~
nyc111
> It's hard to imagine a worse place to try doing military research.

You are right, of course. I don't mean overt military research. They can
develop all kinds of high-vacuum and laser technologies to search for elusive
particles, and then it's trivial to turn that research into laser guns. But
this is a guess. To me, you have to suspend disbelief to believe that a
government, any government, will spend money to add a few more particles to
the Standard Model unless there is something in it for itself. This is only a
guess; I might be wrong.

~~~
T-A
It's a combination of two things:

1) Training facility for new engineers and scientists, most of whom will
eventually leave academia for jobs (it is hoped) in the tax-paying sector.

2) Boondoggle to economically support contractors (because they hire large
numbers of voters and/or fill an important function but suffer from uneven
demand).

