Can Foundational Physics Be Saved? (overcomingbias.com)
67 points by telotortium | 65 comments



One thing I wish would happen: create a nice, aesthetic, smooth way for a smart student to go from the two-slit experiment to QED/QCD [in 2-3 years], while having clarity about what the input parameters are in the model (=needs to be measured), what is predicted by the model (=this comes out and is measurable), where an approximation is being made, what the current spacetime model is assumed to be, etc. Also, a mathematician should be able to look at this "journey" and approve that it's not too handwavy.

Given that there's "not much to do", it's weird that there isn't a 1% group somewhere out there who thinks this is worthwhile.

Context = I studied particle/astro physics 15 years ago and then "dodged the bullet" as the OP nicely puts it. I felt the way we were taught modern physics was quite poor/handwavy, especially once the crossover was made into QFT territory, and led to a lot of misunderstanding/confusion, which I still see in comment threads today, incl. between academics (!). Also, when I speak to mathematicians about this, they deeply disapprove of the way physics is taught/run today in this respect, and can routinely point to misunderstandings/confusion that hinder progress.

A good example (but probably too mathy) is the work of Tamas Matolcsi: `Spacetime without reference frames` and `Ordinary thermodynamics`.

https://www.amazon.com/s/ref=dp_byline_sr_book_1?ie=UTF8&tex...


It's handwavy because no one really knows what they're doing. Even Feynman said that no one understands QM, and things have hardly improved since then. Some details have been filled in, but there's a lot of hostility to the idea that science should even be thinking about quantum fundamentals, because it's such a philosophical tar pit.

Modern QFT is like being given a laptop with an Excel spreadsheet that does some clever things, without having any idea what a processor is, what memory is, how a hard drive works, or why the case is that funny shape.

The kinder way to put this is to say there's a lot of educated guessing going on. But the fundamental problem - reconciling GR and QFT - can't be solved without a completely new mental model. And academic research isn't funded in ways that reward the generation of creative new models.

It's become a bit of a cargo-culty pursuit. You're rewarded if you know the words to the songs that everyone else is singing, but if you try to invent a new genre you'll probably be told it's career suicide and Don't Go There.


No, the problem is that any new model needs to be more than "interesting" - plenty of those are kicking around. What it needs to do is generate a testable prediction - either explain something we currently can't explain, or, ideally, point the way to something we should expect to see if it's correct, in a convincing way.

The entire argument against string theory is, in many ways, founded on the fact that, as a group of hypotheses, none of them predicts anything that would rule them in or out within a practically reachable detection limit.


I think this is very unfair.

>no one really knows what they're doing. Even Feynman said that no one understands QM

Quantum field theory makes possible the single most accurate predictions in all of science. Time and time again people whip out the Feynman (mis)quote to try to argue "oh we don't really know what's happening", but I just cannot see why they think that way. Science is about observing reality and formulating descriptions with predictive power, and QFT is objectively the best description anyone has figured out so far.

>Modern QFT is like being given a laptop with an Excel spreadsheet that does some clever things, without having any idea what a processor is, what memory is, how a hard drive works, or why the case is that funny shape.

Again, preposterous. We know how QFT works (we've come up with it, after all), and we know how to use it to make predictions and confirm them experimentally. If you mean "know" in a deeper, fuzzier sense, then maybe, maybe not.

>the fundamental problem - reconciling GR and QFT - can't be solved without a completely new mental model

>It's become a bit of a cargo-culty pursuit.

If you come up with a better idea than the current understanding, publish your findings and you will instantly become the most respected, popular, and influential physicist alive.

I will concede though that sadly funding is allocated more to "safer" avenues of research than novel ones. That's certainly a problem.


I don't think this is about reconciling QM and GR.

Even without gravity, modern particle physics theory is a big mess imo.


What's interesting is that this is happening in aerodynamics today as students and researchers rediscover on YouTube the amazing Shell Oil-sponsored "High Speed Flight" videos from the 1950s - they're like an instant master's degree:

https://www.youtube.com/watch?v=bELu-if5ckU


Watching this now. It is great!


The unfortunate truth is that even among good theoretical physicists there are few who would be able to teach such a course. The main reason is that a mathematical description of some of these things is still scattered across the research literature. The best attempt that I am aware of is the special year at Princeton in the 90s; it produced a fantastic two-volume collection of lectures, “Quantum Fields and Strings: A Course for Mathematicians”. Weinberg’s three-volume book on QFT is also pretty nice.


I (tried to) use Vols. 1 and 2 of Weinberg for QFT courses about 12 years ago; I don't agree that they're good in the sense of being clear / not handwavy.


Matolcsi’s books are available here for those interested: http://szofi.elte.hu/~szaboa/MatolcsiKonyvek/


Thanks! The 7 page preface of the Spacetime book gives a good argument/overview of my original point.


I wonder whether using computers more (whether numerical or symbolic calculations) could remove some of the burden of understanding mathematical techniques and make this process smoother. Or maybe it would just obscure the fundamental assumptions to software and increase the burden on learning computational techniques.

Does it matter whether mathematicians think it is handwavy? If any two trained practitioners can get to the same result of a calculation and it matches experiment to appropriate approximations that seems good enough to me.


> If any two trained practitioners can get to the same result of a calculation and it matches experiment to appropriate approximations that seems good enough to me.

I think that's already true today. The problem is, when theorists are out manufacturing new theories, I think it gets very confusing/handwavy (because everybody is brought up on handwavy fundamentals).

Disclaimer: I'm not a practicing physicist.


There are no "handwavy" fundamentals. The theory is well-studied, well-understood, sound, and produces verifiable results. It is, however, incomplete, in that it does not provide an explanation for certain phenomena.


I took introductory lectures in physics and one prof would repeatedly state that quantum mechanics just cannot be understood, which was probably true for the scope of the audience, but still struck me as cringe-worthy. If I remember correctly that came up with wavefunction collapse - which is indeed just one model, and one that's begging for an explanation - and with fixed energy bands. The conclusion that there's a lot to memorize by heart nevertheless holds for any advanced science.

Whereas I'm currently attending an introductory philosophy lecture and, while physics informs my view in large parts, it's mostly about knocking down misconceptions.

The basic fundamentals, the first principles from which conclusions are derived, should be basic common knowledge. I don't actually need a mysterious slit experiment to be able to tell that my life, so to speak, is unpredictable, yet very regular in the bigger picture. One common problem is language, and there's almost no focus on that in physics education--if I may leave at least one lament--and since I can't say that the assembly of quantum states that is me is very likely to diverge, disentangle and grow decoherent ... philosophy still puts up with concepts like the soul; or free will, which is in the end not about inherent physical properties but just about regret and commitment.


>create a nice, aesthetic, smooth way for a smart student to go from the two-slit experiment to QED/QCD

Not quite sure what you mean by this. This is (part of) the standard curriculum for a physics degree. Of course physics is a vastly bigger field than quantum mechanics, so it takes 4 years to do this journey.


> A good example (but probably too mathy) is the work of Tamas Matolcsi:

Good examples as cautionary tales, or worth a look as how it should be done?


It's good, but very dry.

Look at the spacetime one and read the preface; it gives a good overview of the "program". Read the first chapter and see what you think.

http://szofi.elte.hu/~szaboa/MatolcsiKonyvek/pdf/konyv/Matol...

Disclaimer: I took these courses 10+ years ago from Matolcsi.


Thanks, I already started reading it. Initially my impression was that it's a fairly standard geometric formulation of physics, but I really appreciate the simple explanations.


'During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.'

I don't know what the author is proposing. We don't have enough storage to persist all collision events; that would require zettabytes of disk space. The detectors are bottlenecked to storing a few hundred events a second. Therefore they need to filter out the majority of the 40 million collision events per second that occur in the LHC.

Even then, I recall around 1% or so of the stored events were saved via a 'minimal bias' trigger, one that doesn't apply any filter criteria. This was mainly for calibration purposes and cross-checking simulation data. So we still have petabytes of collision events that didn't have any selection criteria applied.
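
For a sense of scale, here's a rough back-of-envelope in Python, using the 40 million events per second above and assuming roughly 1 MB per raw event (a figure quoted further down the thread for CMS); the exact numbers vary by detector and by year:

    # Back-of-envelope only; ~1 MB/event is an assumption borrowed from a later comment.
    events_per_second = 40e6   # ~40 million collision events/s
    event_size_bytes = 1e6     # ~1 MB per raw event (assumed)
    seconds_per_year = 1e7     # order-of-magnitude LHC live time per year

    raw_rate = events_per_second * event_size_bytes   # ~4e13 B/s, i.e. ~40 TB/s
    per_year = raw_rate * seconds_per_year            # ~4e20 B, i.e. ~0.4 ZB/year

    print(raw_rate / 1e12, "TB/s", per_year / 1e21, "ZB/year")

Even with generous rounding that's zettabyte-scale storage per year, which is why a trigger has to throw most of it away.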


I found this[1] talk about LHCb results and future direction illuminating. He explains the trigger setup during the first few minutes; later on he explains how they're searching for new physics.

For Run 2 of the LHC they used 50,000 CPU cores for their software triggers, after the hardware trigger had reduced the 40 MHz input rate down to 1 MHz. The final output of the software triggers is 12.5 kHz, which is persisted to disk. Keep in mind this is just for the LHCb detector.

For Run 3, they're planning to remove the hardware trigger entirely, running the software triggers directly on the 40 MHz signal. This would allow them to reprogram the triggers during the run, in case some new interesting theory comes along which for some reason has a signal their current triggers won't identify.

[1]: http://pirsa.org/16010060
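
Putting those rates together, a minimal sketch of the reduction factors, using only the figures quoted from the talk above:

    # LHCb Run 2 trigger chain, figures as quoted above; rough factors only.
    input_rate_hz = 40e6     # bunch-crossing rate into the hardware trigger
    after_hw_hz = 1e6        # after the hardware trigger
    after_sw_hz = 12.5e3     # after the software triggers, persisted to disk

    hw_reduction = input_rate_hz / after_hw_hz      # factor of 40
    sw_reduction = after_hw_hz / after_sw_hz        # factor of 80
    total_reduction = input_rate_hz / after_sw_hz   # factor of 3200

    print(hw_reduction, sw_reduction, total_reduction)
    # -> 40.0 80.0 3200.0, i.e. roughly 1 in 3200 crossings survives to storage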


Cool stuff.

The computation side of the LHC is really impressive. For a full software trigger, new collisions arrive every 25 nanoseconds, and in that time (on average, across the whole farm) you have to load all the raw collision data, reconstruct hundreds of particle tracks, calculate their momenta, join them up to figure out their decay vertices, and so on, and then decide whether to store the event.

I recall LHCb could afford higher trigger rates than CMS/ATLAS: an LHCb event is smaller (~100 kB vs ~1 MB for CMS) because the detector only covers 300 milliradians from the collision axis, in one direction, whereas CMS/ATLAS have full coverage.
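
A quick sketch of why the smaller event size matters, using only the numbers quoted in this subthread (treat them as rough, illustrative values):

    # Bandwidth to storage scales as trigger rate x event size.
    lhcb_rate_hz = 12.5e3    # LHCb software-trigger output rate, quoted above
    lhcb_event_b = 100e3     # ~100 kB per LHCb event
    cms_event_b = 1e6        # ~1 MB per CMS event

    lhcb_bandwidth = lhcb_rate_hz * lhcb_event_b   # ~1.25 GB/s to disk
    cms_equivalent = lhcb_rate_hz * cms_event_b    # ~12.5 GB/s if events were CMS-sized

    print(lhcb_bandwidth / 1e9, "GB/s vs", cms_equivalent / 1e9, "GB/s")

So for the same storage bandwidth, a detector with ~10x smaller events can afford roughly 10x the trigger rate.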


I haven't read the book, but from what I gather the author expects theorists to make better guesses at what is going to be important, so that experimental physicists can apply better filters.


I'm sorry, I stopped reading when I found:

> From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.

These words show that the opinion comes from someone who does not know/understand the technology behind these systems, or the computing power and capabilities at CERN. Of course they have limitations, but they're the kind of limitations that keep pushing technology forward. Putting quotes around "trigger mechanism" shows how strange those concepts are to them.

The trigger system is a fine-tuned filter that allows the electronics to work; otherwise they would be overloaded and would crash, since they would be receiving a rate of data that simply cannot be handled. How the trigger is set depends on the specific physical process being studied and is supported by theory and simulation; the scope that can be tested is also limited, so the selection is highly scrutinized and reviewed.


Absolutely. Also worth adding that the detectors have 'minimum bias' triggers that store a small fraction of events without applying any selection criteria. I recall these were mainly used for calibration and verifying simulation data. So we still have many petabytes of collision events that did not have any selection criteria applied.


You do realize you just said what the author said but in a mocking tone?

Their point is exactly that the trigger systems are controlled by algorithms based on current theories, which so far have shown nothing for all their efforts.

I haven't been in the field since the LHC started operating but from memory it was exactly billions of events per second and storage could only keep up with hundreds.


The trigger mechanisms aren't throwing away "data which doesn't fit" - they're throwing away data which does. That's the point - the vast majority of CERN's output is going to be extremely common events we'd expect to see all the time.

No one is setting these things up to go "hmm a particle on a totally unusual trajectory - but not where the Higgs should be so junk it".


Every experiment will be based on some sort of theory: without that, you can't design the equipment, and you wouldn't be able to interpret the data that is generated.

If by "so far have shown nothing for all their effort" you mean that no new results were found, that isn't due to the current theories being bad: in fact, it is due to them being too good: they describe the results too well and thus there is not enough difference between the current theories and current results that would require a new theory.


If we threw out 99% of data based on current theories, we wouldn't have figured out that there was a problem with heliocentrism until the 20th century.


The counterpoint is that if we were more eager to ditch theories as soon as they fail to explain absolutely everything in the universe, we'd have tossed Newton's laws in the early 1800s because they didn't accurately predict the orbit of Uranus.

(turned out they did, but nobody at the time knew to account for perturbations caused by the as-yet-undiscovered Neptune)

And depending on how far you want to take this, the neutrino probably would've been dismissed, too, before eventually being detected.


Which is nonsensical since we are talking about collecting data, not changing theories because of the data.

"Oh look, Uranus is doing that odd things again. Doesn't fit in with Newtonian mechanics or my current pet theory so better throw it out."


This kind of glosses over some of the very real progress made in the last 30 years -- the AdS/CFT correspondence, the solar neutrino measurements that led to the discovery of neutrino oscillations, the AMPS thought experiment with respect to black holes, etc.

LIGO with gravitational wave astronomy and the Event Horizon telescope with very long baseline interferometry are opening up new ways to observe the universe.

Yes, the data is not as abundant as it used to be a few decades ago, but that's the nature of the game. Our current models work very well in terms of describing accessible energies. So this is going to take longer and require more and more ingenuity. I don't think the problem here is lack of motivation for getting good answers -- on the contrary, anyone who can discover something major is going to have a lot of fame and credit come to them.


Sabine Hossenfelder also recently commented on the market proposal that Robin Hanson made. Interestingly, over the past decade she's come to feel it's a much better idea than she originally thought.

>But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.

>“You did,” he pointed out.

http://backreaction.blogspot.com/2018/12/dont-ask-what-scien...


There's one correlation I find really interesting here. The time frame where physics started to "dry up", as the article puts it, corresponds extremely strongly with the transition away from a strong connection between experimental and theoretical physics. Instead, theoretical physics began to more actively build upon itself, with extensive use of model-driven tests.

The biggest issue with using models as a tool in science is that they start to become unfalsifiable. To avoid the hornet's nest of modern models, consider geocentrism - the belief that Earth was uniquely at the center of the solar system, the universe, and everything. In times before telescopes this belief was justified by models. If you assume it is true, then you get some really bizarre behavior from the planets that now orbit the Earth. In particular, some planets will suddenly stop and start moving the other way, most planets will travel in 'swirly' patterns, and so on. But when you have a model, none of this matters. Planets need to go backwards? Sure, why not. They travel in swirlies? Sure, why not.

So you get these increasingly convoluted and complex theories, but in spite of how irrational they seem - they are supported by what we see. But at some point you're going to reach a dead end when the model becomes so intractable that it becomes impossible to juryrig yet another observation into it. And it's only at that point that we start to scratch our head and wonder what's going on. And finding the problem there can be inconceivably difficult because it can be something far more fundamental than you'd ever look for. For instance in a geocentric universe you might search for why planets travel in swirlies. Yet you're at a much higher level than the actual problem - which is that they don't actually travel in swirlies. And in this toy example things are much better than they might be in our reality. There you're only a couple of 'fundamentals' separated from the real problem. With our rapid pace of publication and 'stair stepping', models advance and build upon themselves exponentially more rapidly.

Like a single cog breaking in a clock, all it takes is a single falsehood assumed as truth in a model to begin to undermine the entire phenomenally complex system.


Don't miss the author's (Bee's) comment:

https://www.overcomingbias.com/2018/12/can-foundational-phys...

>But I have on my blog discussed what I think should be done, eg here:

>http://backreaction.blogspot.com/2017/03/academia-is-fucked-...

>Which is a project I have partly realized, see here

>http://scimeter.org

>And in case that isn't enough, I have a 15 page proposal here:

>https://fias.uni-frankfurt.de/~hossi/Physics/PartB2_SciMeter...


If the LHC is producing data that confirm our theories, then all the better, no? Maybe we should instead focus on different, under-researched areas of physics? Or maybe on different sciences?


Absolutely. There are some known anomalies in particle physics (e.g. neutrino oscillations) but for the most part it is insanely robust.

What's really hard is moving from these first principles to modelling real world phenomena; there are huge problems like protein folding, low temperature superconductors and even just predicting properties of new materials and chemicals.

It's still important for some people to work on fundamental physics, but I suspect there is a lot more opportunity in mere phenomenology.


I expect informative new physics data will come out of the failure of quantum computing devices. Actual quantum supremacy would be a nice consolation prize, though.


What do you mean? There are working devices today.


There's no clear path from today's devices to quantum supremacy.


Actually there are. But in any case, it's not the same as "the failure of quantum computing devices." Maybe you meant market failure?


No, I mean failure to achieve the anticipated speed up.


That's like saying quicksort will fail to achieve O(n log n) sort times. The processes underlying quantum computers don't exist in isolation. It's not some unique property of an untested physical theory (like string theory or quantum gravity) that might end up being totally wrong. It's the same physical processes you can test in any college physics laboratory, or even your garage if you're willing to put in some work, and which underlie basic chemistry and materials science. The computer or phone you're reading this on works by the same underlying physics that quantum computers are built on.

The only reason we don't have quantum supremacy today is that quantum computers require atomic precision in their construction, something we lack the ability to do now but which nothing prevents us from doing in principle.

We will get there, eventually.


It's exactly that necessary precision which makes me skeptical. It's a different regime than we've been able to test, so far.

Anyway, I'm not saying it's impossible it'll work out that way, just that I think it's unlikely, and the less interesting possibility. I definitely think the attempts to make it work are worthwhile.


I was under the impression we aren't expecting to see any new physics at CERN anymore anyway, as its energies are too low to probe it. Aren't physicists kind of waiting for the next scale of colliders?


Reminds me of the book “Constructing Quarks”


It's funny how nonphysicists (this author) and non-productive physicists (Hossenfelder) beat this drum most loudly. How about we do this: let the people who are obviously smartest make their own decision about what is most promising to work on, and have enough modesty to realize that their decision is better informed than our efforts to advise.


> non-productive physicists (Hossenfelder)

Why do you think Hossenfelder is a non-productive physicist?

> How about we do this: let the people who are obviously smartest make their own decision about what is most promising to work on

We've been doing that all along in physics, and it doesn't seem to be working out.


It's remarkable how rotten the state of things is in academia, while everyone beats around the bush without outright saying it.

Some of the best physicists in the world right now are likely to be caught up in just providing for themselves. Why can't someone in Africa, or wherever, get a degree in physics from Harvard? Why do they need anyone's permission to have access to this? Why can't they at least have access to course material and testing, so they can openly compete? Who is afraid of the competition? There's no ethical justification for that. The world spends trillions in public and private money already. This is just one criticism, and I'm not alone in classifying academia as rotten. Feynman said the same. So few people have the bravery to stand up to an entire socioeconomic complex, even when it means people will die and projects will fail dramatically: see the NASA Challenger groupthink disaster (which Feynman also criticized).


> Why can't someone in Africa, or wherever, get a degree in physics from Harvard?

I would put this somewhat differently: why should you need a degree in physics from Harvard to do physics? What value does that credential actually add?

Note, btw, that my own alma mater, MIT, has all of its course materials (lectures, problem sets, selected solutions) available online for free:

https://ocw.mit.edu/index.htm


Yea, good point. But some kind of signalling is helpful, no?

And OCW is a great project, but not all the course material is online. Also, saying to an employer or researcher "I took some OCW courses" isn't a very good signal. A much better signal would be, "I passed such-and-such examinations with such-and-such scores." Open competition, I think, is important.


> some kind of signalling is helpful, no?

I think calling it "signaling" highlights an important (and troubling) point. Employers and researchers are trying to predict future performance; degrees are supposed to be a measure of one's potential for future performance, and the quality of the institution that granted the degree is supposed to factor into that measure. But over time, institutions have an incentive to reduce rigor and quality in order to cut costs, while still taking advantage of the full perceived value of the degrees they grant based on their past rigor and quality (for example, when charging tuition). I think the common tendency to regard degrees as a form of "signaling" is a tacit recognition that this goes on.

> A much better signal would be, "I passed such and such examinations with such and such scores."

I agree that this would be a much better predictor of potential for future performance, if the institutions grading the examinations and providing the scores were completely unconnected with the institutions that constructed the examinations. (And of course the examinations used for this would have to be different from the ones available over the Internet to everyone.)


There is a very interesting interview with Edward Teller, who makes the exact point that throwing money at science does NOT produce results, whereas throwing money at technology sometimes does.

I think Teller qualifies as smart.


Because being smart doesn't make you immune from social biases and institutional incentives, which is the point of the OP.


Those who decide to stay in the race are not necessarily the smartest. Maybe they are the most focused, more blind to external impulses and "distractions". Plus, being in the field, they are more likely to be affected by these biases.

It would be unwise for any field, even that of brilliant physicists, to ignore external opinions and inputs.


Sounds good, as long as the public isn't expected to cough up the funds for their research. Otherwise, don't blame the public for wondering if the LHC was worth the cost.


"Their research" is the one that keeps pushing technology forwards, the technology that you are using in this moment


What technology came from the LHC? Or string theory for that matter?


https://kt.cern/technologies

https://kt.cern/cern-technologies-society

If you want to include past contributions, that WWW thing is kind of neat...


Except it had nothing to do with physics and was not a research project, just yet another hypertext system.


Tim Berners-Lee developed HTTP (and the underlying idea of "hypertext" made of interlinked documents) to support scientific collaboration at CERN.

I'm not aware of any technological advances produced by the measurements at the LHC, but it has been running for barely 10 years. On the other hand, building the thing probably required significant innovations in magnets and sensors.


We also need to suspect that no state will spend immense amounts of cash to build an expensive machine to chase supposed particles unless it can also develop and test militarily useful technologies, such as high-vacuum technologies. CERN's website explicitly states that they are not about military technology, but per a recent post on HN we know that CERN officials can lie when it suits them. I'm not saying it's good or bad that they do military research, if they do (after all, the internet came out of DARPA); I'm just saying that the particle search may be a cover and not their main purpose.


Here's a list of CERN's member states, observers and non-members with cooperation agreements:

https://home.cern/about/who-we-are/our-governance/member-sta...

It's basically every developed country on the planet, including the US, Russia and China. And if you had ever visited CERN, you would know that anyone can go pretty much anywhere; the strongest deterrent you are likely to encounter is a sign warning about possible radiation exposure.

It's hard to imagine a worse place to try doing military research.


> It's hard to imagine a worse place to try doing military research.

You are right of course. I don't mean overt military research. They can develop all kinds of high-vacuum and laser technologies to search for elusive particles; then it's trivial to turn that research into laser guns. But this is a guess. To me, you need to suspend disbelief to believe that a government, any government, will spend money to add a few more particles to the Standard Model unless there is something in it for itself. This is only a guess. I might be wrong.


It's a combination of two things:

1) Training facility for new engineers and scientists, most of whom will eventually leave academia for jobs (it is hoped) in the tax-paying sector.

2) Boondoggle to economically support contractors (because they hire large numbers of voters and/or fill an important function but suffer from uneven demand).



