Given that there's "not much to do", it's weird that there isn't a 1% group somewhere out there who thinks this is worthwhile.
Context = I studied particle/astro physics 15 years ago and then "dodged the bullet", as the OP nicely puts it. I felt the way we were taught modern physics was quite poor/handwavy, especially once we crossed into QFT territory, and it led to a lot of misunderstanding/confusion, which I still see in comment threads today, including between academics (!). Also, when I speak to mathematicians about this, they deeply disapprove of the way physics is taught/run today in this respect, and can routinely point to misunderstandings/confusion that hinder progress.
A good example (but probably too mathy) is the work of Tamas Matolcsi: `Spacetime without reference frames` and `Ordinary thermodynamics`.
Modern QFT is like being given a laptop with an Excel spreadsheet that does some clever things, without having any idea what a processor is, what memory is, how a hard drive works, or why the case is that funny shape.
The kinder way to put this is to say there's a lot of educated guessing going on. But the fundamental problem - reconciling GR and QFT - can't be solved without a completely new mental model. And academic research isn't funded in ways that reward the generation of creative new models.
It's become a bit of a cargo-culty pursuit. You're rewarded if you know the words to the songs that everyone else is singing, but if you try to invent a new genre you'll probably be told it's career suicide and Don't Go There.
The entire argument against string theory is, in many ways, founded on the fact that, as a group of hypotheses, none of them predicts anything that would rule them in or out within a practically reachable detection limit.
>no one really knows what they're doing. Even Feynman said that no one understands QM
Quantum field theory makes possible the single most accurate predictions in all of science. Time and time again people whip out the Feynman (mis)quote to argue "oh, we don't really know what's happening", and I just cannot see why they think that way. Science is about observing reality and formulating descriptions with predictive power, and QFT is objectively the best description anyone has figured out so far.
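To put a rough number on that precision claim (the values below are approximate, quoted from memory, so treat this as an illustration rather than a reference): even Schwinger's original one-loop QED correction to the electron's magnetic moment, alpha/2pi, lands within about 0.15% of the measured value, and the full multi-loop calculation agrees with experiment to roughly a part in a billion.

```python
import math

# Fine-structure constant (approximate CODATA value)
alpha = 1 / 137.035999

# Schwinger's 1948 one-loop QED correction to the electron's
# anomalous magnetic moment a_e = (g - 2) / 2
a_e_one_loop = alpha / (2 * math.pi)

# Measured electron anomalous magnetic moment (approximate)
a_e_measured = 0.00115965218

# Even the first quantum correction is already within ~0.15%
rel_err = abs(a_e_one_loop - a_e_measured) / a_e_measured
print(f"one-loop: {a_e_one_loop:.8f}  measured: {a_e_measured:.8f}  rel. error: {rel_err:.2%}")
```

The remaining ~0.15% gap is closed by the higher-loop terms of the perturbative expansion, which is what makes the full prediction so spectacularly accurate.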
>Modern QFT is like being given a laptop with an Excel spreadsheet that does some clever things, without having any idea what a processor is, what memory is, how a hard drive works, or why the case is that funny shape.
Again, preposterous. We know how QFT works (we came up with it, after all), and we know how to use it to make predictions and confirm them experimentally. If you mean "know" in some deeper, fuzzier sense, then maybe, maybe not.
>the fundamental problem - reconciling GR and QFT - can't be solved without a completely new mental model
>It's become a bit of a cargo-culty pursuit.
If you come up with a better idea than the current understanding, publish your findings and you will instantly become the most respected, popular, and influential physicist alive.
I will concede though that sadly funding is allocated more to "safer" avenues of research than novel ones. That's certainly a problem.
Even without gravity, modern particle physics theory is a big mess imo.
Does it matter whether mathematicians think it is handwavy? If any two trained practitioners can get to the same result of a calculation and it matches experiment to appropriate approximations that seems good enough to me.
I think that's already true today. The problem is, when theorists are out manufacturing new theories, I think it gets very confusing/handwavy (because everybody is brought up on handwavy fundamentals).
Disclaimer: I'm not a practicing physicist.
Whereas I'm currently attending an introductory philosophy lecture and, while physics informs my view in large part, it's mostly about knocking down misconceptions.
The basic fundamentals, the first principles from which conclusions are derived, should be basic common knowledge. I don't actually need a mysterious slit experiment to tell that my life, so to speak, is unpredictable, yet very regular in the bigger picture. One common problem is language, and there's almost no focus on that in physics education--if I may leave at least one lament. And since I can't say that the assembly of quantum states that is me is very likely to diverge, disentangle and grow decoherent ... philosophy still puts up with concepts like the soul; or free will, which is in the end not about inherent physical properties but just about regret and commitment.
Not quite sure what you mean by this. This is (part of) the standard curriculum for a physics degree. Of course physics is a vastly bigger field than quantum mechanics, so it takes 4 years to do this journey.
Good examples as cautionary tales, or worth a look as how it should be done?
Look at the spacetime one and read the Preface, it gives a good overview of the "program". Read the first chapter and see what you think.
Disclaimer: I took these courses 10+ years ago from Matolcsi.
I don't know what the author is proposing. We don't have enough storage to persist all collision events; that would require zettabytes of disk space. The detectors are bottlenecked to storing a few hundred events a second. Therefore they need to filter out the vast majority of the 40 million collision events per second that occur in the LHC.
Even then, I recall around 1% or so of the stored events were saved via a 'minimum bias' trigger, one that doesn't apply any filter criteria. This was mainly for calibration purposes and cross-checking simulation data. So we still have petabytes of collision events that didn't have any selection criteria applied.
For run 2 of LHC they used 50000 CPU cores for their software triggers, after the hardware trigger has reduced the 40 MHz input rate down to 1 MHz. The final output of the software triggers is 12.5 kHz, which is persisted to disk. Keep in mind this is just for the LHCb detector.
For run 3, they're planning to remove the hardware trigger entirely, running the software triggers directly on the 40 MHz signal. This would allow them to reprogram the triggers during the run, in case some new interesting theory comes along whose signal their current triggers wouldn't identify.
The computation side of the LHC is really impressive. With a full software trigger, a new collision arrives every 25 nanoseconds, and for each event you have to load all the raw collision data, reconstruct hundreds of particle tracks, calculate their momenta, join them up to figure out their decay vertices, etc. etc., and then decide whether to store the event.
I recall LHCb could afford higher trigger rates than CMS/ATLAS: an LHCb event is smaller (~100 kB vs ~1 MB for CMS) because the detector only covers 300 milliradians from the collision axis, in one direction, whereas CMS/ATLAS have full coverage.
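As a back-of-the-envelope sanity check on the numbers quoted above (40 MHz input, 12.5 kHz output, ~100 kB per LHCb event, ~50,000 cores; all figures approximate and taken from the comment, not from any official source):

```python
# Rough figures from the comment above (approximate)
input_rate_hz  = 40e6     # LHC bunch-crossing rate
output_rate_hz = 12.5e3   # LHCb software-trigger output rate (run 2)
event_size_b   = 100e3    # ~100 kB per LHCb event
cpu_cores      = 50_000   # size of the software-trigger farm

# Fraction of events that survive the full trigger chain
kept_fraction = output_rate_hz / input_rate_hz          # ~0.03%

# Sustained rate written to disk after triggering
disk_rate_gb_s = output_rate_gb = output_rate_hz * event_size_b / 1e9   # ~1.25 GB/s

# Storing *every* event instead, per year of continuous running:
seconds_per_year = 3.15e7
all_events_pb_year = input_rate_hz * event_size_b * seconds_per_year / 1e15
# ~1e5 PB, i.e. on the order of a hundred exabytes -- for one detector

# Average wall-clock budget per event, spread across the farm:
# collisions arrive every 25 ns, but with 50,000 cores each event
# actually gets ~1.25 ms of CPU time for reconstruction
budget_per_event_ms = cpu_cores / input_rate_hz * 1e3

print(kept_fraction, disk_rate_gb_s, all_events_pb_year, budget_per_event_ms)
```

So "25 nanoseconds per event" is really a throughput constraint: each individual event gets about a millisecond of reconstruction time on one core, but a new one shows up every 25 ns.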
> From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.
These words show that the opinion comes from someone who does not understand the technology behind these systems, or the computing power and capabilities at CERN. Of course they have limitations, but they are the kind of limitations that keep pushing technology forward. Putting "trigger mechanism" in scare quotes shows how strange those concepts are to the author.
The trigger system is a finely tuned filter that allows the electronics to work at all; without it they would be overloaded and crash, receiving a rate of data that simply cannot be handled. How the trigger is set depends on the specific physical process being studied and is supported by theory and simulation. Since the scope that can be tested is limited, the selection is highly scrutinized and reviewed.
Their point is exactly that the trigger systems are controlled by algorithms based on current theories, which so far have shown nothing for all their efforts.
I haven't been in the field since the LHC started operating but from memory it was exactly billions of events per second and storage could only keep up with hundreds.
No one is setting these things up to go "hmm a particle on a totally unusual trajectory - but not where the Higgs should be so junk it".
If by "so far have shown nothing for all their efforts" you mean that no new results were found, that isn't because the current theories are bad; in fact, it's because they are too good: they describe the results so well that there is no discrepancy between current theories and current results that would require a new theory.
(turned out they did, but nobody at the time knew to account for perturbations caused by the as-yet-undiscovered Neptune)
And depending on how far you want to take this, the neutrino probably would've been dismissed, too, before eventually being detected.
"Oh look, Uranus is doing those odd things again. Doesn't fit with Newtonian mechanics or my current pet theory, so better throw it out."
LIGO with gravitational-wave astronomy and the Event Horizon Telescope with very long baseline interferometry are opening up new ways to observe the universe.
Yes, the data is not as abundant as it was a few decades ago, but that's the nature of the game. Our current models work very well at describing accessible energies. So this is going to take longer and require more and more ingenuity. I don't think the problem here is lack of motivation for getting good answers -- on the contrary, anyone who can discover something major is going to have a lot of fame and credit come to them.
>But what if scientists could make larger gains by betting
>smartly than they could make by promoting their own
>research? “Who would bet against their career?” I asked
>Robin when we spoke last week.
>“You did,” he pointed out.
The biggest issue with using models as a tool in science is that they start to become unfalsifiable. To avoid the hornet's nest of modern models, consider geocentrism - the belief that Earth was uniquely at the center of the solar system, the universe, and everything. In times before telescopes this belief was justified by models. If you assume it is true, then you get some really bizarre behavior from the planets that now orbit the Earth. In particular, some planets will suddenly stop and start moving the other way, most planets will travel in 'swirly' patterns, and so on. But when you have a model, none of this matters. Planets need to go backwards? Sure, why not. They travel in swirlies? Sure, why not.
So you get these increasingly convoluted and complex theories, and in spite of how irrational they seem, they are supported by what we see. But at some point you reach a dead end, when the model becomes so intractable that it's impossible to jury-rig yet another observation into it. And it's only at that point that we start to scratch our heads and wonder what's going on. Finding the problem there can be inconceivably difficult, because it can be something far more fundamental than you'd ever think to look for. For instance, in a geocentric universe you might search for why planets travel in swirlies. Yet you're at a much higher level than the actual problem - which is that they don't actually travel in swirlies. And in this toy example things are much better than they might be in our reality: there you're only a couple of 'fundamentals' away from the real problem. With our rapid pace of publication and 'stair-stepping', models advance and build upon themselves exponentially more rapidly.
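The "planets going backwards" illusion is easy to reproduce in a heliocentric toy model. A minimal sketch, assuming circular orbits and approximate orbital radii and periods for Earth and Mars: the line of sight from Earth to Mars really does sweep backwards for a while around opposition, with no swirlies required.

```python
import math

def pos(r, period, t):
    # Circular heliocentric orbit: radius r (AU), period (years), time t (years)
    a = 2 * math.pi * t / period
    return r * math.cos(a), r * math.sin(a)

def apparent_longitude(t):
    # Direction from Earth to Mars, projected onto the ecliptic plane
    ex, ey = pos(1.0, 1.0, t)        # Earth: ~1 AU, 1 year
    mx, my = pos(1.524, 1.881, t)    # Mars: ~1.524 AU, ~1.881 years
    return math.atan2(my - ey, mx - ex)

# Sample daily over ~2 years, starting at an opposition (both at angle 0)
ts = [day / 365.0 for day in range(2 * 365)]
lons = [apparent_longitude(t) for t in ts]

# Day-to-day change in apparent longitude, unwrapped across the +/- pi seam
deltas = []
for a, b in zip(lons, lons[1:]):
    d = b - a
    if d > math.pi:
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    deltas.append(d)

prograde = sum(1 for d in deltas if d > 0)
retrograde = sum(1 for d in deltas if d < 0)
print(f"prograde days: {prograde}, retrograde days: {retrograde}")
```

Both counts come out nonzero: Mars appears to reverse direction for a stretch of days around opposition, purely because the faster-moving Earth overtakes it on the inside track. The geocentric model had to bolt epicycles onto every planet to reproduce exactly this projection artifact.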
Like a single cog in a clock breaking, all it takes is a single falsehood be assumed as truth in a model to begin to undermine the entire phenomenally complex system.
>But I have on my blog discussed what I think should be done, eg here:
>Which is a project I have partly realized, see here
>And in case that isn't enough, I have a 15 page proposal here:
What's really hard is moving from these first principles to modelling real world phenomena; there are huge problems like protein folding, low temperature superconductors and even just predicting properties of new materials and chemicals.
It's still important for some people to work on fundamental physics, but I suspect there is a lot more opportunity in mere phenomenology.
The only reason we don't have quantum supremacy today is that quantum computers require atomic precision in their construction, something we currently lack the ability to do, but which nothing prevents us from doing in principle.
We will get there, eventually.
Anyway, I'm not saying it's impossible it'll work out that way, just that I think it's unlikely, and the less interesting possibility. I definitely think the attempts to make it work are worthwhile.
Why do you think Hossenfelder is a non-productive physicist?
> How about we do this: let the people who are obviously smartest make their own decision about what is most promising to work on
We've been doing that all along in physics, and it doesn't seem to be working out.
Some of the best physicists in the world right now are likely caught up in just providing for themselves. Why can't someone in Africa, or wherever, get a degree in physics from Harvard? Why do they need anyone's permission to have access to this? Why can't they at least have access to course material and testing, so they can openly compete? Who is afraid of the competition? There's no ethical justification for that. The world already spends trillions in public and private money on this. It's just one criticism, and I'm not alone in calling academia rotten; Feynman said the same. Few people have the bravery to stand up to an entire socioeconomic complex, even when it means people will die and projects will fail dramatically: see the NASA Challenger groupthink disaster (which Feynman also criticized).
I would put this somewhat differently: why should you need a degree in physics from Harvard to do physics? What value does that credential actually add?
Note, btw, that my own alma mater, MIT, has all of its course materials (lectures, problem sets, selected solutions) available online for free:
And OCW is a great project, but not all the course material is online. Also, telling an employer or researcher "I took some OCW courses" isn't a very good signal. A much better signal would be: "I passed such and such examinations with such and such scores." Open competition, I think, is important.
I think calling it "signaling" highlights an important (and troubling) point. Employers and researchers are trying to predict future performance; degrees are supposed to be a measure of one's potential for future performance, and the quality of the institution that granted the degree is supposed to factor into that measure. But over time, institutions have an incentive to reduce rigor and quality in order to cut costs, while still taking advantage of the full perceived value of the degrees they grant based on their past rigor and quality (for example, when charging tuition). I think the common tendency to regard degrees as a form of "signaling" is a tacit recognition that this goes on.
> A much better signal would be, "I passed such and such examinations with such and such scores."
I agree that this would be a much better predictor of potential for future performance, if the institutions grading the examinations and providing the scores were completely unconnected with the institutions that constructed the examinations. (And of course the examinations used for this would have to be different from the ones available over the Internet to everyone.)
I think Teller qualifies as smart.
It would be unwise for any field, even that of brilliant physicists, to ignore external opinions and inputs.
If you want to include past contributions, that WWW thing is kind of neat...
I'm not aware of any technological advances produced by the measurements at the LHC, but it has been running for barely 10 years. On the other hand, building the thing probably required significant innovations in magnets and sensors.
It's basically every developed country on the planet, including US, Russia and China. And if you had ever visited CERN, you would know that anyone can go pretty much anywhere; the strongest deterrent you are likely to encounter are signs warning about possible radiation exposure.
It's hard to imagine a worse place to try doing military research.
You are right, of course. I don't mean overt military research. They can develop all kinds of high-vacuum and laser technologies to search for elusive particles, and then it's trivial to turn that research into laser guns. But this is a guess. To me, you have to suspend disbelief to accept that a government, any government, will spend money to add a few more particles to the Standard Model unless there is something in it for itself. I might be wrong.
1) Training facility for new engineers and scientists, most of whom will eventually leave academia for jobs (it is hoped) in the tax-paying sector.
2) Boondoggle to economically support contractors (because they hire large numbers of voters and/or fill an important function but suffer from uneven demand).