And now everyone: Noooo, not again.
(Explanation: it's well-known that the Standard Model can't be completely correct but again and again physicists fail to find an experiment contradicting its predictions, see https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo... for example)
I forgot the numbers, but they have several layers of filtering between the sensors and long-term storage. First there are FPGA-based real-time filters, right next to the sensors, which throw away most of the data as "noise." Then there are local CPUs which throw away most of the remaining data, again classified as "noise" or uninteresting. Finally, what remains (30,000 TB/year) is stored long-term to be analyzed later by physicists.
All levels of filtering and analysis, from the FPGA to the physicist's algorithms, make use of the Standard Model itself and the rest of known physics to figure out what's "interesting" and what's "noise."
One big problem is thus: how can we find new things if we are only looking for what we already know? Hence the need for machine learning and automatic pattern discovery.
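The staged triggering described above can be sketched in miniature. The thresholds and event shapes here are entirely made up for illustration — they are not actual LHC trigger criteria — but they show how each stage encodes an assumption about what "interesting" looks like:

```python
import random

random.seed(0)

def fpga_trigger(event):
    # Stage 1: crude, fast hardware cut -- keep only events with enough energy.
    return event["energy"] > 50.0  # hypothetical threshold

def cpu_filter(event):
    # Stage 2: slower software reconstruction -- keep "interesting" signatures.
    return event["n_tracks"] >= 2 and event["energy"] > 100.0

# Synthetic "events": energy roughly exponentially distributed, a few tracks each.
events = [{"energy": random.expovariate(1 / 30.0),
           "n_tracks": random.randint(0, 5)} for _ in range(100_000)]

after_fpga = [e for e in events if fpga_trigger(e)]
stored = [e for e in after_fpga if cpu_filter(e)]
print(f"raw: {len(events)}, after FPGA: {len(after_fpga)}, stored: {len(stored)}")
```

Note that anything failing either cut is gone forever — which is exactly why the choice of cuts bakes current theory into the surviving data.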
It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.
> The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code
You have to run `git format-patch` and email the results to Him.
edit: thanks for the explanation
For any experts reading this: what are plausible-looking extensions of the standard model which could realistically be tested (confirmed or ruled out) by current-generation accelerators (in particular the LHC) and would lead to interesting extensions of the standard model?
As far as I am aware, the LHC has, for example, found no sign of many variants of supersymmetry, which was a plausible candidate for this in the past:
Until recently they were mostly looking for specific deviations predicted by extensions. They only recently announced that they're now going for a broader search.
They'll write down a model that describes it, which is all theorists' job to begin with.
We can add an extra parameter to general relativity (known as the "cosmological constant") to describe "dark energy" observations via a kind of 'anti gravity'; although we still don't fully understand what that means, or whether it's a correct description. It's also unclear whether this would have anything to do with quantum theories (like the standard model).
General relativity can explain "dark matter" observations by assuming there is more mass/matter than we can see (i.e. it's electrically neutral and doesn't interact with light). Since the standard model tries to describe all of the fundamental constituents of matter, and forces including electromagnetism (light), having nothing to say about such a seemingly large amount of stuff is a rather large discrepancy in the standard model.
AFAIK the standard model also says that neutrinos have zero mass; yet we've observed them oscillating ("neutrino oscillation", where each flavour of neutrino can turn into the others). A particle that changes over time requires some amount of time in which to change. Particles with zero mass always travel at the speed of light (like photons, and hypothetical gravitons) and hence experience no passage of time (this is a consequence of special relativity). So particles with zero mass can't oscillate, so neutrinos can't have zero mass. I don't think we've measured their mass very accurately yet; we know it's very small, but it cannot be zero.
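The time-dilation argument can be made numerically concrete. In units where c = 1, a particle's proper time over a fixed lab-frame interval is the lab time divided by the Lorentz factor, and it vanishes as the speed approaches c:

```python
import math

C = 1.0         # work in units where the speed of light is 1
LAB_TIME = 1.0  # one unit of time in the lab frame

for v in [0.9, 0.99, 0.9999, 0.999999]:
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    proper_time = LAB_TIME / gamma  # time elapsed on the particle's own clock
    print(f"v = {v}: proper time = {proper_time:.6f}")
# As v -> c, proper time -> 0: a massless particle (moving at exactly c)
# experiences no time at all, so it cannot oscillate or decay.
```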
OOPS: I see a sibling comment also covers this. Oh well, I'll keep this one up since it's slightly more eye-catching.
If we have a conversation and you reveal that my assumptions are wrong, I can go back, rethink everything, and come out with a stronger world view.
Basically, this experiment didn't tell scientists anything they didn't already know; nor did it prove them wrong, which might have led to newer, more interesting models that reveal more about the universe.
As the cosmologist Sean Carroll said in his podcast, particle physicists haven't really been surprised by an observation since the 1970s. Presumably many were hoping for a surprise.
The error bars are ±20%, so there's quite a lot of wiggle room for new physics.
Context: at 10^16 TeV you start probing the Planck energy scale, where gravity becomes strong enough to influence particle interactions. In nature, these energies are reached only inside black holes and just after the Big Bang. The standard model does not describe gravity, so it has no predictive power at this scale, which is why the above phenomena are a complete mystery to us.
The standard model gets away with this at LHC energy scales because gravity is so weak there it can literally be neglected in the calculations.
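As a sanity check on the 10^16 TeV figure, the Planck energy E_P = sqrt(ħc⁵/G) can be computed directly from the fundamental constants:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
EV = 1.602176634e-19    # joules per electronvolt

E_planck_J = math.sqrt(HBAR * C**5 / G)
E_planck_TeV = E_planck_J / EV / 1e12
print(f"Planck energy ~ {E_planck_TeV:.3g} TeV")  # ~1.22e16 TeV
```

For comparison, the LHC collides protons at 1.3e-2 TeV per proton pair at design energy — about eighteen orders of magnitude short.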
It's of course possible we'd see some new physics before then, but it's not guaranteed; building a collider 10 times more powerful would be a bit of a stab in the dark.
With the improved understanding of the Higgs that we have now, an e+/e- machine can be custom-built to study it in great detail. The Higgs is the most recently discovered fundamental particle, and the first/only known fundamental scalar. We should expect that it has more to tell us than simply its own existence.
Even when the LHC was being planned, it was conceived as the discovery machine, with precision experiments to follow. In my opinion, only when those measurements are complete should we consider a huge leap toward higher energy, unless new accelerator technology emerges.
A question to physicists: is there (strong) evidence that other scalars might exist (if yes: at which energy scale?). If not: are there attempts at theories that predict why there is only one scalar?
At some level, it is something of a surprise to have found that there is just one particle responsible for the Higgs mechanism, and it is indeed spin-0, as Nature hasn't given us one of those before. Prior to the discovery of the Higgs, perhaps the majority of physicists were betting on something Higgs-like, but not quite the pure vanilla Standard Model Higgs.
I would not bet any money on it, but if one were looking for another scalar, one might find it in whatever mechanism underlies dark energy. Many theoretical models for dark energy use scalars, again, because they are easier...
Improved colliders, and improved experiments generally, tend to increase the number of ways in which we can search for a departure from the plan.
What we may need, at least as much as new colliders, is the right insight to open a new way forward. The standard model is large and complicated -- there aren't that many people who really understand its complexity enough to find something analogous to Gell-Mann's eightfold way.
Are proposals estimating new efforts at 100s of km? 1000s?
Rather than trying to rule out a hypothesis completely, it's easier to place limits outside of which a hypothesis is essentially useless. IIUC, if the LHC hadn't spotted the Higgs mechanism below 14 TeV then it wouldn't work as an explanation of other particles' mass. At that point there might have still been a Higgs mechanism at higher energies, but we wouldn't really care either way, since the whole point of coming up with the Higgs mechanism was to explain the mass of other particles: if it can't do that, it becomes inconsequential. That's why the LHC was so important for the Higgs: we would either find it, and hence have a better explanation of particles' mass; or we would know that particle mass doesn't follow the Higgs mechanism.
At the moment we have a bunch of hypotheses which predict effects that larger colliders might see, but I don't think we have (feasible) limits which let us discard these hypotheses. Hence new colliders don't have a goal to aim at: it's just a case of going as big as possible to have the highest chance of seeing new effects. Yet we might see nothing, and that wouldn't actually tell us much, since the predictions could just be tweaked again.
Edit: Nope I'm wrong.
Yeah, that was the splice consolidation.
> but the luminosity was increased in 2016 and 2017 as well.
From what I know, that's through conditioning, repairs and improved techniques (BCMS, ATS, anti-levelling). The increase from 2015 to 2016 was because they didn't finish the intensity ramp in 2015 as the scrubbing campaign lasted so long. 2016 had the TIDVG dump problem as well as outgassing from the injection kickers. 2017 was marred by 16L2.
If a model of the universe was completely correct, wouldn't it have equivalent complexity to the universe and hence not be a model?
This is false, both due to the nature of mathematical modelling in physics and due to the ad-hoc nature of the term "complexity" when comparing the universe to the science we use to understand (or, perhaps, only model) it.
To elaborate on the first point: no measurement can be precise enough for there to be no uncertainty on it. It is therefore impossible to confirm that a model is completely correct, although it can (obviously) agree with experiment to within current experimental uncertainties -- as the standard model does now, so far.
For example, we had Newtonian Mechanics and that looked good for a couple of centuries, but then it turned out that it was too simple and we needed to add more laws.
We would have to settle for merely being able to explain every single phenomenon in the observable universe at every scale.
Then see a recent essay at Quanta Magazine explaining that physics has long looked to some largely aesthetic concerns, especially about symmetry, to pick and choose among alternative theoretical explanations -- again, my rough summary from my rough memory.
Lost in Math: How Beauty Leads Physics Astray
and is an excerpt from her book of the same name.
I was led to that excerpt by the page
"The End of Theoretical Physics As We Know It",
August 27, 2018.
It's not currently (maybe ever?) possible to know when you're 'done' with an experimental science. With Newtonian mechanics, we didn't know we needed something better until we did.
So when you go from particle physics to chemistry, new complexities emerge that can't be explained in the realm of particle physics alone (iirc).
sadly certain "sciences" still cling to the idea that they can simply aggregate the results from multiple of their "particles" and get a solution for larger systems.
Correct me if I'm misunderstanding, but I think you're confusing general theory vs practical applicable models here maybe? Yes, fundamental principles of interaction can combine at scale to create new large scale effects, but that doesn't change the fact that they came out of fundamental principles nor that they can't be "explained" via those principles. There is no magic that pops into existence up the chain. The asymmetry of water molecules and the way their electron clouds distribute create all sorts of fascinating effects in bulk water, but they're still directly coming out of physics of course.
The issue in practice however is that the level of computing necessary to accurately model reality at scale from fundamentals matches or exceeds actually doing it in reality, and for us rapidly becomes absolutely, utterly infeasible for anything but the simplest systems. "New emergent complexities" absolutely "can be explained" from a correct lower level set of principles, but that doesn't mean we can actually crunch the math at any scale we want. So we need higher level bulk models too, at many levels all the way up, which are good enough to be effective approximations to a given level of accuracy in practical computing time. The low level fundamentals often at bulk average out due to random variance in sufficient quantity and are irrelevant to whatever we care about, so there is no need to do it that expensively (even if we could). But that isn't the same thing as the fundamentals being wrong somehow or not being at the root of everything above.
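A toy illustration of that infeasibility: for a quantum system the state-space dimension doubles with every added two-level particle, so merely storing one exact state outruns any conceivable memory almost immediately:

```python
# Dimension of the state space of n interacting two-level systems (qubits/spins):
# it doubles with each added particle, so memory for one state vector explodes.
BYTES_PER_AMPLITUDE = 16  # one complex double-precision number

for n in [10, 50, 300]:
    dim = 2 ** n
    mem = dim * BYTES_PER_AMPLITUDE
    print(f"n = {n}: dimension = 2^{n}, state vector ~ {mem:.3e} bytes")
# Around n = 300, the dimension already exceeds the number of atoms
# in the observable universe (~10^80) -- long before "bulk" scales.
```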
But why would a non-reducible universe be so predictable?
Most descriptions of particle physics that I have encountered begin right away with an enumeration of the different types of particles, and the statement that some of them are composed of various combinations of quarks. But they don't include (at least not without investing some hours of my time) any indication of how these things are observed, what set of data this model fits, what the nature (if any) of a quark is independent of the hadron in which it is a constituent, what laws governing quarks cause these particles to arise, etc.
I don't feel I'm learning much of anything by just memorizing the names of all the members of the particle zoo. But it seems I must spend some hours doing this before I can gain any understanding of what particle physics means, or how particle physics is done?
> any indication of how these things are observed
In the last 50 years or so, the bulk of the evidence has come from particle accelerators, but there have been meaningful results from other experiments as well. Sean Carroll organizes current experiments into two broad categories: intensity experiments, like the LHC, which attempt to reach energy concentrations we haven't probed before, and sensitivity experiments, which observe natural but rarely produced or rarely interacting particles, like neutrino detectors.
> what set of data this model fits
All the data. The standard model is the best model we've found to explain all experiments observed in the history of physics.
> what is the nature(if any) of a quark independent of the hadron in which it is a constituent
Quarks and gluons are bound together inside hadrons (such as protons and neutrons) by the strong force. This force is, as its name indicates, very strong, and unlike the other forces it does not weaken as bound quarks are pulled apart. The way it works out, if you try to pull two bound quarks apart, the energy you add is sufficient to create new quarks. So lone quarks never appear: they're always bound into composites of two or three, and if you try to pull them apart, you just end up making a second composite when they separate.
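A toy model of that confinement behaviour: treat the strong force between two quarks as a "string" of roughly constant tension, so the stored energy grows linearly with separation until it pays for a new quark pair. The numbers here are only order-of-magnitude illustrations, not real QCD:

```python
# Toy confinement: energy in the "string" between two quarks grows as E(r) = k * r.
STRING_TENSION = 1.0   # GeV per femtometre (roughly the right order of magnitude)
PAIR_THRESHOLD = 0.7   # hypothetical energy (GeV) needed to create a new quark pair

r = 0.0
step = 0.05  # separation increment, in fm
while STRING_TENSION * r < PAIR_THRESHOLD:
    r += step
print(f"string 'snaps' into a new quark pair at ~{r:.2f} fm")
# Pulling harder never frees a lone quark -- it just makes more hadrons.
```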
> But it seems I must spend some hours doing this before I can gain any understanding of what particle physics means, or how particle physics is done?
The blunt truth is that fully understanding the standard model requires a lot of non-trivial mathematics. I can't work with the math myself, but I've read through enough textbooks that I've got some intuition for the big picture now. This isn't a topic where you can swoop in, spend 15 minutes, and suddenly understand it all. It's not going to take just some hours; it'll take much, much more time than that.
Some topics cannot be simplified into a tidy summary that can be skimmed in a couple hours.
> All the data. The standard model is the best model we've found to explain all experiments observed in the history of physics.
This isn't the full story. The Standard Model does not explain neutrino mass (which we know exists from neutrino oscillations) or dark matter & dark energy. These are very big open questions!
Edit: apparently an older term for "superconductors"
For antigravity just look up M. Tajmar's work on the Lense-Thirring effect: https://patents.google.com/patent/WO2007082324A1/en?inventor...
To understand gravity you don't need to build super-expensive devices to find the particle interpretation of this wave-like force. The Higgs makes no sense outside the standard model. Studying the wave interpretation as an attracting force is easier and also easier to reconcile with general relativity.
Explaining a wide-reaching attractive force as a particle really makes no sense at all (outside QM), as Heisenberg also complained.
Not as originally formulated (because back then neutrinos were thought to be massless), but it's straightforward to extend that original version to include neutrino masses by giving leptons their own equivalent of the quark sector's CKM matrix, the PMNS matrix. In modern parlance, "Standard Model" means this updated version.
Granted, if you do just that, you make neutrinos Dirac spinors (just like the quarks) and there is no obvious reason why they should be so much lighter than the other fermions. Giving them a Majorana mass induced by a much heavier (GUT-scale) equivalent of the Standard Model Higgs would provide a natural explanation for that mass hierarchy through the seesaw mechanism, and that would be true Beyond the Standard Model physics.
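The seesaw arithmetic is simple enough to show directly: with a Dirac mass near the electroweak scale and a Majorana mass near a GUT scale, the light neutrino mass m_ν ≈ m_D²/M_R comes out tiny, in the right ballpark for oscillation data (the scales used here are illustrative):

```python
# Seesaw mechanism, order-of-magnitude sketch (illustrative scales):
m_dirac = 100e9    # Dirac mass ~ electroweak scale, in eV (100 GeV)
m_majorana = 1e24  # heavy Majorana scale ~ 10^15 GeV, in eV

m_light = m_dirac**2 / m_majorana
print(f"light neutrino mass ~ {m_light:.2g} eV")  # ~0.01 eV
```

The heavier the Majorana partner, the lighter the observed neutrino — hence "seesaw".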
(maybe with some Wikipedia lookups along the way) by one (arguably the greatest) of its masterminds.
Not something you can read, but that's what you're looking for.
The truth is there whether attempts are made to prove it or not. Also proving/disproving does not alter the fact. If you prove it to be true, great, but it was true already before. Now you just have the fact formalized on paper, so to say.
Now, couldn't one just make an educated assumption that some particle-physics problem has already been proven, and then see what new things are made possible with this assumption?
And then try some low hanging fruit enabled by this "virtually proven" assumption. Any success would indirectly prove the assumption, too, or at least give strong evidence in favor. Also, it might make interesting discoveries happen faster.
A general example... process A and process B can both have electrons in their final states (along with other objects...). ML is used to separate A and B based on the kinematic properties of the electrons (in combination with those other objects). Also, further upstream, ML was used just to know that we had an electron to begin with!
A lot of BDTs are used, with deep learning under very active investigation. For example, when looking for the Higgs decaying to two bottom quarks, a slew of ML algorithms are used to identify so-called "b-jets" (jets identified as originating from a b-quark). In ATLAS we have low-level taggers using deep neural networks (using Keras) in combination with higher-level taggers using BDTs. Another example is the recent ttH observation, where XGBoost was used.
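The kind of BDT-based separation described above can be sketched with a bare-bones AdaBoost over decision stumps on a single made-up "kinematic" variable (a stand-in for electron pT; real analyses use many variables, full BDT libraries, and properly simulated events):

```python
import math
import random

random.seed(0)
# Toy stand-in for one kinematic variable (think electron pT in GeV):
# "signal" (process A) electrons tend to be harder than "background" (process B).
signal = [random.gauss(60, 15) for _ in range(2000)]
background = [random.gauss(40, 15) for _ in range(2000)]
X = signal + background
y = [+1] * len(signal) + [-1] * len(background)

# Minimal AdaBoost over decision stumps ("is pT above this cut?") --
# a bare-bones stand-in for the boosted decision trees used in real analyses.
weights = [1.0 / len(X)] * len(X)
stumps = []  # (threshold, vote weight) pairs
for _ in range(20):
    # choose the cut with the smallest weighted misclassification error
    err, cut = min(
        (sum(w for x, t, w in zip(X, y, weights) if (1 if x > c else -1) != t), c)
        for c in range(20, 80, 2)
    )
    err = min(max(err, 1e-10), 1 - 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)
    stumps.append((cut, alpha))
    # boost the weight of misclassified events, then renormalise
    weights = [w * math.exp(-alpha * t * (1 if x > cut else -1))
               for x, t, w in zip(X, y, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]

def classify(x):
    score = sum(alpha * (1 if x > cut else -1) for cut, alpha in stumps)
    return 1 if score > 0 else -1

accuracy = sum(classify(x) == t for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

With heavily overlapping distributions like these, no classifier can do much better than the overlap allows; the gain from boosting in real analyses comes from combining many variables, not from a single cut.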