I interned under the scientists who did the original Brookhaven Muon g-2 experiments. I learned a ton from them over several months, and it was fascinating to learn why this measurement is so important. One of the sad parts is that, a decade later, a lot of the original team has either retired or left federal labs, partly because the environment has become toxic with politics and partly for greener pastures beyond national labs and academia. It’s quite sad to see the US science community degrade over the past decade.
Particle physics is kind of drying up, though. The standard model works too well and the routes to exploring beyond it look narrow or expensive. The failure of supersymmetry didn't help.
The biggest issue was the allocation of the collider. Experiments are booked out for YEARS. If you want to run an experiment on it, there’s a non-trivial amount of politics involved in convincing others that your experiment is worth a time slot.
Article doesn't indicate how close this measurement is to any specific theoretical value, and, therefore, doesn't hint at whether this experiment is expected to actually challenge a standing Standard Model calculation.
I suspect that's because
> ... a new experimental measurement of the data that feeds into the prediction and a new calculation based on a different theoretical approach — lattice gauge theory — are in tension with the 2020 calculation
> Scientists of the Muon g-2 Theory Initiative aim to have a new, improved prediction available in the next couple of years that considers both theoretical approaches.
"The muon, like its lighter sibling the electron, acts like a tiny magnet. The parameter known as the "g factor" indicates how strong the magnet is and the rate of its gyration in an externally applied magnetic field. It is this rate of gyration that is indirectly measured in the Muon g − 2 experiment.
The value of g is slightly larger than 2, hence the name of the experiment. This difference from 2 (the "anomalous" part) is caused by higher-order contributions from quantum field theory. In measuring g − 2 with high precision and comparing its value to the theoretical prediction, physicists will discover whether the experiment agrees with theory. Any deviation would point to as yet undiscovered subatomic particles that exist in nature.[4] "
TLDR: The Standard Model says the measured ratio, g, should be about 2, i.e. g-2 > 0 would indicate contributions from quantum field theory, which are not yet well understood and could indicate there are more particles responsible for the excess.
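For concreteness, here's the standard textbook relation behind that "rate of gyration" (it isn't spelled out in the article): the storage-ring experiments track the difference between the muon's spin-precession frequency and its cyclotron frequency, which, ignoring electric-field and pitch corrections, is

    \omega_a = \omega_s - \omega_c = a_\mu \, \frac{eB}{m_\mu}, \qquad a_\mu = \frac{g-2}{2}

so the anomaly a_mu is measured directly, rather than as a tiny deviation sitting on top of the dominant "2" in g.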
The Standard Model *is* a quantum field theory. Making a "Standard Model vs. QFT" type of statement doesn't make sense.
The problem is the lack of an analytical solution and the different systematic errors that come from different numerical approaches and assumptions made.
According to the article, there are multiple standard model theory predictions using different approaches which are in tension.
I am not offering one versus the other. I am saying that g-2 > 0 contributions come from QFT, which is true, and that people inspect the g-2 quantity for evidence of beyond-Standard-Model particles.
Literally just rephrasing the wikipedia/phys org article, as a TLDR would claim to do.
It’s probably not a problem, probably… the small discrepancy between the theorists’ predictions is expected to narrow and should be back within acceptable bounds within the next couple of years.[0]
Can we foresee what the consequences would be if it doesn't stay within range?
Is it 'just' a matter of adding some sufficiently elusive heavy particle to make the numbers come out okay without really disturbing anything else, or is there any meaningful sense in which we'd get a fifth force like the BBC article suggests (maybe one that we can make nice predictions about)?
I imagine it will most likely be nothing, but curiosity about the counterfactual just can't be helped :)
In essence, you need to enumerate and sum up all possible ways everything can happen in a very small piece of the universe that corresponds to the "reaction center" of the experiment. All possible ways you can pop out particles from the vacuum that interact with your experimental particle and pop back into the vacuum, etc. It's a totally impossible calculation to do in full, so a large chunk of theoretical high-energy physics is about finding shortcuts to do it.
All these ways contribute to the reaction probabilities (which are what you actually get numbers for at the end of the day).
The muon g-2 experiment is so interesting because there is a vast number of pretty exotic particles and particle pathways that have to be taken into account, and if our understanding of any of those is wrong, or there are (even more interestingly) particles that can pop out and contribute that we don't even know about, it will show up in the muon g-2 number as compared to the theoretical calc.
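To give a rough feel for the bookkeeping, here's a crude back-of-the-envelope sketch in Python (not a real calculation: the only genuine input is the fine-structure constant, and the "each extra loop costs another factor of alpha/pi" rule is only an order-of-magnitude guide):

    import math

    # Leading QED term (the Schwinger correction): the single simplest "way" a
    # photon can pop out next to the muon and come back.
    alpha = 1 / 137.035999            # fine-structure constant (approximate)
    a_leading = alpha / (2 * math.pi)
    print(f"leading-order anomaly ~ {a_leading:.6e}")   # ~1.16e-3

    # Each extra loop (one more layer of stuff popping in and out) is suppressed
    # by very roughly another factor of alpha/pi, so estimate at which order the
    # terms drop below the ~1e-10 level the experiments probe.
    target = 1e-10
    order, size = 1, a_leading
    while size > target:
        order += 1
        size *= alpha / math.pi
    print(f"terms drop below {target:.0e} at roughly {order} loops (~{size:.1e})")

In practice the pure-QED part has been pushed out to five loops; the hard part is the hadronic contributions, which can't be expanded in a small number like this at all, and that's where the lattice and data-driven approaches (and their current tension) come in.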
> All possible ways you can pop out particles from the vacuum that interact with your experimental particle and pop back into the vacuum, etc.
This is a neat statement, but really those diagrams are notation for terms in a perturbation expansion. Nothing in the theory says that these particles actually pop out and interact with the incoming/outgoing momenta. In fact they travel off-shell, so we know they cannot be physical.
If you're referring to Feynman diagrams, sure, in the modern treatment they don't mean what people usually think they mean, but it's also not accurate to state that particles don't pop in/out. It's a simplification for sure, but you really do need to enumerate all the field configurations, and from a first-principles point of view there is really no difference between your "physical" particles and the "off-shell" excitations. You can argue about the reality of the contributing terms, but it sure looks like they're needed or you won't get the right results.
Not a physicist, but basically yes. Most quantum mechanics calculations involve the path integral[0], a sort of sum over all possible Feynman diagrams[1] that an interaction could involve, and the more of these you include, the more accurate you hope to be. Further, the field theory they're calculating within is generally an effective field theory[2], which is known to be an incomplete approximation of the system it's modelling but "close enough" for the energies involved. That approximation costs accuracy as you get closer to the energy cutoff you included, but you included it because you don't know what happens above it, so it's not like you can just not do that. Depending on how renormalization[2] is done in that effective theory, the constants involved can have their values affected.
In this particular case you're also looking "closer" at an interaction than you can model with particle intuition. For simple electron/photon scattering you can pretend that things are particles and use that intuition to guide you, but when you start looking really closely you can't do that anymore and have to work with fields and waves in a way that belies normal physical intuition. Part of the way that manifests is "virtual particles" popping in and out of existence (really field interactions that aren't particle-like but, with the virtual-particle trick, can be modelled almost as if they were). These interactions do affect the real world, and the way they affect it depends on the fields involved. If there's some unknown 4th generation of lepton, for instance, a virtual one of those could interact with the experiment, but since you didn't know about it you didn't include it in your calculation.
Also, depending on how you're calculating things, you aren't just saying "1+2=3", you're saying "1±theory error + 2±theory error = 3±theory error", so you end up with a range of calculations. That might manifest as "well, if there's a graviton then I need to calculate it this way, but if there isn't then I need to calculate it that way, and we don't have experimental evidence to guide me one way or the other".
Lastly, QFT is quantum mechanics + special relativity, and I think it goes without saying that both of these are detailed, difficult fields with difficult maths involved. So you're taking an already difficult field, and since you're trying to be accurate out to a lot of decimal places while interacting with messy real-world experiments, you're forgoing all of the nice physics-class "massless particle in a spherical box" approximations that normally let you skip over all of that trouble.
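To make the "1±theory error" point above concrete, here's a minimal sketch of combining independent pieces of a prediction, assuming the uncertainties are uncorrelated (the numbers are made up, not the real g-2 error budget):

    import math

    # (value, uncertainty) for each hypothetical piece of the prediction;
    # purely illustrative numbers.
    pieces = [
        (1.0, 0.03),   # e.g. one class of contributions
        (2.0, 0.04),   # e.g. another, with its own systematic error
    ]

    total = sum(v for v, _ in pieces)
    err = math.sqrt(sum(e ** 2 for _, e in pieces))   # uncorrelated: add in quadrature
    print(f"prediction = {total} +/- {err:.2f}")      # 3.0 +/- 0.05

If the pieces share assumptions, the quadrature sum isn't right, and different theory choices can also shift the central values themselves, which is the "range of calculations" problem.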
The quality of this "press release" is really impressive. It is very well written, with clear and easy-to-understand explanations: no overemphasis on any particular point, and a clean page and wording.
I enjoyed reading through it. This is something I miss more and more, with most press releases made to grab the 5-second attention of people scrolling through their feed in whatever media app they are using.
> Link to paper [0] - now that’s a concise author list, lol.
Compared to, e.g., a paper from the ATLAS experiment, that author list looks very concise; it even fits on a single page.
I have seen papers where the author list is longer than the paper itself...
"The muon, like its lighter sibling the electron, acts like a tiny magnet. The parameter known as the "g factor" indicates how strong the magnet is and the rate of its gyration in an externally applied magnetic field. It is this rate of gyration that is indirectly measured in the Muon g − 2 experiment.
The value of g is slightly larger than 2, hence the name of the experiment. This difference from 2 (the "anomalous" part) is caused by higher-order contributions from quantum field theory. In measuring g − 2 with high precision and comparing its value to the theoretical prediction, physicists will discover whether the experiment agrees with theory. Any deviation would point to as yet undiscovered subatomic particles that exist in nature.[4] "
TLDR: Standard model says the ratio measured should be about 2 for g, ie g-2 > 0 would indicate contributions from quantum field theory, which are yet to be well understood and could indicate there are more particles responsible for the excess.