> For instance, SU(5) groups quarks and antiquarks together with leptons and antileptons into “fiveplets,” which are like the indistinguishable sides of a regular pentagon.
The idea is to put the 5 particles in 5 places that are indistinguishable.
For that you need to use the vertices of a hyper-tetrahedron ( https://en.wikipedia.org/wiki/5-cell ). Don't get confused by the bad drawings: if you build one in 4 dimensions, you can put each point at the same distance from all the other points.
If you use a regular pentagon instead, then you must select an order for the particles/vertices, and once you do, some vertices are closer to each other than others.
(An alternative is to keep the pentagon but consider not only the rotations and flips, but also the operations that mix the vertices/particles in any order. Then, however, the nice identification with the symmetry of the geometric figure is gone. You could equally use a square with a central point.)
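If you want to convince yourself numerically, here is a throwaway check (my own sketch, not from the article): the five standard basis vectors of R^5 are the vertices of a 4-simplex (5-cell) and are mutually equidistant, while a regular pentagon's vertices come in two distinct distances.

    import itertools, math

    # Five mutually equidistant points: vertices of a 4-simplex (5-cell),
    # embedded as the standard basis vectors of R^5 (they span a 4-d hyperplane).
    simplex = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]

    # Five vertices of a regular pentagon on the unit circle.
    pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
                for k in range(5)]

    def distinct_distances(points):
        return sorted({round(math.dist(p, q), 6)
                       for p, q in itertools.combinations(points, 2)})

    print(distinct_distances(simplex))   # [1.414214]           -- one distance
    print(distinct_distances(pentagon))  # [1.175571, 1.902113] -- two distances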
In some sense, yes. I remember the class where my physics professor explained why SU(5) could be The Group of the universe and why it failed (IIRC the predictions were about 1% off from experiment; a good try, but enough for it to be discarded). You could feel how sad he was that SU(5) had to be discarded. :(
In some sense, no. The SU(5) group includes all the symmetries of the hyper-tetrahedron, where you can rotate it in 4-dimensional space to exchange one of the vertices/particles with another. But it also includes stranger things, like half-mixing two particles:
    a -> (a + b) / sqrt(2)
    b -> (a - b) / sqrt(2)
This is more difficult to explain. Technically, it's a rotation by 45 degrees in the x-y plane. You can also rotate by other angles; for example, in some particle decays the important rotation is about 13 degrees. And you can mix three or four or five particles. All of this is harder to imagine, but it's easy to write analytically.
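As a rough illustration of what "mixing by an angle" means (my own sketch; the 13-degree figure is the Cabibbo-like angle mentioned above):

    import math

    def mix(a, b, angle_deg):
        """Rotate the pair of amplitudes (a, b) by the given angle."""
        t = math.radians(angle_deg)
        return (math.cos(t) * a + math.sin(t) * b,
                -math.sin(t) * a + math.cos(t) * b)

    # 45 degrees reproduces a -> (a+b)/sqrt(2), b -> (a-b)/sqrt(2)
    # (up to the overall sign convention of the second component).
    print(mix(1.0, 0.0, 45))  # (0.707..., -0.707...)

    # A Cabibbo-like rotation of about 13 degrees, as in some particle decays.
    print(mix(1.0, 0.0, 13))  # (0.974..., -0.225...)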
The use of geometric shapes like the hyper-tetrahedron is more of a nice visualization technique. It's easier to explain than the details of the SU(5) group, and it provides a good intuition even after studying the theory in more detail. So I prefer to ignore this technical detail, especially for a popular science article.
Modelling is an interesting thing. Our math is such an elegant language that it can describe amazingly rich abstractions. On one hand, we have category theorists and computer scientists working on the more familiar modelling of computational patterns. On the other, we have physicists coming up with all kinds of equally curious patterns in particle fields. It seems many of the patterns are just abstract nonsense, but there are too many coincidences to ignore. To me this hints at some underlying structure that we are still completely blind to. Oh, how I wish to study math from the future.
Seriously. Real increases in lifespan (I'm thinking of Louis Gridley Wu celebrating his 200th birthday) would have to involve (at least occasionally) revitalizing stem cells to renew the body at rates that would seem normal in a very young person. This would have to include nerve cells. Presumably, this would lead to a reversal of specialization. That implies memory loss ("memory" in all neurological senses: how to write software, but also how to speak English, how to control defecation, how to control urination). Really renewing one's central nervous system would rewind a person back to the unspecialized nervous system they were born with.
Why would you think that? Information is stored in the configuration of neurons, not in the neurons themselves. Neurons are constantly being replaced, and you don't wake up one morning not knowing how to bike because your "bicycle neuron" died.
Are they? As far as I know, neurons last more or less from before birth to death. There is neurogenesis in the hippocampus, but it doesn't replace existing neurons, and the new cells themselves last until death.
I sometimes wonder if there is a way to automate the generation of mathematical abstractions and reporting the ones that have physical relevance. I guess one of the big problems then (besides tractability) is how to feed it with experimental data. Even if it is not feasible, then it would still be interesting to read how mathematical physicists would theoretically approach this problem (and even how they would define it).
I would like to take a moment to praise Quanta Magazine. They really are the one magazine that got me reading all forms of long-form articles about science that, if written plainly, would go far over my head and/or seem boring.
Quanta Magazine also has a good reputation among many theoretical physicists, including myself. They do of course suffer from the occasional misconception, but as a whole the accuracy of their reporting is leaps and bounds above many other popular science sites or blogs.
Lifehacker introduced me to them and I've been a happy reader since then.
The thing I like the most is that they don't play the "analogy game" too much and instead generally teach a small concept and then build on it.
Other popular media outlets like Wired or The Verge simply dumb things down too much or are factually incorrect.
I do like WIRED's Science Blogs, though, even though they are a little bit inactive.
"Japan is considering building a $1 billion detector called Hyper-Kamiokande, which would be between eight and 17 times bigger than Super-K and would be sensitive to proton lifetimes of 10^35 years after two decades."
All this to possibly detect a single proton decay in 20 years. Now that's some serious commitment!
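Some back-of-the-envelope arithmetic on what that commitment buys (my own numbers, using assumed round fiducial masses, and ignoring detection efficiency and backgrounds):

    AVOGADRO = 6.022e23

    def protons_in_water(kilotons):
        # 1 kiloton = 1e9 g; water is 18 g/mol; each H2O has 10 protons
        # (2 hydrogen nuclei + 8 protons bound in the oxygen).
        return kilotons * 1e9 / 18.0 * AVOGADRO * 10

    # Fiducial masses here are assumed round numbers, not official figures:
    # Super-K ~22.5 kton, Hyper-K ~190 kton.
    for name, kton in [("Super-K", 22.5), ("Hyper-K", 190.0)]:
        n = protons_in_water(kton)
        for tau in (1e34, 1e35):
            expected = n * 20 / tau  # naive expected decays in 20 years
            print(f"{name}: ~{expected:.1f} decays in 20 yr if tau = {tau:.0e} yr")

So with ~10^34 protons watched for a couple of decades, even a lifetime of 10^35 years would naively yield a handful of events, which is roughly why the detector needs to be that big.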
If it were solely to detect proton decay, yes, I agree it would be very impressive. The reality is a bit more prosaic though, as the article suggests in passing:
"In the meantime, a Nobel Prize has been won for a different discovery in the cathedral-esque water tank pertaining to particles called neutrinos."
Hyper-K is a dual-purpose detector: a neutrino detector as well as a proton decay detector. A substantial part of Hyper-K's funding should be interpreted as investment in a neutrino detector.
This is a question I've always had about proton decay: if virtual particles can spontaneously appear anywhere at any time, why couldn't some virtual quark appear in the midst of the three that make up the proton, causing it to fall apart? What keeps the constituent components of a proton immune from this behavior?
You can't have a single virtual particle appearing; it's always two, due to conservation of momentum, charge, etc.
So it'd be a quark-antiquark pair popping up.
The proton is the lightest baryon (3-quark particle); there is no lighter baryon it could decay into. The decay products have to be lighter than the original proton by at least the mass of the virtual quark pair, to repay the energy 'borrowed' from the vacuum to create the pair (because energy is always conserved). So the proton remains unaffected by the virtual particles popping in and out of existence around it. The virtual particles have no choice but to effectively annihilate with one another and disappear, paying back the energy debt.
Heavier baryons (Sigmas, Lambdas) are indeed destabilized by virtual quark pairs; that is the mechanism by which they decay, almost instantaneously, on their own.
You could have an up/anti-up quark pair that pops up close to the up quark of the proton; the up quarks could 'swap places', and then the up of the proton annihilates with the anti-up of the virtual pair. But the result is still a proton.
Re: heavy baryon decay, here is a Feynman diagram showing a Delta baryon decaying into a proton and a pion. The down/anti-down quark pair that appears in the middle of the diagram consists of virtual particles:
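For the energy bookkeeping behind this, a minimal check with rounded PDG-style masses (my own sketch):

    # Approximate rest masses in MeV/c^2 (rounded PDG values).
    MASS = {"proton": 938.3, "neutron": 939.6, "delta+": 1232.0, "pi0": 135.0}

    def decay_allowed(parent, products):
        """Energy conservation: the products must be lighter than the parent."""
        return MASS[parent] > sum(MASS[p] for p in products)

    # Delta+ -> proton + pi0: allowed, since 1232 > 938.3 + 135.
    print(decay_allowed("delta+", ["proton", "pi0"]))  # True

    # The proton is the lightest baryon, so any decay conserving baryon
    # number would need a baryon at least as heavy among the products.
    print(decay_allowed("proton", ["neutron"]))        # False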
Virtual particles are not ordinary particles that pop in and out of existence. Virtual particles are a particular way to model quantum interactions. Some argue virtual particles are "real" in an ontological sense, and pop science tends to lean toward that view, but they're certainly not real the way regular particles are real.
It's required that quantum interactions obey conservation laws, and any non-conserving "virtual configuration" must be short-lived, only existing to the extent it might affect interactions with "real", physically allowed configurations. There's no quantum interaction that lets a proton turn into something else while obeying conservation laws. In particular, normal interactions can't change the number of quarks in a given configuration, and the proton is the lowest energy configuration of 3 quarks. So it can't "tunnel" via a virtual configuration into some other real configuration.
Grand unified theories usually introduce additional mechanisms that can turn quarks into leptons, so proton decay is a test of those theories.
The thing that pop-science stories about GUTs always seem to fail to explain is: what does it mean for a symmetry to break at a certain point in time? In the present day, physical symmetries (isotropy of space, say, or conservation laws) are just static laws of nature; what was going on in the first microseconds that could mess with the laws of physics itself?
...I'm sure my question contains within it at least several misconceptions, but let that just be an illustration of how confused this kind of article leaves laypeople.
The symmetry breaking should be thought of as a phase transition that occurs as the temperature of the universe changes, like liquid water freezing into ice. The universe was initially very hot but rapidly cooled as it expanded, going through phase transitions when it passed the "freezing temperature", i.e. the temperature at which the laws of physics prefer to spontaneously break the symmetry.
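The standard textbook cartoon of this (not from the article) is a temperature-dependent double-well potential:

    V(\phi, T) = (a T^2 - \mu^2)\,\phi^2 + \lambda\,\phi^4, \qquad T_c = \mu / \sqrt{a}

For T > T_c the coefficient of \phi^2 is positive and the only minimum is the symmetric point \phi = 0; for T < T_c the minima move to \phi = \pm\sqrt{(\mu^2 - a T^2)/(2\lambda)}, and the system must settle into one of them, breaking the symmetry, just as freezing water must pick a crystal orientation.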
I've always been curious about that as well. Suppose you took a small region of the universe and reheated it to the unification temperature today; presumably you would restore the symmetry, but presumably when you let it cool down, the symmetry would not randomly break in a different way, but deterministically in the same way. Isn't that a way the analogy does not hold?
To get random breaking in a different way, presumably no amount of mere heating of matter would suffice; you would have to somehow restore the high-energy false vacuum of the Big Bang itself? I don't suppose there's any way to do that in today's universe, even in principle?
You are confused because phase transition is not a good description of symmetry breaking (sorry cohomologo).
You have to understand that there already is a difference between the two things (forces, particles, whatever). It's just that at certain temperatures, forces, or size ranges (whatever), the broken symmetry is not visible and the two appear to be identical.
So the search is on to understand why these two things should act so identically in certain ways, and yet not identically in other ways, i.e. what breaks their symmetry.
Finding out what breaks their symmetry tells you a LOT about the particles: it tells you what is identical and what is different.
For example, an up and a down quark are identical in all ways except mass and charge. So in certain experiments they appear identical; in others those differences show up: their symmetry is broken.
But noticing that they are identical in certain situations tells you a lot about quarks, and noticing where they differ tells you even more.
> Suppose you took a small region of the universe and reheated it to be unification temperature today...I don't suppose there's any way to do that...?
You're describing a particle accelerator. When we talk about the LHC accessing "higher energies" than the Tevatron [1] we are saying it is "baking" small parts of the universe to higher and higher temperatures.
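The conversion is just E = k_B T (round numbers, my own arithmetic):

    K_PER_EV = 1.0 / 8.617e-5  # kelvin per eV, i.e. 1/k_B

    def temperature_kelvin(energy_tev):
        """Very rough T ~ E/k_B for a collision energy."""
        return energy_tev * 1e12 * K_PER_EV

    print(f"Tevatron  ~2 TeV    -> {temperature_kelvin(2):.1e} K")
    print(f"LHC      ~13 TeV    -> {temperature_kelvin(13):.1e} K")
    print(f"GUT scale ~1e13 TeV -> {temperature_kelvin(1e13):.1e} K")

So the LHC "bakes" matter to roughly 10^17 K, while the grand unification scale of ~10^16 GeV sits around 10^29 K, far beyond any conceivable accelerator.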
Symmetry breaking just means that when I look at an object in a certain range (energy range, size, whatever), it looks identical to another object. But when I widen my view, it suddenly diverges from that other object and acts differently.
So there is a search for why.
For example, an up and a down quark look virtually identical, because they are quite similar. But at a certain point something differs in an experiment, and you realize it's because their masses are not identical: in that situation the symmetry is broken and you can see the different masses.
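A concrete instance of this (rounded masses, my own sketch): swap every up for a down and a proton (uud) becomes a neutron (udd), and the mass barely changes.

    # Rounded masses in MeV/c^2 (approximate PDG values).
    PROTON, NEUTRON = 938.27, 939.57  # uud vs udd

    # Under exact up/down (isospin) symmetry these would be equal.
    split = (NEUTRON - PROTON) / PROTON
    print(f"relative proton-neutron mass split: {split:.3%}")  # ~0.139%
    # The tiny residual comes precisely from where the symmetry is broken:
    # the u/d quark mass difference and their different electric charges.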
> ... researchers have found a variety of other symmetry groups that the existing particles might fit into, with extra features and variables that can make protons decay much more slowly.
Well, "extra ... variables that can make ..." is often a sign of EOL desperation for theories. What I got from Kuhn, anyway.
Assume the physical universe takes the form of a set of axiomatic systems whose statements are elementary particles and whose interactions are logical operations; e.g., metals in the presence of ions will form salts. Matter creates the gravitational interaction as a function of mass.
Because, on one side, we've been able to deduce, in principle, all such interactions (and there are maaaaany) from just two. So it'd be surprising. On the other hand, these two interactions, General Relativity and the Standard Model, talk about the exact same things (the same "statements" in your example). While in many energy ranges the effects of one or the other are too small to be noticed, there definitely is an energy range in which both should have strong effects. Yet they aren't compatible. If we performed experiments in those ranges, the theory describing what we measure would need to be compatible with both; an extension: the unified theory.
I think it's an effort to understand exactly how the mechanics of the Big Bang worked, and how you can go from an infinitesimal point of incredible energy to an enormous, expansive universe with the diverse laws we observe today.
From the article:
> If the forces were indeed one during the “grand unification epoch” of the universe’s first trillionth of a trillionth of a trillionth of a second, then particles that now have distinct responses to the three forces would then have been symmetric and interchangeable, like facets of a crystal. As the universe cooled, these symmetries would have broken, like a crystal shattering, introducing distinct particles and the complexity seen in the universe today.
We assume the precondition that the Universe we experience and observe originated in a vacuum of... existences, for lack of a better term. This seems myopic; why should it be so?
What if the universe we experience is the product of interactions between distinct universes, each circumscribed by different physical laws, and the only universe we can sufficiently experience and observe is the one governed by the Strong Nuclear Force?
It could very well be that way, but what is the evidence for that origin? Why not any number of other plausible explanations? It is not myopic to accept the explanation best supported by available evidence, until sufficient new evidence supports a different explanation. Right now, the Universe coming into existence as a solitary singularity is the best available theory.
Because, even in the simplest case, both would involve changes in energy/enthalpy and therefore would interact with each other. In particular, reactions in one might drive the equilibrium populations in the other, and vice versa.
One problem with simply combining the current General Relativity and Standard Model theories is local vs. non-local determination. That is moderately complex to explain... perhaps someone has a better source than Wikipedia:
Cross out the Third (of five) Age of the universe in the 1999 book of that name. That was the era dominated by degenerate stars, lasting until all baryons decay. Then the universe would be dominated by evaporating black holes.
https://en.m.wikipedia.org/wiki/The_Five_Ages_of_the_Univers...
A neutrino interaction with the electrons or nuclei of water can produce a charged particle that moves faster than the speed of light in water (not to be confused with exceeding the speed of light in a vacuum). This creates a cone of light known as Cherenkov radiation, which is the optical equivalent of a sonic boom. The Cherenkov light is projected as a ring on the wall of the detector and recorded...
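The cone angle follows from the standard Cherenkov formula cos(theta) = 1/(n*beta), with n ~ 1.33 for water, so the particle must have beta > 1/n ~ 0.75 to radiate at all (textbook optics, not from the article); a quick sketch:

    import math

    N_WATER = 1.33  # refractive index of water

    def cherenkov_angle_deg(beta):
        """Opening angle of the Cherenkov cone; None below threshold."""
        if beta * N_WATER <= 1.0:
            return None  # slower than light in water: no radiation
        return math.degrees(math.acos(1.0 / (N_WATER * beta)))

    print(cherenkov_angle_deg(0.70))   # None -- below the beta > 1/n threshold
    print(cherenkov_angle_deg(0.999))  # ~41 degrees: the ring the detector images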