Physicists rewrite a quantum rule that clashes with our universe (quantamagazine.org)
90 points by akeck on Sept 27, 2022 | 101 comments



> But the extra photon wasn’t created by that special process, so instead of disappearing when you turn back time, its wavelength will eventually get impossibly small, concentrating its energy so greatly that the photon collapses into a black hole. This creates a paradox, absurdly implying that — in this fictional, expanding universe — microscopic black holes convert into photons. The thought experiment suggests that a naïve mashup of unitarity and cosmic expansion doesn’t work.

The Laws of Thermodynamics state that you can’t create a photon from nothing, so isn’t introducing an “extra photon” into the universe already a paradox? Who would be the least bit surprised that paradoxes create paradoxes?


I don't know whether there was some clever construct to the thought experiment that was lost by the time of printing, or whether it's just a bizarre case of "I added something that violates conservation of energy, and lookie here, we can make a paradox out of it!", as if some double negation were being used to "prove" something.

If you imagine something physically implausible as your setup, then yeah, it's likely to have physically implausible consequences; and in this case it seems rigged for paradox, since we're creating a photon in one moment but then supposing it already existed and had a history. This just seems too awkward for a theoretician to actually propose.

Our theories are just maths that we CAN use to describe reality, when reality is suitably captured and represented within them, but that can ALSO describe something with no bearing on reality, just because "the types fit", as it were, to borrow from programming. The theory is still "good" because it works for experimental predictions, even if the implausible can happen when you try it "outside" reality; if those issues creep into reality, then it breaks down in circumstances where it is "bad" or "incomplete". I'm sure Giddings understands that, so I'd lean more towards this being a bad translation from a more robust mathematical thought experiment that wasn't put into English well, or that it just didn't make it to press unscathed from the writer's and perhaps editor's misunderstandings, despite best intentions and efforts.

Because casting aspersions would not be fair when we are not experts and do not have the full context before us. I hope that Quanta releases clarifications and edits to help us understand that section better.


His later statement that plausible states are only thus if they "evolve backward without generating contradictions" is a truism if I ever heard one. You mean plausible states come from a prior plausible state, and can't just be created willy-nilly? Wow! That would be pretty damning for your thought experiment as presented!

Your thought experiment photon not being created by "special processes" like any of the interactions of quantum mechanics (such as virtual pair annihilation or emission from atoms) means that it presupposes an origin for such a photon, and you find it comes from an evaporated micro black hole? That doesn't feel like a paradox, that feels like the conclusion of what may be an implausible setup. Conservation of energy demands the photon comes from somewhere, and you found that it was emitted by a black hole just as it vanishes; so perhaps it's less implausible than imagined, or perhaps not? Either way, the only paradox is the demand that we be able to place a photon in space without any cause, work backwards to assume where in the vacuum this photon sprang up from to somehow make this implausible scenario plausible, and then, when you find it, and don't trace back further to ask what made that, assert it's a paradox because you don't think a black hole makes sense. I still find Giddings' sections in this article very confusing and in need of clarification. But I doubt the universe is in the habit of retconning, though arguably we wouldn't be able to tell either way, so oh well.


The idea is that they magically add a photon now, but they add it in a very special way that changes the future and also the past, so that the new universe is consistent from beginning to end. If the laws of physics are reversible, as we currently think, this magic operation makes sense. You can't actually do it in the real world, but it's fine to imagine a new universe.

The problem is that photons appear/disappear when they interact with a charged particle, but you can imagine a universe with an unlucky photon that never collides with a charged particle in the future, and that is so unlucky that when you play the universe back like a movie in reverse, it also never collides (collided) with a charged particle in the past.


> you can imagine a universe with an unlucky photon that never collides with a charged particle in the future

Have we proven that though? I thought there was still some debate about how to think about photons. From the photon’s perspective it is a line in a four dimensional space, from source to sink with no time component, no curve (straight line through curved space).


I took away from that example that, even though unitarity clashes with cosmology to an extent, you can’t just naively break it or you get “even worse” paradoxes like this one. Plus, conservation of energy doesn’t apply to the universe as a whole, if it’s expanding (space has energy.) The assumption which is being violated by adding the single photon is really unitarity and not the laws of thermodynamics. (But it’s been a while since grad school for me, could be wrong here.)


> Plus, conservation of energy doesn’t apply to the universe as a whole, if it’s expanding (space has energy.)

I don’t believe that, but we may never be able to prove it either way, or we may discover we live in a bubble of space where physics has different properties. We don’t know where the space is “coming from”. It’s the same problem laypeople have with the Big Bang Theory. Well, where did the Bang come from? What happened before? There is no Before. The Bang and Time were born together.

We may well find that we live inside a black hole. The space inside a black hole grows. One theory says that infinite curvature means it grows at the speed of light. If we live in a black hole then perhaps the energy to create the additional space is coming from outside, our “experiments” are not within a closed system, and look like Free Energy but are not.

Or maybe we live in a physics bubble where different rules apply. Which might solve the whole, “how will we ever explore this Universe in time?” ‘dilemma’ because any ship we send outside that bubble starts to disassemble itself. The floor is lava.


Energy is guaranteed to be conserved if the Hamiltonian has time symmetry. The Hamiltonian of the universe may not have that property, and probably doesn’t. You can Wikipedia Noether’s theorem for more info.


Afaik particles can seem to appear “out of nothing”, at random, in nature. They are called virtual particles.

I believe the effect is very pronounced in the Casimir Effect https://en.wikipedia.org/wiki/Casimir_effect

The analogy is imprecise though, according to people who know the maths/physics behind it; maybe someone can explain it better than me.



Probably completely wrong, but the mnemonic I use for virtual particles, ever since I read Hawking as a wee lad (read: before decades of further literature were published), is that matter and space are part of a whole: any time there's space (pun not intended), things can get wobbly, and the variation has a sane average but the outliers can be weird. Like a rogue wave in the ocean, if the wobbles line up they 'look' like something, but usually they fade back into nothing before they smash into anything that would observe them.

So virtual particles near an event horizon can interact with matter, but most of the time they just don't.

I have persisted with this mnemonic because nobody has made a headline of "Hawking was wrong!" so far.


Hawking Radiation is the process where virtual particles escape annihilation, and the surviving particle steals that energy from a black hole. If the black hole sits in a region of space without enough infalling matter, it could eventually lose its event horizon and explode.


The Casimir effect is better explained by the zero point energy of the quantum vacuum, rather than "virtual particles". The latter is not necessary to explain it


If the supposed experimental conditions cannot plausibly occur in nature I don't think it's a valid thought experiment.


Conservation of energy is not fundamental. Noether’s Theorem shows that Conservation of Energy only arises in systems that exhibit “Time Translation Symmetry”, i.e. conducting an experiment now and conducting it later always give the same result.

If you fling a charged particle like an electron through an increasing magnetic field, it will appear to gain energy.

Locally our universe seems to obey Time Translation Symmetry and thus also Conservation of Energy. However the expanding universe is not Time Translation Symmetric, at the scale of galactic clusters new “energy” is created all the time just like the electron in the magnetic field. That’s what “dark energy” is.

All the different conservation laws have roots in symmetries of the system.


> If you fling a charged particle like an electron through an increasing magnetic field, it will appear to gain energy.

“If I place a particle into an energetic system, it gains energy.” This is an even worse example than the one in the article. You’re trying to define a closed system that is stealing energy from outside the “closed system”. That’s the same logical fallacy that leads people to think they have discovered a Perpetual Motion Machine. How do people get into advanced topics like Time Translation Symmetry without understanding (and being able to explain) the basics?


I have a Physics degree for what it's worth. Though I'll admit I didn't articulate what I meant clearly.

The electron example was assuming that the increasing magnetic field was part of this hypothetical toy universe that violates time translation symmetry. Obviously, if you construct such an experiment in our universe, it's not surprising that the electron gains energy because, as you said, it's not an isolated system. When you consider the system generating the magnetic field, it becomes obvious that the overall system does not gain energy.

However our actual universe, at the galactic scale, is just like the toy example. New Space / Energy is created out of nothing as the universe's expansion accelerates. There is no paradox here because conservation of energy is not fundamental but emergent from symmetries via Noether's theorem. Space is a "material" that is not "diluted" by expansion and contains energy.

The inflaton field at the heart of inflation / big bang theory also expands without dilution.

Perhaps there is some larger system our universe is embedded in that does respect time translation symmetry / conservation of energy, and the expansion is being driven by a larger process losing energy, but we can't observe it.


And on the scale of the universe as a whole, energy continually decreases. Expansion entails increase of the wavelength of all light, which implies decrease in energy.


I don’t think the answer is so clear for the universe as a whole. The energy of the photons decreases due to the expansion - but the expansion can have other additional effects.


Energy doesn't decrease. Entropy increases.


Dang, I did my phd in L2-based spaces... completely worthless now!


Don't burn your phd yet. They are embedding the Hilbert space that represents (is?) the current universe in a larger Hilbert space that represents (is?) the universe in the future. They change the "size"[1], but they don't change the L2 norm.

[1] The dimension of both is probably infinite, but one is a proper subspace of the other.


C'mon. The L(2+eps) of the expanding universe is approximated very well locally by L2.


Once more, in English? (Or maybe in ELI5-speak?)


L2 is also known as Euclidean space. It says the length of a vector is the square root of the sum of the squares of its components. There are other ways of defining vector length (or distance). E.g., in L1 it is the sum of the absolute values of the components. The way distance is measured changes the properties of a space. E.g., in L2, Pythagoras is trivially true, but in L1 the hypotenuse would be as long as the sum of both other sides. That has far-reaching consequences. E.g., in L1, you can't take a shortcut.
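
To make that concrete, here is a tiny numpy sketch of my own (not from the article), comparing the two notions of length on the same vector:

  # My toy example: L2 vs L1 length of the same vector.
  # The 3-4-5 right triangle makes the difference obvious.
  import numpy as np
  v = np.array([3.0, 4.0])           # the two legs of a right triangle
  l2 = np.sqrt(np.sum(v**2))         # Euclidean (L2) length: sqrt(9 + 16) = 5
  l1 = np.sum(np.abs(v))             # "taxicab" (L1) length: 3 + 4 = 7
  print(l2, np.linalg.norm(v, 2))    # 5.0 5.0
  print(l1, np.linalg.norm(v, 1))    # 7.0 7.0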

Wikipedia is not your friend, it seems. It's just terse definitions. Perhaps other math sites are more helpful.


And my original "joke" is that the Schrodinger operator is unitary on L2 spaces (which is like a mathematical statement of conservation of "energy"), so if we're throwing out unitarity then I may as well throw out my phd.

I actually studied dispersive PDEs (Schrödinger, KdV, etc.), though not with a computer. In that study, the properties of the Fourier transform on L2-based spaces are very important.
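
If anyone wants to see that unitarity numerically, here is a rough sketch of my own (arbitrary units, nothing from the article): free Schrödinger evolution done as a phase multiplication in Fourier space, which leaves the discrete L2 norm untouched.

  # My toy example: evolve a Gaussian wave packet under the free
  # Schrodinger equation via the FFT; the L2 norm is preserved.
  import numpy as np
  N, L, t = 1024, 100.0, 5.0
  x = np.linspace(-L/2, L/2, N, endpoint=False)
  k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
  psi0 = np.exp(-x**2) * np.exp(2j * x)              # packet with some momentum
  psit = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))
  print(np.linalg.norm(psi0), np.linalg.norm(psit))  # identical to machine precision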

The other joke is that I do software now, so of course the phd was useless, haha!


Ah, that. OK. I know it under a different name (2-norm or 2-metric).


It may not describe the entire universe, but it's a useful approximation for short distances. Like pretending the earth is flat.


The earth is flat for mapping, say, a small town?


Or assuming spherical cows.


IANAP. The article says that as the Universe expands, "new space" gets created. Is that correct? I thought it was space itself that expanded. I didn't get the reasoning, but the Paradox of the Irreversible Photon seems to be an implication of this creation of new space.

I find the article frustrating, because I can't follow the reasoning, because they've left out critical steps. I wonder where I can find an explanation of this stuff that I can understand.


It is indeed frustratingly oversimplified and abstruse, in my view.

Space “expanding”: imagine a 10cm ruler (with markings) such that the markings “compress”. Now you have the same ruler but “longer”. That is what is meant by space “expanding” (in some sense, obviously).


OK, the expansion of the universe creates new space, but why should it create photons to fill that space? Or why isn't the total energy still a constant (a new photon there, one less photon here)?


The total energy of what?

In general it is hard to assign a total energy to a universe in General Relativity. This is a FAQ <https://math.ucr.edu/home/baez/physics/Relativity/GR/energy_...>. To that we can add that we just don't know enough about our universe -- how big is it? how much matter is in it? -- because we can only see a fraction of the total universe both because a lot of it has expanded away from our view and because our instruments are not yet good enough to pick out features obscured by dust and gas and/or by very low interaction with electromagnetism (doesn't create strong spectral emission or absorption lines).

What we can do is consider the energy-density at a point in a general curved spacetime. That quantity can be subdivided into, roughly, the part that moves ultra-relativistically (including radiation like photons) and that which moves slowly compared to that (including the overwhelming majority of protons and neutrons). From there we can justify a universal average because the universe is (at large scales) homogeneous and isotropic: with a wide enough view everything looks the same, and in every direction.

Once we have the average energy-density we can think of how it evolves in an expanding universe. Non-relativistic matter stays mostly clumped in galaxies and those separate, but because we take an average this means that non-relativistic matter's contribution to the energy-density at any point drops with expansion. Ultra-relativistic matter spreads everywhere, so it too dilutes, and additionally it gets colder (seen as e.g. a drop in the energy-density of photons at a point, which corresponds with a drop in the average photon's frequency or equivalently a stretching of the average photon's wavelength; the latter might lead to an intuitive understanding that an average photon gets smeared over increasing numbers of points, so there is less photon or less photon-energy at any point).
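
As a toy illustration of that dilution (my own back-of-envelope sketch, with made-up starting densities), the standard scalings with the scale factor a are: matter goes as a^-3 (volume dilution), radiation as a^-4 (the extra power is the wavelength stretching just described), and vacuum energy stays constant.

  # My sketch, arbitrary units: average energy densities vs. scale factor a.
  def densities(a, rho_m0=1.0, rho_r0=1.0, rho_v0=1.0):
      return {
          "matter":    rho_m0 * a**-3,   # diluted by volume
          "radiation": rho_r0 * a**-4,   # volume dilution plus redshift
          "vacuum":    rho_v0,           # not diluted by expansion
      }
  for a in (1, 2, 10):                   # a = 1 is "now"; larger a is the future
      print(a, densities(a))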

So far this is treating photons like classical particles, or even like discrete quantum particles. In physical cosmology, one would want to consider them as the field content of the electromagnetic field of the Standard Model, which is a relativistic quantum field theory (QFT). Often, wanting to do it "properly" runs into tractability problems in the calculations, but there are some important features of a QFT compared to a non-relativistic theory in which there is some constant number of photons.

The most important part is that last: in a QFT, the count of photons depends on the observer's acceleration. An accelerated observer sees more particles than a freely-falling, zero-angular-momentum, zero-force observer who cannot measure any acceleration. This is the Unruh effect. Both the accelerated observer and the unaccelerated one count correctly, and can calculate what the other observer counts, although each may have different explanations about why its count differs from the other's. The equivalence principle tells us that an observer standing on a gravitating object (like the surface of the Earth) sees most of the effects that an observer in deep space accelerated by some propulsive mechanism sees. The energy-density at a point, in a QFT on curved spacetime, is an observer-dependent quantity.

An accelerating expanding universe has tremendous spacetime curvature, and one result is that the particle-counts obtained by an observer depend on when the particles are counted, all other things being equal. A future observer will count more particles than a past one. (This is also how Hawking radiation works: an observer in the past of an evaporating black hole sees fewer particles than a later observer; those particles are mostly in the region near the black hole; by contrast a late-time cosmological observer will see "extra" or "new" particles at great distances).

So, what can we say about the total photon content? Firstly, it's time-dependent. Secondly, it depends on how the observer is moving against the electromagnetic field in the appropriate QFT. Thirdly, it depends on detailed knowledge of the distribution of photons (practically impossible, so we average). Fourthly, it depends on detailed knowledge of the expansion, curvature, and acceleration parameters of the expanding spacetime (in practice that's also hard to measure). Fifthly, it would depend on recovering details of the universe that are most likely inaccessible to us, because of the expansion carrying them out of view, but also because the hot dense early days of our universe are electromagnetically opaque, and the hotter denser earlier moments of our universe probably have no photons at all (the start of the "electro-weak epoch").

Consequently, our best modelling is only approximations based on the information we have at any given moment. However, such modelling is amenable to perturbative methods, where we add a small difference "by hand" and trace out the consequences to the modelled universe. Adding a photon here does not require removal of a photon there, and at any rate the model should not keep the total photon count constant for general observers (if you hold it constant for a particular chosen observer or family of observers, it won't be constant for other, different observers). So for "Eulerian" observers freely floating and looking at the expansion flow, adding a small perturbation like an "extra" photon should not make much difference. So instead we might want to treat the new photon as a "Lagrangian" observer, tracing out its peculiar experience of the expanding universe in which it exists.

Taking that latter approach, we find that we can readily introduce a photon such that in its distant past it must have had a truly microscopic wavelength, and thus at the point it occupies it vastly exceeds the average photon-energy density of the universe. When we make a small perturbation we expect only small effects. But in this case, by adding a pretty vanilla photon at a point in the universe's distant future and tracing out the consequences, we find we require a big change -- the energy-density "bump" at the points in our test photon's past gets higher and higher, and it ultimately becomes a significant anisotropy and inhomogeneity that disrupts the justification of our averaging. In extremis, the result would be a part of the universe that stands out as obviously very different from what we see in the sky. Indeed, we can set things up so that we can't even make a prediction from Standard Model physics about the behaviour of the ultra-high-energy early photon that we expect to redshift into our test photon we added in as a small perturbation.

In essence, this is just another example of the difficulties one might encounter when mixing quantum field theory with general relativity. General relativity does not in itself forbid even infinite energy at a point, but the behaviour of the Standard Model (a quantum field theory) is not well understood above about 10^14 GeV/c^2 energies. General relativity requires a constraint imposed upon choices of initial values, which in the case of our test photon might come from an extension to the Standard Model, or which might be put in "by hand"; the Standard Model needs an ultraviolet completion for it to provide or at least justify the constraint.


Wow. Thank you. This could be a post on its own.


I found this fascinating! Unitarity does seem to suggest that the future somehow equals the past, which does seem to leave a bit of explaining to do (why would time exist/happen at all?) I also like the way that the new 'isometry' method resolves this by extending into a new dimension. It reminds me of the idea of the self-organisation of space (from previously dimensionless information) proposed by process-physics back in the 90s.


What is "process-physics", what does it deal with, sorry?


I'm not a physicist, but I read some of Reg Cahill's papers a long time ago when they got a mention in some science magazine. The video in a sibling comment is the same thing.

It's definitely fringe physics, but my (perhaps old and faulty?) crackpot-meter hasn't gone off yet! (although regular physicists seem to dismiss it quite freely).

Reg's page doesn't seem to link to the old stuff any more like it used to:

https://www.flinders.edu.au/people/reg.cahill

I'm not a strong proponent of it or anything but I just liked the overall philosophical replacement of objects with change/relation as the core thing and also especially the idea that space itself could have self-organized out of something more primordial (ie less structured/dimensional).


Recently stumbled on the "process philosophy" concept, and it means something along the lines of: the world is not made up of entities, but of dynamic transformations.

Basically the world doesn't have matter at its base, but processes.

It's still an idea I'm in the process of understanding, but it seems like quite cool stuff.

I heard it being mentioned that it might be linked with quantum field theory and somehow with string theory also, but I didn't yet understand how.

Maybe process physics is different tho.


> the world is not made up by entities, but it's made up from dynamic transformations

Gosh, that reminds me of Wittgenstein's opening words in the TLP: "The world is the totality of facts, not of things".


Maybe process informatics is functional programming


Process Physics does include some initial ideas/structures that do feel a bit like functional programming, I agree, but I think that it actually swings far away from that once the rubber of the idea (speculatively, of course) hits the road.

Specifically the way that it explicitly avoids using syntactic information (i.e. anything symbolic, like how space is symbolic in the standard model) and instead models the universe with only semantic (indeed ultimately self-referential) information. This is curiously similar to neural networks, where how a result is arrived at may be very well hidden in the details of the entire behavior of the network. A framework for thinking about bootstrapping the universe when there was no universe before is kind of a major component of the whole PP thing, in my opinion.

This does seem to move the idea to quite a different realm than FP, in my opinion (maybe just a point-of-view thing?), but I imagine it more like some incredibly, infinitely parallel process like GOL/reaction-diffusion (without the virtual aspect we usually have with our normal computers), or even something more Turing-machine-like (anything with Turing-universal compute capability can follow rules, right?)

I'm not trying to sound authoritative about this at all! I am certainly not a physicist (Visual Effects Artist/Pipeline/Teacher!); this is just something I took an interest in, and I don't think it got anything approaching the reception that it perhaps could have had if there were a little more open-mindedness and willingness to take conceptual leaps in the surrounding academic community.


I tried reading your text a few times, but I haven't yet completely understood your point. Will give it a few days and come back with a tentative attempt at describing my own opinion.

(writing this with the intention of informing you that this piece of content you wrote is appreciated by at least one person on earth at this point in time)


That is very kind of you! I reckon that what I said might make a lot more sense when combined with this pdf:

http://mountainman.com.au/process_physics/HPS13.pdf


Some googling led me to this: https://www.youtube.com/watch?v=fEAztJdKN88 Not sure if it's the one mentioned by GP. The channel that posted this video seems to have others on the subject.


Possibly a dumb question, but why does that article state a photon's wavelength increases with time? I am paraphrasing the line where they explain how an expanding universe suggests black holes cause photons in the thought experiment.

Also, if we do chuck out unitarity, does that mean we can “break” experiments like Stern-Gerlach by just making them take a really long time?


You can easily break Stern-Gerlach by spreading the “measurements” super far apart, so far that interactions with other matter destroy the effects of the first measurements.

It’s incredibly easy to lose coherence in other quantum experiments as well.


I don't doubt that quantum experiments like Stern-Gerlach are sensitive! :)

I probably should have specified: can you break Stern-Gerlach specifically by introducing isometry and then constructing an experiment that allows the particle to move through an expanding spacetime?


If you can expand spacetime locally, why not?


The expansion of the universe would cause a slow increase in wavelength over time.


Does this mean that the notion of conservation of energy is actually preserved? Everywhere I look says it's not, except on local scales, but this almost sounds like dark energy / the creation of more space is fuelled by light.


Energy conservation is... tricky. Under Noether's Theorem it's guaranteed by time translation symmetry, which is kind of sketchy at cosmological scale. My understanding is that it can be violated under conditions of changing gravitational fields or somesuch, but I don't really understand what specific circumstances do the trick.

Ed: don't get hung up on "expansion of space is fuelled by light" or whatever, they only picked a photon for their thought experiment because it was easy to reason about.


The total “energy” of all photons in the universe is declining, but the acceleration is increasing. Hard to believe the first is causing the second.


> why does the article state a photon's wavelength increases with time?

Photons travelling cosmological distances (i.e., leaving galaxy clusters) in an expanding universe (like ours) will experience cosmological redshift.

When we look at distant clusters of galaxies with spiral shapes similar to those in our own (e.g. the Andromeda galaxy) we see the same spiral morphologies, but with reduced angle on the sky, reduced brightness, and a redshifting of the spectral lines associated with ionized gas, neutral hydrogen, and so on. The angle-brightness relationship is evidence of distance (if you are hundreds of metres from a lightbulb it won't seem as bright or as large as if you were right beside it), and the spectral lines are evidence of gas and dust (absorbing and re-radiating light), and the chemical composition of stars on the other side from us of that intervening absorptive material. If we have a smaller, dimmer, redder spiral behind a bigger, brighter, bluer (but still distant) spiral, we expect to see two distinct sets of red-shifted spectral lines. In fact, we see significant stacks <https://www.astro.ucla.edu/~wright/Lyman-alpha-forest.html>. The farther light travels, the redder it gets. More detail: <https://astronomy.swin.edu.au/cosmos/C/Cosmological+Redshift>

There are lots of ways to think about the cosmological redshift, but it is a readily observable feature of our universe, and can be compared with other readily observable features (the small position-dependent gravitational redshift which we can measure here on Earth; and the kinematic "special relativity" redshift reciprocally found between observers in uniform motion, likewise). Some details about local redshift: <http://www.leapsecond.com/pages/atomic-tom/> <https://en.wikipedia.org/wiki/Hafele%E2%80%93Keating_experim...>. Cosmological redshift as an additional feature is so clear that it is more common in extragalactic astrophysics and physical cosmology to talk about the "z" (redshift) of distant galaxies than describing distance in terms of billions of light-years or megaparsecs.

"z" is a relationship between emitted light and detected light. If it's emitted in the past at say 500 nanometres (greenish), and is detected today at 1000 nanometres (in the near infrared), z = 1, and the emission happened between 7 and 8 billion years ago. (Further info: <https://apod.nasa.gov/apod/ap130408.html>; the table there captures the values for the acceleration of the expansion of the universe as known at the time it was prepared).

Under time reversal we expect that the 1000 nm photon "starts" here and ends up at the distant galaxy as a 500 nm photon -- a cosmological blueshift.

If we take a 500 nm photon from today and aim it into deep deep space it might fly off practically forever, becoming extremely infrared in the process. If we start with a very infrared -- radio -- extremely low frequency photon time-travelling backwards from the distant future, we get a green 500 nm photon here.

But what happens if we start billions of years in the future with a 500 nm photon using z=1? The wavelength halves, so "here" in the past we find a 250 nm photon, in the middle-ultraviolet (UV-C, good for killing germs!). If we start the backwards-time-travelling green photon even further in the future, we end up with a photon even deeper into the ultraviolet. (Or, conversely, in our usual direction of time, if we launch a 125 nm photon from here, then in something like 40 billion years it would be detected as a green 500 nm photon, thanks to the cosmological redshift).

There is a symmetry: light going into the far future gets redder, light going into the far past gets bluer. But the symmetry isn't perfect because there is probably a minimum wavelength for light and probably no maximum wavelength. That's one challenge for unitarity. It's also not perfect in a universe with structure in it. The structure dilutes away in the future, so the reddening light is less and less likely to collide with things like galaxies or planets. The structure concentrates in the past, so the blue-ing light is more likely to collide with dense gas and dust. And the effects upon collision differ: a very long wave photon is unlikely to disturb whatever it runs into, but a very short wave ultraviolet photon can disrupt molecular bonds, strip electrons from atoms, or disintegrate atomic nuclei. In extremis it might become so energetic and so short-wavelength that it develops an event horizon, at least if it gets near any other matter.

We can reach that extreme case by starting our green photon many many trillions of years in the future. If we let it time-travel in the ordinary way, into the future, it just eventually becomes deep infrared. But if we let it time-travel back towards here-and-now, it would arrive from the future as an extreme gamma ray. Don't let that hit you, it'd blow you right up.

There are other broken time-travel symmetries that don't require cosmological distances or quantum behaviour, so we should no more expect to see this sort of super-energetic gamma ray around us than we see broken mugs of tea un-breaking themselves and leaping up onto a table in front of the paw of a bored cat. Quantum mechanics might strive to fully describe systems like cats introducing small, fragile, liquid-filled containers to the concept of gravitational free-fall, but (despite the many successes of the Standard Model of Particle Physics) can't. Maybe that's why Erwin Schrödinger thought so hard about doing something unfathomably mean to a domestic feline.

Finally, cosmological redshift is caused by the spacetime curvature of an expanding universe. The parts of the universe within galaxy clusters are not expanding (technically, the interiors and near-exteriors of galaxy clusters are poorly described by any expanding-spacetime metric, and well-described by families of collapsing-spacetime metrics), and the rest (all the space between clusters) is expanding gently enough that one has to travel across a lot of it to see a very strong (rather than merely technically measurable) effect. Since travel speed is capped at the speed of light, it would take a long time to get far enough from the Milky Way to notice the effects of traversing the expanding spacetime well outside it. Inter-galaxy-cluster distances are large though, and we can see very distant galaxies, so redshift accumulates. We see galaxies with redshifts of more than eleven (z = 11 means a 500 nm green photon from then gives us a 6000 nm mid-infrared one now).

There is stronger curvature to be found closer to home, so if you want to break a quantum experiment which only works in very flat spacetime, take the apparatus only a few hundred light years to a nearby neutron star in our own galaxy.

Indeed, whether there is unitarity at all in the presence of strong gravity has been a matter of debate for some decades, so you might end up taking a bunch of experimental physicists with you to try out other things around that compact object. (A black hole, with a horizon, is another good choice, since there are arguments about whether horizons are physically special surfaces that somehow preserve or somehow obviously violate unitarity in ways that even the heaviest neutron star cannot -- best to visit a small black hole, with strong curvature at the horizon, and a huge black hole, with curvature at the horizon gentler than curvature here on Earth's surface. Of course there are arguments that outside the horizon you can't observe violations of or preservation of unitarity within, so you might have to wait around the black hole a long time for it to finally, totally, evaporate; in that case there's time to do a lonnng near-the-speed-of-light extragalactic trip too).

tl;dr for your last question: good question, and generalizes to related questions, but with nothing close to a settled answer available at this time.


> When particles interact, the probability of all possible outcomes must sum to 100%. Unitarity severely limits how atoms and subatomic particles might evolve from moment to moment. It also ensures that change is a two-way street: Any imaginable event at the quantum scale can be undone, at least on paper.

I know, I'm just a measly computer scientist and not an honorable physicist, but these two sentences from the third paragraph don't make sense to me.

Why should the fact that you cannot exceed a probability of 1.0 when considering all possible outcomes (which is typically one of the foundational axioms that form the basis of the definition of probability) be limiting?

How does the bit about "can be undone" (at least at quantum scale!) follow? What is meant by "at least on paper"?


I'll try my best but it has been a few years since quantum for me.

Basically, over time a quantum system can be transformed by applying an operator to the original state to yield the new state. These operators can be represented as a "transformation" from one quantum space to another (the one at the future time).

Because these transformations preserve probability (which is like a norm of the system), they must have a property known as being unitary.

Unitary matrices are invertible, so this is what they mean by "can be undone."
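
A minimal numpy sketch of my own showing both properties at once, with a simple 2x2 unitary standing in for the time evolution:

  # My toy example: a unitary preserves total probability and can be undone.
  import numpy as np
  U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # 2x2 unitary (Hadamard-like)
  psi = np.array([0.8, 0.6])                               # |amplitudes|^2 sum to 1
  out = U @ psi
  print(np.sum(np.abs(psi)**2), np.sum(np.abs(out)**2))    # both 1.0: probability kept
  print(np.allclose(U.conj().T @ out, psi))                # True: undone by U^dagger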


If all probabilities have to sum to 100% at all times, it limits how you can go from one state to another. I think an additional fact you’re unaware of here is that states transition into each other smoothly. So if you’re at 80% state A and 20% state B, then a very small time later you can only be at, say, 79% A and 21% B. This is related to the fact that the time evolution operator can be written as a power series in dt.
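
A quick toy check of that smoothness (my own made-up Hamiltonian, nothing physical), using U(dt) = exp(-iH dt):

  # My sketch: probabilities drift continuously under exp(-iH dt).
  import numpy as np
  from scipy.linalg import expm
  H = np.array([[0.0, 1.0], [1.0, 0.0]])                        # some Hermitian H
  psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)   # 80% A, 20% B
  for dt in (0.0, 0.01, 0.02, 0.05):
      print(dt, np.abs(expm(-1j * H * dt) @ psi)**2)            # changes gradually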


The explanation in the press article is quite confusing, but the probability does not exceed 1.0. The idea is that instead of a transformation from a space of dimension 2 into a space of dimension 2, they make a transformation from a space of dimension 2 into a space of dimension 3. In both cases the total probability is conserved and is 1.0.
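
A toy version of that (my own sketch, not the paper's actual construction): a 3x2 matrix V with V†V = I embeds the 2-dimensional space into a 3-dimensional one without changing the total probability.

  # My toy example: an isometry from dimension 2 into dimension 3.
  import numpy as np
  V = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])                 # simplest isometry: embed and pad with zero
  psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])
  print(np.allclose(V.T @ V, np.eye(2)))     # True: the isometry condition
  print(np.sum(np.abs(V @ psi)**2))          # 1.0: total probability still conserved
  # Note: V @ V.T is not the identity on the 3d space, so V is not unitary there.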


Something related.

In classical computers you can do:

  X = 4
  X = 5
In quantum computers this sequence of operations is impossible, because the program needs to respect unitarity and needs to be able to be undone/run backwards.

But you can't run this program backwards, because the second operation (=5) completely erases the information from the first step (=4).
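
You can see why in matrix form (my own toy illustration): "overwrite with a constant" on a single bit sends both basis states to the same place, so it has no inverse and cannot be unitary, while a reversible gate like NOT is both.

  # My sketch: "X = constant" vs a reversible NOT, as matrices on one bit.
  import numpy as np
  SET_TO_ONE = np.array([[0, 0],
                         [1, 1]])                            # both |0> and |1> -> |1>
  NOT = np.array([[0, 1],
                  [1, 0]])                                   # reversible bit flip
  print(np.linalg.det(SET_TO_ONE))                           # 0.0: no inverse exists
  print(np.allclose(NOT.T @ NOT, np.eye(2)))                 # True: NOT is unitary
  print(np.allclose(SET_TO_ONE.T @ SET_TO_ONE, np.eye(2)))   # False: not unitary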


Classically, isn't the reverse program just (?):

  X = 5
  X = 4

I feel like 2 things are being mixed. In QM/QC, the operators must be unitary, which roughly means they "rotate" states and their inverse "rotation" exists, so (in principle) the reverse operation can be performed; so (in principle), we can construct the reverse operations, execute them, and kinda reverse the time evolution. But it doesn't mean that the quantum state (at time t) of the system encodes the time history (t'<t) of the state.


> But it doesn't mean that the quantum state (at time t) of the system encodes the time history (t'<t) of the state.

It does if you take an Everettian perspective on QM; the universal wavefunction at time T can always be unwound to time T'.


I should have been more specific, the 4 and 5 values are dynamic runtime values, not constants known ahead of time.


The article didn't really explain why an expanding universe is in any kind of conflict with unitarity. The fact that a randomly added photon would become a black hole in the past isn't convincing, since one can't just add a particle out of nowhere.


I think that’s the gist actually. Unitarity does seem to raise problems in the cosmos (information loss) but naively violating it by adding a photon gets you this ridiculous paradox of black holes turning into photons. So if you’re going to relax the unitarity assumption, you have to do it carefully. I guess this is their program with isometry.


Surely the physicists are aware of such a trivial fact.

So the photon example is probably a simplification of the real example which does sort of work.


Yes, but we’re discussing the article, not the mind of the physicist. Stating “Surely they have an answer” doesn’t give the answer.


It probes the question, "is the theory stable under small perturbation?", which is almost always a good question.

Stability here means that the effects of introducing a small perturbation are proportional to the perturbation itself. Unitarity implies this kind of stability.

Perturbation theory has been an important tool in physics for hundreds of years <https://en.wikipedia.org/wiki/Perturbation_theory#History>, and randomly adding a photon to a known universe and tracing out its effects is a perfectly reasonable application of it. It amounts to a "What if?" story but with mathematics and ideally some rigour. "What if we had a universe described by the Standard Model of Particle Physics, and the Standard Model of Cosmology, and introduced a single photon at various energies and at various times? How does that change the universe? Can we do better than just 'materialize' it out of thin air, by giving it some consistent back-story/history, no matter where we 'materialize' it and no matter what energy it has at that materialization point? Can we do all of the above within the constraints imposed by the two standard theories?"

In an expanding universe, the broad family of theories discussed in the article seem to be only semi-stable, breaking the world in one direction even given only a tiny perturbation. Small perturbations practically anywhere in the expanding universe all lead to small future effects, but under time reversal may lead to large past effects (or equivalently, our small perturbation can only have a consistent history if produced by an extreme event).

In the direction where the universe's matter gets ever thinner and colder, the small-perturbation-photon ("spp") gets redder and redder, and might travel eternally without interacting with anything else. We can effectively guarantee that by introducing our spp in the extremely distant future, when there is less than one galaxy cluster per Hubble volume. Coupled to the expansion, the spp's wavelength grows to cosmological lengths and it traverses increasingly empty space, and for all we know it could continue to evolve that way eternally, growing ever redder (wavelength increasing) and with ever decreasing probability of interaction. Even if it were to interact with some other matter, the effect would be diminishingly tiny over time.

But in the opposite timelike direction, over time the probability of interacting with something else grows. It can't fly to the eternal past because there is dense structure in the past, so the probability of interaction increases. That interaction will involve the ever hotter gamma ray that the backwards-in-time spp evolves into. Moreover, that hot gamma ray can't grow ever bluer if there is a minimum wavelength. So we've already revealed some shortcomings in the "ultraviolet completion" of the Standard Model, as long as the spp is introduced far enough into the big bang's future and allowed to travel back in time with the contraction (time-reversed expansion) of the universe.

We can also ask about whether an spp's earliest state and latest state are physically plausible.

The far future of our universe will have lots of very low frequency photons zipping through practically empty space; the spp is not physically different from them, and if we looked we could probably find a photon with a "natural" origin in the system that we perturbed with the spp such that the spp and the "natural" photon are indistinguishable. We can even start with a super-hot gamma ray photon as our spp and have a very good chance that it will simply cool down (gravitationally redden) in the future direction, eventually becoming just like other low-frequency photons. It barely matters where or when we "initially" introduce a forward-in-time-travelling gamma ray spp, so long as it has an unobstructed path to deepest space. So the spp's late times are perfectly plausible.

The early times however require us to take care in where we introduce the spp and how long its wavelength is "initially" (at the point where we introduce it). If it's very short wavelength then very soon before the "initial" introduction, the backwards-in-time travelling spp hits the wall in our understanding of the extremely-high-energy behaviour of the Standard Model. If the "initial" introduction is too far in the future then even a radio-frequency spp will blueshift so much that it will hit the wall. We don't see arbitrarily high-energy gamma rays in our sky, so our spp might be pretty easy to distinguish from any "mere" ultra-high-energy gamma ray <https://en.wikipedia.org/wiki/Ultra-high-energy_gamma_ray> produced naturally in the universe we are trying to negligibly perturb.

The article's line trying to summarize Giddings, "... its wavelength will eventually get impossibly small, concentrating its energy so greatly that the photon collapses into a black hole", captures the idea above, making an educated guess about the possibility of a minimum wavelength for a photon. But the point that wasn't captured is that a small perturbation doesn't raise any theoretical questions in one timelike direction, but very likely does in the other timelike direction. The feature associated with this "hey, did we break the entire theory by adding just a single low-energy particle?" worry is the expansion of space in one timelike direction / the contraction of space in the opposite timelike direction.

If we have a non-expanding space, then there will be no cosmological redshift, so our "initial" photon's wavelength is the same as its final wavelength, in both directions of time and out to eternity. The small perturbation photon's effects remain proportionately small everywhere and everywhen (rather than that being only the case practically everywhere in the future, and only in some places in the past).


I wonder if there is an experiment which could test this "rule"?


This is an interesting theory for an expanding universe, but it seems to imply that the universe cannot contract: in a contracting universe, the time evolution operator would have to be a dimension-reducing isometry, which doesn’t exist.

It would certainly be interesting if the true laws of quantum gravity don’t allow contraction regardless of initial conditions.


I'm not a quantum physicist, but it seems like theoreticians always resort to adding more dimensions when they can't solve a problem. In a way this sounds like string theory 2.0.

Also it might not be clear to everyone, but unitarity also implies energy conservation; I assume isometry also conserves it, otherwise it would be a pretty big ask to adopt it.


> I'm not a quantum physicist, but it seems like theoreticians always resort to adding more dimensions when they can't solve a problem.

IANAP. It doesn't sound like string theory 2.0 to me; the dimension they added to accommodate their magical photon wasn't a new dimension of spacetime; it was a new dimension in the Hilbert space that they use to describe the quantum state of the system. They explain that this is an "abstract space". I think the extra dimensions that ST requires are supposed to be actual dimensions of spacetime.

I share the doubts of the poster upthread who notes that the photon appearing on the way to Andromeda has no past, so is paradoxical already. I didn't understand that bit.


It does seem pretty weird to me. If you poured half of a cup of four-dimensional water into a second container, you could certainly have more than a cup of fluid. Why? Because you measured a four-dimensional object with a three-dimensional container. If you turned the fluid, it might take up more three-dimensional space.

Like calculating the volume of a swimming pool from a single picture, you’re never going to get it right. You can get really close if you understand the behavior of 3 dimensional space quite well. Shadows, focal lengths, etc.


Well, if you had 4d water in a 3d cup, first it would leak out.


I think I’m playing a little loose with words here. A measuring cup measures the volume of a liquid, which is three dimensions. It is a three-dimensional ruler. A ruler only measures one dimension, but it is built from an object, and that object has three dimensions.

We (believe we) can only experience three dimensions plus time. So if the cup is four dimensional it may have topological holes we can’t see and yes, it will leak. Or it may be bigger or smaller on the inside, and we can put less or more of the fluid into the cup than we expect.


> unitarity also implies energy conservation

That’s not correct. It only implies that the Hamiltonian is Hermitian, and thus that any measured value for energy is a real number. As the article states, unitarity only implies conservation of information.

Noether’s theorem states that the time-translational invariance of the laws of physics implies the conservation of energy. Interestingly, this does not hold in general relativity.
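
A quick numerical check of that statement (random toy example of mine): a Hermitian H has real eigenvalues (the possible measured energies), and exp(-iHt) built from it is unitary.

  # My sketch: Hermitian H -> real eigenvalues and unitary exp(-iHt).
  import numpy as np
  from scipy.linalg import expm
  rng = np.random.default_rng(0)
  A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
  H = (A + A.conj().T) / 2                                 # make it Hermitian
  print(np.allclose(np.linalg.eigvals(H).imag, 0))         # True: real energies
  U = expm(-1j * H * 0.7)
  print(np.allclose(U.conj().T @ U, np.eye(3)))            # True: unitary evolution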


Ok I stand corrected (I'm just a lowly experimentalist ;). Is it correct to say that in a time-invariant system a unitary transformation conserves energy?


I have a very similar feeling when I read about quantum superposition. It feels like a super theoretical theory. If you don't measure something it can be in either state, but if you measure it, it has a definite state, just like the entangled other piece you did not measure.

Knowing nothing in-depth about this topic, it seems like a philosophical problem. The cat is dead and alive because you just don't know it. But it surely is one or the other, isn't it?

edit: maybe I'm confusing superposition and entanglement. I thought of these as similar problems in physics.


The principle of superposition is that any addition of quantum states is also a valid quantum state. For example, you can have a system that has a state which is both "the particle is at x = 1m" and "the particle is at x = 2m". What really happens physically when you measure the position of the particle is not specified by the theory, but the Copenhagen interpretation says that one of those positions will be chosen depending on the coefficient in front of each state, e.g. if we have a state |psi> = 1/sqrt(2) |x = 1m> + 1/sqrt(2) |x = 2m> then we get either 1m or 2m with 50/50 probability. Then if we measure 1m we will find that afterwards the state will be |psi> = |x = 1m>. Again, what actually happens during this process is not well understood by physics yet, and also is one of the things that differentiates different interpretations of quantum mechanics, so yes that is "philosophy".
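
If it helps, here is the Born-rule bookkeeping for that example as a tiny sketch of my own (outcomes in metres):

  # My toy example: sampling a measurement outcome for the state above.
  import numpy as np
  amps = np.array([1/np.sqrt(2), 1/np.sqrt(2)])    # coefficients of |x=1m>, |x=2m>
  probs = np.abs(amps)**2                          # Born rule: 50/50
  rng = np.random.default_rng()
  outcome = rng.choice([1.0, 2.0], p=probs)        # one run "collapses" to one value
  print(probs, outcome)                            # [0.5 0.5] and either 1.0 or 2.0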

> The cat is dead and alive because you just don't know it. But it surely is one or the other, isn't it?

The Copenhagen interpretation says it is both dead and alive, or that the above particle really was in two places at once. Other interpretations, such as the Many Worlds Interpretation (MWI), say different things. For example, MWI says that upon measurement the universe actually branches into many universes, one for each possible outcome, and that we live in one such universe (the universe with a living cat, for example), but that the state itself is not ontologically a mixture of those two orthogonal possibilities.

Entanglement is a very different beast. Basically it means that two systems interact such that measurements on them are highly correlated, i.e. by measuring something on system A you immediately know something about system B, even if they are very far apart. Again, how you interpret entanglement (i.e. whether or not "spooky action at a distance" actually happens) depends on your interpretation of quantum mechanics.


It's not really about "measurement" but "interaction with the broader reality".


No, certain interpretations (such as the Many Worlds Interpretation (MWI)) say that "measurement" is just "interaction with the broader reality". Copenhagen says that "measurement" is a primitive feature of the theory that cannot be explained via such interactions. I.e. the Copenhagen interpretation says that Quantum Mechanics cannot describe the physics of the photodiode entirely, since it cannot describe the measurement process. MWI says that measurement occurs due to decohering interactions with the environment.


My go-to ELI5 explanation for entanglement: suppose you take an object, cut it along a non-symmetry axis (e.g. cut a bill), and place both halves in different envelopes. Upon opening one envelope (measuring) you immediately know what is in the other envelope. Not because of "spooky action at a distance", but rather due to prior knowledge of the system[s].


That is not correct though, at least not in the Copenhagen interpretation.

The reason is that the quantum state is supposed to be a complete representation of the system right now. It doesn't quantify our ignorance; it should actually correspond completely to the physical state of the system (check the PBR theorem for a strong justification of this).

Therefore, according to Copenhagen, the state really is collapsed for the other system upon measurement of the first. There is spooky action at a distance, because the entangled state does not encode our lack of information or ignorance, it encodes an actual physical thing.

The most common example is two entangled spin systems A and B in an initial state:

|psi> = 1/sqrt(2) (|upA upB> + |downA downB>)

if you then measure the spin of A and find it to be |upA>, you then know system B is in |upB>

BUT it wasn't |upB> all along, because |psi> before measurement is the complete physical state of the quantum system. |psi> is not an epistemological construct, it is an ontological one. It does not correspond to our lack of knowledge of the system, but the actual physical system.
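
Here is the same example as a small numerical sketch of my own, writing |psi> in the basis {|upA upB>, |upA downB>, |downA upB>, |downA downB>} and projecting after finding A "up":

  # My toy example: measuring A in the entangled state above.
  import numpy as np
  psi = np.array([1/np.sqrt(2), 0.0, 0.0, 1/np.sqrt(2)])
  p_A_up = np.sum(np.abs(psi[:2])**2)           # probability A is "up": 0.5
  post = np.concatenate([psi[:2], [0.0, 0.0]])  # project onto the "A up" outcomes...
  post /= np.linalg.norm(post)                  # ...and renormalise
  print(p_A_up, post)                           # 0.5, post = |upA upB>: B is now "up"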


So god doesn't follow best practices and uses global interdependent variables. Maybe our universe was a PoC and Universe 2.0 is going to be a well-refactored version.


Not quite; entanglement is indeed spooky action at a distance, given that Bell’s inequality theorem showed that it can’t be explained by local hidden variables (prior knowledge).


Also not quite, since that depends on your interpretation of quantum mechanics. For example, in the Many Worlds Interpretation there is no spooky action at a distance, since there is no wavefunction collapse in the first place, i.e. measurement does not cause a projection operator to occur, but instead occurs through unitary decoherence. Therefore, no spooky action at a distance (although it is still non-local).


Yeah, each valid interpretation is consistent with the observations, but each is weird in its own way (usually some form of non-local, sometimes more out there like superdeterminism).


As a non-physicist, I found Jeffrey A. Barrett's The Conceptual Foundations of Quantum Mechanics (Oxford UP, 2019) quite interesting. It focuses on laying out the experimentally confirmed data that basically any theory and interpretation of QM must take into account and "explain" (unless there is something wrong with the experiments or something groundbreakingly new is found). It's understandable by laymen. However, at least my experience was that the more I read about QM, the weirder it appeared to me and the harder it was for me to grasp it conceptually.


Feynman: “The difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, "But how can it be like that?" which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. But nature is not classical, dammit, the imagination of nature is far greater than the imagination of man.”


So is the rule or the universe off? Or is there something else missing to a more complete description of the quantum universe?


The fine article argues that it's the rule that's off - that the universe that exists doesn't actually do unitarity, because of the expanding universe.

Well, you know, given that the problem is reconciling QM with general relativity, this seems like a very interesting direction to pursue.


No, it’s clearly the universe that is wrong. :P


Works on my universe!


I'm not a physicist but how much of modern physics is real and not just math?


Note that weird stuff is overrepresented in press articles. This is a cutting edge theory, nobody is sure if it's correct or it's a silly speculation. But it has some weird angles that make clicky press articles.

Most physics is boring and technical.

In particular, there are a lot of applications of Quantum Mechanics to Chemistry and Solid State Physics (for example transistors). Even weird stuff like superposition changes the properties of small molecules that are easy to test in the lab and to simulate in computers.

And there is even weirder stuff that mixes Quantum Mechanics and Special Relativity, and it has effects in Chemistry that are easy to test, and in Particle Colliders, where it is also heavily tested.

The problem is trying to mix Quantum Mechanics and General Relativity. We don't know how to do that, but (un)luckily in (almost) all experiments you can choose to use the QM+SR approximation or the SR+GR approximation.


I'm a postdoc in theoretical quantum optics. The answer is: both all of it and none of it. All of it is real in the sense that Quantum Mechanics has been aggressively and extensively tested against measured observations of reality and has not yet found a discrepancy with it. None of it is real in the sense that it is all a mathematical model that is supposed to make predictions about reality.


> Quantum Mechanics has been aggressively and extensively tested against measured observations of reality and has not yet found a discrepancy with it

That isn't true. Here's a discrepancy: the universe appears to be following the laws of general relativity, but quantum mechanics does not account for this and returns wrong answers.

Here's another discrepancy: quantum field theory predicts an energy density of the vacuum off by 120 orders of magnitude of the observed value.

"All models are wrong, but some are useful." But it's disingenuous to claim we haven't even found any discrepancies and that it appears to be a perfect model.


Sorry, I meant only non-relativistic quantum mechanics and quantum optics. I don't know anything about QFT or GR


Interesting. So there are mathematical systems that describe reality extremely well, and this is testable. There's beauty in that, like the sunflower and Fibonacci. I was looking for something to rekindle my interest in math and this might be it.


Quantum mechanics has already been falsified. Good explanation here: http://backreaction.blogspot.com/2022/05/chaos-real-problem-...


This is a good point, but that is more a consequence of the fact that we do not yet know how to recover classical mechanics from quantum mechanics.




