You thought quantum mechanics was weird: check out entangled time (aeon.co)



I can't help but feel that it is very likely our ancestors (if we make it long enough) will look back on the limited understanding we have at this point in time regarding spooky action at a distance, the theory of everything, general relativity meets quantum mechanics, etc. and be astounded for a number of reasons. The biggest reasons being 1) that everyone could be missing something so obvious while at the same time 2) going down such wrong but entertaining rabbit holes.


There is nothing obvious about quantum mechanics; those who say they fully understand it intuitively are probably the first who don't. The main answers we have about it are the results of math, not of anything intuitively obvious, so there is very little chance we'll get answers intuitively when nothing we have so far came by that route.

tl;dr: we are all bozos when it comes to Quantum Mechanics.


I think the post you replied to implies there must be a much simpler explanation for this stuff.


You mean descendants, not ancestors.


Unless there are loops, in which case you're both right.


So there is no time; we just live in a constant moment which changes according to our measurements. A "Groundhog Day" with different mornings.


Sometimes this worries me. Am I waking up in the same reality/dream that I slept in last night? It looks like everything is dependent on how you measure it.


Movie idea: Groundhog Day, but everyone remembers reliving the same day each day. Which raises the question: how different is that from our everyday lives, other than changes in the environment?


So poor Billy Pilgrim was right all along?


So is this article implying spooky action at a distance also applies to time? Time does relate to distance through velocity, correct?


Somewhat related, there is also empty-space entanglement https://www.youtube.com/watch?v=lH-3bFqtJjg&feature=youtu.be...


time is a psychic measurement of action; it doesn't exist by itself


Wait, so then why can't information travel faster than the speed of light, which is a value that depends on time existing? You can't measure signal/information propagation faster than c, and c is of the form dx/dt, even if you do that trick where you just set c = 1, right? ...is there no dt?


I don't understand why physics is spooked out by entanglement. It is just a correlation-- and if we want to have any interesting patterns in the universe at all, we are going to have correlations.

All this seems to be saying is that there can be an entanglement in the present with something in the past. And to me, that seems equivalent to saying that the past and present are correlated, which seems boringly trivial.

Am I missing something?


It's not correlation - Bell inequalities show that local hidden variables don't explain the correlations: i.e., no pre-set correlated variable exists, because states are decided at measurement (or wavefunction collapse) time. This is why entanglement is interesting.
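
To make that concrete, here's a minimal numpy sketch (my own, not from the article or the thread) of the standard CHSH test: the singlet state's correlations sum to 2*sqrt(2), above the bound of 2 that any local-hidden-variable model can reach.

    import numpy as np

    # Spin measurement along angle theta in the x-z plane:
    # cos(theta)*sigma_z + sin(theta)*sigma_x
    def spin(theta):
        return np.array([[np.cos(theta),  np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])

    # Singlet state (|01> - |10>)/sqrt(2)
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

    # Correlation <psi| A(a) x B(b) |psi>; for the singlet this equals -cos(a-b)
    def E(a, b):
        return psi @ np.kron(spin(a), spin(b)) @ psi

    # Standard CHSH measurement angles
    a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
    print(S)  # -2.828..., i.e. |S| = 2*sqrt(2) > 2, the classical bound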


Sure, "no hidden variables" is interesting/weird. And interference is interesting/weird. And both of those things happen independently from entanglement.

But entanglement itself seems very not-weird to me once you buy into those other things.


What you're on to is that all of the big popular examples of "quantum weirdness" (uncertainty, entanglement, tunneling) emerge from a very small and elegant set of axiomatic postulates. That's the beauty of physics.

One such group of postulates that chemists like to use can be found at http://vergil.chemistry.gatech.edu/notes/quantrev/node20.htm... . You can probably beat the elegance of those if you're willing to step further from what's actually used in calculations, but that's not what chemists like to do.

Here's a better shot at being axiomatic and elegant, but it's a lot less clear (from a lay-perspective) how they relate to reality: https://en.wikipedia.org/wiki/Dirac%E2%80%93von_Neumann_axio...
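
For reference, the Dirac-von Neumann axioms themselves are compact enough to paraphrase (my condensed wording of that article):

    1. States: a physical system corresponds to a unit vector |psi> in a complex Hilbert space H.
    2. Observables: each measurable quantity corresponds to a self-adjoint operator A on H.
    3. Expectation: the expected value of A in state |psi> is <psi|A|psi>.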


I find Lucien Hardy's discussion enlightening:

Quantum Theory From Five Reasonable Axioms

https://arxiv.org/abs/quant-ph/0101012


Yes, but the time-displaced version is more like cause and effect. Non-local correlations between variables separated in time are a generic feature of classical mechanics. Frankly, I think this paper exploits hype and naivete, but what do I know - I only have a PNAS paper on quantum dynamics.


As you're the only person so far on this thread who seems to know about this: is my understanding correct that the non-locality of entanglement arises because all the entangled entities are described by a single Hamiltonian, which means the energy expressed in each of their degrees of freedom must sum to zero, so that only certain combinations of measurements of the system as a whole are possible?


If I read it correctly, the implication is that the present affects the past. If photon 1 died a billion years ago (without being 'measured' by a sentient observer) - e.g. by just hitting a wall - its polarization remained undefined until we measured photon 4 today. This is not very intuitive. The polarization of photon 1 could, in principle, cause a galactic catastrophe, and by measuring photon 4 today you either trigger this catastrophe (a billion years ago!) or you don't - depending on the results of your measurement.
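
For anyone who wants to poke at the mechanics, here is a toy numpy sketch of the entanglement-swapping step the article rests on (my own simplification: I keep only one outcome of the Bell measurement on photons 2 and 3, and ignore timing entirely):

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
    B = np.outer(bell, bell)                        # projector onto that Bell state
    I2 = np.eye(2)

    # Photons ordered 1,2,3,4: pair 1-2 entangled, pair 3-4 entangled
    state = np.kron(bell, bell)

    # Bell-state measurement on photons 2 and 3 (adjacent in this ordering)
    post = np.kron(np.kron(I2, B), I2) @ state
    post /= np.linalg.norm(post)                    # renormalize; this outcome has probability 1/4

    # Reduced state of photons 1 and 4: trace out photons 2 and 3
    rho = np.outer(post, post).reshape([2] * 8)     # axes (1,2,3,4,1',2',3',4')
    rho14 = np.einsum('iabjkabl->ijkl', rho).reshape(4, 4)

    print(np.abs(rho14 @ rho14 - rho14).max())      # ~0: photons 1 and 4 are in a pure state
    print(np.round(rho14 @ bell, 3))                # = bell: they end up entangled, yet never met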


Sentient observers have got nothing to do with that – interaction with any macroscopic system will destroy the entanglement.

If that macroscopic system (a galaxy that might have a catastrophe triggered or not) is causally connected to us, then its destiny doesn't depend on the measurement, because its state has already been "decohered" with us. (That is, the information about the other photon has already indirectly entangled with us.) If it isn't, then it's still entangled, and its destiny indeed does depend on the measurement - it's a Schrödinger's galaxy.


> interaction with any macroscopic system will destroy the entanglement.

How do you know that? I'm not aware of any proof of this statement.


There isn't any proof - of course - because any non-sentient macroscopic system couldn't testify to this for us. Otherwise we would call it sentient.

However, I have the premise here that sentient systems are fundamentally similar to non-sentient systems. That is, there isn't any "special sauce" - such as a soul or midi-chlorians - that causes sentient systems to behave differently on the micro-physical level. Instead, sentience is an emergent property that shouldn't have an effect on the "low-level" details of how the universe works. This basically comes back to Occam's razor, as the other comment said.


> That is, there isn't any "special sauce"

Well, this is a big claim. In fact, no one knows. My premise is that there is special sauce, but we know nothing about it at the moment. In any case, it's just my word against your word :)


To elaborate: I don't claim to know whether there is or isn't anything like that, and I'm not interested in arguing about it here.

What I am saying is that if a phenomenon is explainable with less "special casing", we should buy into that explanation rather than a more complicated one. Quantum physics is explainable without the concept of sentience; it's just the ABCs of the philosophy of science that we shouldn't include anything more in our theory than is needed.


I don't feel qualified to judge whether it's explainable or not. But two guys familiar with the matter believe it is not :) Not that I'm appealing to authority; it's just to demonstrate that the jury is still out. https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Wigner_int...


When photon hit rock, rock measure photon. Quantum of energy. Wave function collapse. Human brain not needed. Earth also not flat.


We generally cannot prove things about physics in a mathematical sense, but assuming that our understanding of quantum mechanics is correct, every interaction of an entangled particle causes its entanglement to "spread" to the interacting object. That's just a property of the math involved in describing quantum systems.

This means that once you have macroscopic interaction, the entanglement basically starts to spread at the speed of light, i.e. it'll quickly spread to humans.

Once you are entangled with something, you can no longer observe the effects of that entanglement (you are no longer an external observer), and it looks as if the entanglement was destroyed.
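
A toy numpy illustration of that spreading (my own sketch, with a single CNOT standing in for "interaction with the environment"):

    import numpy as np

    # Bell pair on qubits A,B; one "environment" qubit E starts in |0>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
    state = np.kron(bell, [1, 0])                   # qubit order: A, B, E

    # Interaction: CNOT with B as control and E as target (E "records" B)
    cnot_be = np.eye(8)
    for i in (2, 6):                                # basis states with B = 1
        cnot_be[[i, i + 1]] = cnot_be[[i + 1, i]]   # flip E
    state = cnot_be @ state                         # now (|000> + |111>)/sqrt(2)

    # Reduced state of A,B after tracing out the environment
    rho = np.outer(state, state).reshape([2] * 6)   # axes (A,B,E,A',B',E')
    rho_ab = np.einsum('ijklmk->ijlm', rho).reshape(4, 4)

    print(np.round(rho_ab, 3))                      # diag(0.5, 0, 0, 0.5): a classical mixture
    print(np.trace(rho_ab @ rho_ab))                # purity 0.5 < 1: to A,B it looks "collapsed"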


Are you aware of some proof that it is specific to sentient entities? If not, Occam's razor implies we should assume it is any macroscopic entity.


Can you present a proof that hasn’t been created by a sentient entity?

Edit: of course you can, but only in the formal-systems sense, which is relatively trivial.

What we think of as “proof” in physics is a combination of “this belief creates predictable experiences” and “this abstraction of belief is logically consistent.”

The first part is subjective - our beliefs about physics create predictable experiences for us, but we have no idea how large the entire domain of possible experiences is.

The second part is a tar pit full of bones, haunted by undefinability theorems.

https://en.m.wikipedia.org/wiki/Tarski%27s_undefinability_th...

So any suggestion that physics efficiently reveals the secret mechanisms of the universe is a conceit. Physics reveals what we’re able to deduce about the universe given the biases built into our experience of sentience.

When weird things fall out of quantum theory, this makes it hard to know if the weirdness is truly out there, or if it’s an artifact of our limited sentience.


You didn't read it correctly. They measure photon 1 before photon 3 or 4 even exist.


They measured photon 1 because they wanted to prove correlation (it would be impossible to do without measuring it). If you believe they proved it, it works both ways. Had they not measured photon 1, our measurement of photon 4 would determine the polarization of photon 1 a billion years ago. Entanglement works both ways, according to the theory.


Not just in theory! Isn't this what Alain Aspect tested?


Yes, you're missing something. The idea that an event in one place can influence an event in another place, even though it's not possible that information traveled from the first place to the second, is spooky. And the idea that this can be the case even if the associated objects never coexisted in time is anything but "boringly trivial".


I have a red ball and a blue ball. I flip a coin, and if it's heads I give you the red ball, and Alice the blue ball. If it's tails, the reverse. Alice goes off to Alpha Centauri, gives her ball to someone else, dies, the ball gets sent to Andromeda, whatever. When you open your box and see a red ball, you instantly know Alice's was blue, and no information has traveled from Andromeda. Not spooky.

But of course we have no hidden variables, so instead of flipping a coin, we fork the universe. In one universe you get a red ball, in another Alice gets it. In either universe, when you open the box, you know what color Alice's ball is in your universe. Not spooky. Still not spooky if Alice lived a billion years ago.
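
The classical half of this really is that boring - a quick simulation of the ball protocol (my sketch; nothing quantum in it):

    import random

    def trial():
        # Local hidden variable: one coin flip fixes both colours up front
        return random.choice([("red", "blue"), ("blue", "red")])

    results = [trial() for _ in range(100_000)]
    assert all(mine != alices for mine, alices in results)  # perfect anticorrelation
    print(sum(mine == "red" for mine, _ in results) / len(results))  # ~0.5 marginal

Bell's point, as the replies note, is that no pre-made assignment like this can reproduce the angle-dependent quantum statistics.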


If you watch for more than 4 minutes, this 9 minute video explains how Bell proved that the particles can't have been given opposite "ball colors" at the time they became entangled.

https://www.youtube.com/watch?v=ZuvK-od647c

It's quite clever how he proved it, and that it's actually possible to rule out "information that we're just not aware of".


The problem with the glove analogy (i.e. having a local hidden variable) is that it cannot explain quantum entanglement. Bell's theorem shows that entangled systems are correlated in a strictly stronger way. Quantum indeterminacy is different than the classical "we don't know which glove is in your pocket but we know they're opposites" sort of uncertainty.
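
One way to see "strictly stronger" numerically: compare a toy local-hidden-variable model (each particle carries a shared random angle lambda and answers deterministically from it - one illustrative LHV, not the only kind Bell rules out) against the singlet prediction -cos(theta):

    import numpy as np

    rng = np.random.default_rng(0)
    lam = rng.uniform(0, 2 * np.pi, 500_000)        # shared hidden variable

    def lhv_corr(a, b):
        # Each side answers +/-1 from lambda alone, anticorrelated at equal angles
        return (np.sign(np.cos(a - lam)) * -np.sign(np.cos(b - lam))).mean()

    theta = np.pi / 4
    print(lhv_corr(0.0, theta))   # ~ -0.5, the linear -(1 - 2*theta/pi)
    print(-np.cos(theta))         # ~ -0.707, the quantum prediction

At intermediate angles this toy model tops out at -0.5 where quantum mechanics gives about -0.707, and Bell's theorem says no local model can close that gap.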


The difference is that the same experiment in quantum terms would have you look at the ball through a colour filter and determine the probability of a range of possibilities (and therefore of the other ball, wherever it is). But then the statistics show that the particular filter you choose changes the probability of what colour ball you have - and therefore of the distant ball as well.


Ah, thanks for your comment. I also thought the measurement was just revealing an underlying property. But now, if I understand correctly, the method of measurement is biasing the results, and the other particle will still be correlated with the biased result. So the way we measure the first particle has an impact on the value of the second one.


Precisely! This has been pretty extensively tested - https://en.wikipedia.org/wiki/Bell_test_experiments - and is true even at distances and within timescales where a photon couldn't travel between the two entangled particles (i.e. to send that information)


This analogy doesn't work because the blue ball was blue the entire time (and vice versa with the red ball).


Not in the second forked-universe version, which is isomorphic to Everett, which is isomorphic to any other formulation of QM.


Step away from balls to start with; particle-paradigm analogies are tricky because the balls are sometimes-red, sometimes-blue (depending on an oscillating internal state), sometimes not even there, and you only know, when you shake a ball that is there, what its colour is (which is really the only reliable way to interrogate its oscillating internal state, but the shaking also disturbs that internal state enough that the oscillation pattern becomes different).

You can slice a field view into a series of successive fields related by time, and then decompose each such (sub)field into a particle view. This has some advantages calculationally, but probably obscures understanding.

Choose a timelike path through an all-spacetime-permeating ternary field, and consider the value at each point along that path. Most often it's 0, sometimes it's 1, sometimes it's -1. In practice you'll want to have the sharp end of a detector at various points along the worldline, to sample the field values. Set up a second detector along a reasonably parallel timelike path. Each records its proper time when a non-zero value is detected, and a system like Poincaré-Einstein synchronization is used to compare the records.

Add to this two-detector setup a perturber that also records its emissions such that comparisons can be made among the three parties.

For starters, our perturber generates a predictable periodic wave radiated in such a way that the two detectors agree that they see 0->1, 1->0, 0->-1, -1->0, ... wavelike transitions strongly timestamp-correlated with the perturber's activities. Note that you can experimentally choose to anti-correlate the values at any given agreed timestamp by sampling the field on a slightly different worldline through the field (e.g. by a translation (i.e. linear movement) operation on one of the detectors): instead of a 1:1 correlation, the detectors, having agreed on timestamps, can be set up so that the timestamps on the 1s at A are the same as the timestamps on the -1s at B. Intuitively this is just an experimenter deliberately choosing to measure different parts of a continuous wave's phase.

Once the timestamped records are demonstrated to align reliably, the perturber does something different. For example, it arranges a particle pair-production (e.g. by radionuclide decay). The pairs must differ by a change of sign in some measurable value like charge or spin. Because the produced pairs fly off in random directions, our two detectors will mostly see lots of 0 values. However, from time to time each of the detectors will record a +1 or a -1, and occasionally they both will record a nonzero that will be timestamp-correlated just as the waves were.

(The waves there are just large numbers of correlated field values, rather than single correlated values: you can get to the latter by turning the perturber's generation rate way down from "effectively continuous" to "just occasional" -- as an example, one could start with a large amount of radioactive substance, and then remove more and more of it until we are into low numbers of Becquerels).

The weirdness enters into things when one drops the perturbation rate way down: when there is a timestamp-correlated dual detection, with extremely high probability they have opposite signs at the two detectors. This remarkable distribution remains even as one attempts to measure different parts of the phase of the perturbation. Whereas in the large-numbers perturbations one could move the detector and get 1:1 correlations instead of 1:-1 correlations (or indeed get 1:0 or -1:0 as well as all the 0:0, so the correlations vanish), in the single-number case, one cannot: you can get 1:0 or -1:0 by moving the detector around, but you can't get a strong distribution of 1:1 or -1:-1 correlations. (You also can't move a detector around to guarantee that it will (practically) always see 0 or 1 and (practically) never -1, even though you can do this in the large-numbers case: a detector will always see an essentially random mix of +1 and -1 (and lots and lots of zeroes)).

It's hard not to throw dimensions away (and impose coordinate conditions) when considering the system, so embrace it and consider the 1-d picture with "cells" that contain a ternary value.

The continuous wave picture looks essentially like:

<-...0-0+0-0+0...perturber...0-0+0-0+0...->

and we just choose points on this line to look at.

However, when we start looking at individual pairs periodically emitted along this 1-d line (or equivalently, deliberately aligning this line to a known trajectory of a pair), our observations instead lead to a picture that looks like this:

<[0-+][0-+][0-+][0-+]...perturber...[0-+][0-+][0-+][0-+]>

When we put our detector at any point on the line, we have some probability of measuring one of the three values. When we put a detector near the perturber on the left, we have that same probability distribution; a second detector a bit further from the perturber and still on the left experiences a highly similar distribution. But if the second detector is on the right, then one of the options is "eaten"! Instead of an equal probability of measuring + or - (instead of 0), if the left-closer detector measures +, then the right-further detector's probability of detecting + or - (vs 0) collapses into a similar probability of detecting -; there is essentially zero chance of the right-further detector measuring +. Moving things so that it's right-closer instead of right-further, the (unmoved) left detector is the one whose probability distribution collapses from three options to two.


Everyone keeps coming back to this point, but I maintain that just because there’s no way that we know of (or that makes sense in the framework of our understanding) for information to travel between the two, it absolutely does not mean information _did not_ travel between the two. It’s the simpler explanation, if you don’t let hubris get in the way.


It's more than that: it can be proven that information could not have traveled between the two points, so it's not that we're missing some communication method - we can show there is no intermediate communication method at all.

See the MIT introduction lecture on Quantum Physics:

https://ocw.mit.edu/courses/physics/8-04-quantum-physics-i-s...

I believe the experiment where this is shown is in lecture 1, possibly 2. It's been a while since I watched it, but it's a great source of information.
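
The textbook version of that no-signalling argument is short enough to sketch in numpy (my own toy version, not the lecture's demo): whichever basis Alice measures in, Bob's local statistics don't change.

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2), qubits A then B

    def proj(theta, outcome):
        # Projector onto Alice's outcome for a measurement along angle theta
        v = [np.cos(theta / 2), np.sin(theta / 2)]
        if outcome == 1:
            v = [-np.sin(theta / 2), np.cos(theta / 2)]
        return np.outer(v, v)

    def bob_state(theta):
        # Bob's density matrix after Alice measures, averaged over her outcomes
        rho_b = np.zeros((2, 2))
        for outcome in (0, 1):
            post = np.kron(proj(theta, outcome), np.eye(2)) @ bell
            rho = np.outer(post, post)              # unnormalized: weight = outcome probability
            rho_b += rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
        return rho_b

    for theta in (0.0, np.pi / 3, np.pi / 2):
        print(np.round(bob_state(theta), 6))        # always I/2: Alice can't signal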


To further clarify, any communication would have to be superluminal. Sure there are people who claim that there's some unobservable superluminal background to the universe that makes non-local hidden variable theories possible but I don't know that I'd call that the simpler explanation.


I was thinking more along the lines of undetected hyper surfaces or wormholes.


Presumably this could have some significant implications with regard to quantum computing.



