tl;dr: we're just bozos when it comes to Quantum Mechanics.
All this seems to be saying is that there can be an entanglement in the present with something in the past. And to me, that seems equivalent to saying that the past and present are correlated, which seems boringly trivial.
Am I missing something?
But entanglement itself seems very not-weird to me once you buy into those other things.
One such group of postulates that chemists like to use can be found at http://vergil.chemistry.gatech.edu/notes/quantrev/node20.htm... . You can probably beat the elegance of those if you're willing to step further from what's actually used in calculations, but that's not what chemists like to do.
Here's a better shot at being axiomatic and elegant, but it's a lot less clear (from a lay-perspective) how they relate to reality: https://en.wikipedia.org/wiki/Dirac%E2%80%93von_Neumann_axio...
Quantum Theory From Five Reasonable Axioms
If that macroscopic system (a galaxy that might have a catastrophe triggered or not) is causally connected to us, then its destiny doesn't depend on the measurement, because its state has already been "decohered" with us. (That is, the information about the other photon has already indirectly entangled with us.) If it isn't, then it's still entangled, and it indeed does depend on the measurement – it's a Schrödinger's galaxy.
How do you know it? I'm not aware of any proof of this statement.
However, I have the premise here that sentient systems are fundamentally similar to non-sentient systems. That is, there isn't any "special sauce" – such as a soul or midi-chlorians – that causes sentient systems to behave differently at the micro-physical level. Instead, sentience is an emergent property that shouldn't have an effect on the "low-level" details of how the universe works. This comes basically back to Occam's razor, like the other comment said.
Well, this is a big claim. In fact, no one knows. My premise is that there is special sauce, but we know nothing about it at the moment. In any case, it's just my word against your word :)
What I am saying is that if a phenomenon is explainable with less "special casing", we should buy into that explanation rather than a more complicated one. Quantum physics is explainable without the concept of sentience; it's just the ABCs of the philosophy of science that we shouldn't try to include anything more in our theory than is needed.
This means that once you have macroscopic interaction, the entanglement basically starts to spread at the speed of light, i.e. it'll quickly spread to humans.
Once you are entangled with something, you can no longer observe the effects of that entanglement (you are no longer an external observer), and it looks as if the entanglement was destroyed.
Edit: of course you can, but only in the formal systems sense, which is relatively trivial.
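One standard way to make the "looks as if the entanglement was destroyed" point concrete is the partial trace. Here is a minimal numpy sketch (purely illustrative, not anyone's derivation in this thread): the reduced state of one half of a maximally entangled pair is an ordinary 50/50 mixture, so nothing entangled-looking remains visible locally.

    import numpy as np

    # Bell state |Phi+> = (|00> + |11>)/sqrt(2): two systems maximally entangled.
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho = np.outer(phi, phi)              # density matrix of the joint system

    # Trace out the other party (partial trace over the second subsystem).
    rho_a = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))
    print(rho_a)                          # 0.5 * identity: a plain classical mixture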
What we think of as “proof” in physics is a combination of “this belief creates predictable experiences” and “this abstraction of belief is logically consistent.”
The first part is subjective - our beliefs about physics create predictable experiences for us, but we have no idea how large the entire domain of possible experiences is.
The second part is a tar pit full of bones, haunted by undefinability theorems.
So any suggestion that physics efficiently reveals the secret mechanisms of the universe is a conceit. Physics reveals what we’re able to deduce about the universe given the biases built into our experience of sentience.
When weird things fall out of quantum theory, this makes it hard to know if the weirdness is truly out there, or if it’s an artifact of our limited sentience.
But of course we have no hidden variables, so instead of flipping a coin, we fork the universe. In one universe you get a red ball, in another Alice gets it. In either universe, when you open the box, you know what color Alice's ball is in your universe. Not spooky. Still not spooky if Alice lived a billion years ago.
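If it helps, here's that "fork the universe" bookkeeping as a tiny, purely illustrative Python sketch (the branch dictionaries are made up for this example): two branches, perfect anti-correlation inside each, and "learning Alice's colour" is just reading off your own branch.

    # Each "branch" is one possible world; within each, the balls are anti-correlated.
    branches = [
        {"mine": "red",  "alices": "blue"},
        {"mine": "blue", "alices": "red"},
    ]

    for world in branches:
        seen = world["mine"]                     # I open my box in this branch
        inferred = "blue" if seen == "red" else "red"
        assert inferred == world["alices"]       # no signal to Alice, just bookkeeping
        print(f"In this branch I see {seen}, so Alice's ball is {inferred}.")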
It's quite clever how he proved it, and that it's actually possible to rule out "information that we're just not aware of".
You can slice a field view into a series of successive fields related by time, and then decompose each such (sub)field into a particle view. This has some computational advantages, but probably obscures understanding.
Choose a timelike path through an all-spacetime-permeating ternary field, and consider the value at each point along that path. Most often it's 0, sometimes it's 1, sometimes it's -1. In practice you'll want to have the sharp end of a detector at various points along the worldline, to sample the field values. Set up a second detector along a reasonably parallel timelike path. Each records its proper time when a non-zero value is detected, and a system like Poincaré-Einstein synchronization is used to compare the records.
Add to this two-detector setup a perturber that also records its emissions such that comparisons can be made among the three parties.
For starters, our perturber generates a predictable periodic wave, radiated in such a way that the two detectors agree that they see 0->1, 1->0, 0->-1, -1->0, ... wavelike transitions strongly timestamp-correlated with the perturber's activities. Note that you can experimentally choose to anti-correlate the values at any given agreed timestamp by sampling the field on a slightly different worldline through it (e.g. by a translation, i.e. linear movement, of one of the detectors): instead of a 1:1 correlation, the detectors, having agreed on timestamps, can be set up so that the timestamps on the 1s at A are the same as the timestamps on the -1s at B. Intuitively this is just an experimenter deliberately choosing to measure different parts of a continuous wave's phase.
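A toy numerical sketch of that phase-shift intuition (plain Python; the ternary field function, wavelength, and threshold are all made up and not meant to be physically accurate): moving one detector by half a wavelength turns the 1:1 alignment of the records into 1:-1.

    import math

    def field(t, x, wavelength=1.0, period=1.0):
        """Toy ternary field: a travelling wave quantized to {-1, 0, +1}."""
        v = math.sin(2 * math.pi * (t / period - x / wavelength))
        if v > 0.5:
            return 1
        if v < -0.5:
            return -1
        return 0

    timestamps = [i * 0.05 for i in range(200)]

    # Detector A and detector B at the same position: records line up 1:1.
    a = [field(t, 0.0) for t in timestamps]
    b_same = [field(t, 0.0) for t in timestamps]

    # Translate B by half a wavelength: the 1s at A now coincide with -1s at B.
    b_shifted = [field(t, 0.5) for t in timestamps]

    both = [(x, y) for x, y in zip(a, b_same) if x and y]
    print("same position:   1:1 correlation:", all(x == y for x, y in both))
    both = [(x, y) for x, y in zip(a, b_shifted) if x and y]
    print("half-wave offset: 1:-1 correlation:", all(x == -y for x, y in both))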
Once the timestamped records are demonstrated to align reliably, the perturber does something different. For example, it arranges a particle pair-production (e.g. by radionuclide decay). The pairs must differ by a change of sign in some measurable value like charge or spin. Because the produced pairs fly off in random directions, our two detectors will mostly see lots of 0 values. However, from time to time each of the detectors will record a +1 or a -1, and occasionally both will record a nonzero value that is timestamp-correlated just as the waves were.
(The waves there are just large numbers of correlated field values, rather than single correlated values: you can get to the latter by turning the perturber's generation rate way down from "effectively continuous" to "just occasional" -- as an example, one could start with a large amount of radioactive substance, and then remove more and more of it until we are into low numbers of Becquerels).
The weirdness enters into things when one drops the perturbation rate way down: when there is a timestamp-correlated dual detection, with extremely high probability they have opposite signs at the two detectors. This remarkable distribution remains even as one attempts to measure different parts of the phase of the perturbation. Where in the large-numbers perturbations one could move the detector and get 1:1 correlations instead of 1:-1 correlations (or indeed get 1:0 or -1:0 as well as all the 0:0, so the correlations vanish), in the single-number case, one cannot: you can get 1:0 or -1:0 by moving the detector around, but you can't get a strong distribution of 1:1 or -1:-1 correlations. (You also can't move a detector around to guarantee that it will (practically) always see 0 or 1 and (practically) never -1, even though you can do this in the large-numbers case: a detector will always see an essentially random mix of +1 and -1 (and lots and lots of zeroes)).
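For contrast, the same kind of toy bookkeeping for the single-pair regime (again purely illustrative; the opposite-sign outcome and the detection efficiency are simply put in by hand, because that is what the detectors report): no choice of detector offset changes the 1:-1 statistics, and each detector's own record is still a random mix of +1 and -1 plus lots of zeroes.

    import random
    from collections import Counter

    def pair_event():
        """One pair production: the two particles carry opposite signs and fly apart.
        (Toy bookkeeping only, not a real quantum model.)"""
        sign = random.choice([+1, -1])
        return sign, -sign

    def detect(value, efficiency=0.3):
        """Each detector catches its particle only some of the time; otherwise it reads 0."""
        return value if random.random() < efficiency else 0

    def run(n_events, detector_offset=0.0):
        """detector_offset stands in for translating the right-hand detector.
        It is deliberately unused: unlike the wave case above, no amount of
        moving a detector turns the 1:-1 dual detections into 1:1."""
        dual, left_record = [], Counter()
        for _ in range(n_events):
            left, right = pair_event()
            a, b = detect(left), detect(right)
            left_record[a] += 1
            if a and b:                  # a timestamp-correlated dual detection
                dual.append((a, b))
        return dual, left_record

    for offset in (0.0, 0.25, 0.5):
        dual, left_record = run(10_000, detector_offset=offset)
        print(f"offset={offset}: {len(dual)} dual hits,",
              "all opposite signs:", all(a == -b for a, b in dual),
              "| left detector alone sees:", dict(left_record))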
It's hard not to throw dimensions away (and impose coordinate conditions) when considering the system, so embrace it and consider the 1-d picture with "cells" that contain a ternary value.
The continuous wave picture looks essentially like:
and we just choose points on this line to look at.
However, when we start looking at individual pairs periodically emitted along this 1-d line (or equivalently, deliberately aligning this line to a known trajectory of a pair), our observations instead lead to a picture that looks like this:
When we put our detector at any point on the line, we have some probability of measuring one of the three values. When we put a detector near the perturber on the left, we have that same probability distribution; a second detector a bit further from the perturber, and still on the left, experiences a highly similar distribution. But if the second detector is on the right, then one of the options is "eaten"! Instead of an equal probability of measuring + or - (rather than 0), if the left-closer detector measures +, then the right-further detector's probability of detecting + or - (vs 0) collapses into a similar probability of detecting -; there is essentially zero chance of the right-further detector measuring +. If we move things so that it's right-closer instead of right-further, then the (unmoved) left detector is the one whose probability distribution collapses from three options to two.
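In the same toy terms as the sketch above (purely illustrative), the "eaten" option is just this conditional distribution:

    import random
    from collections import Counter

    def pair_event():
        sign = random.choice([+1, -1])
        return sign, -sign               # opposite signs, as in the sketch above

    # Condition on the left-closer detector having registered +1, and tally
    # what the right-further detector can still show (it may still miss, reading 0).
    right_given_left_plus = Counter()
    for _ in range(100_000):
        left, right = pair_event()
        if left == +1:
            right_given_left_plus[right if random.random() < 0.3 else 0] += 1

    print(right_given_left_plus)         # only -1 and 0 appear; the +1 option is "eaten"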
See the MIT introductory lectures on Quantum Physics:
I believe the experiment where this is shown is in lecture 1, possibly lecture 2. It's been a while since I watched them, but they're a great source of information.