So, it's hard to explain in layman's terms: the future can't transmit information to the past, but an action taken in the present can influence some properties of a particle in the past.
Take two particles, separate them enough, do some "A" thing to one of them, and the other will instantaneously have the only possible state that doesn't break the universe.
And one viable explanation is that when we've done the "A" thing, this information was transmitted to the particle in the past. We just couldn't measure this before (in the past) because measuring interferes with it, and that property can only be revealed at the same time that we've done the "A" thing.
Is that it? Kind of like information is transmitted to the past, it just can't be read before a certain time?
I find the clearest way to think about this is in terms of Feynman Diagrams. Here's a quick intro to some of the rules: http://bolvan.ph.utexas.edu/~vadim/classes/2008f.homeworks/Q... If you're not familiar, ignore the math and just look at the pictures on the first couple of pages.
Take a simple diagram of a photon interacting with an electron (google QED Vertex). Here's a very tiny ASCII version: ~< The squiggly line represents a photon propagating and the straight line segments (which should have arrows pointing a direction) represent an electron propagating. We can rotate this thing around in a bunch of different ways in time so that we have 1 or 2 inputs ("before") and 2 or 1 outputs ("after") in time. For example, with time going left to right, ~< represents a photon decaying into an electron and a positron. Flipped around, >~ represents an electron and a positron colliding/annihilating to create a photon. Turned another way you could have a photon and electron as input, and an electron with a changed momentum as the output. For the last case you could say the electron absorbed the photon. But really, these are all the exact same pattern just rotated around in spacetime.

So, what is causation? If it's all the same pattern, it's clear that consistency with the pattern is more important than the direction of time's arrow.
Speaking very loosely now (there are a bunch of constraints and caveats on what I'm about to say), you can plug these diagrams together to make arbitrarily complicated internal structures. But if they have the same inputs and outputs, they are in a sense consistent. And if you do it just right you can constrain which types of patterns can link up with the one you've set up. So, in the end, only patterns that are consistent with your setup can happen. Which is pretty much what this article is describing.
Additionally: if any particle in an entangled system gets measured along the full course of its "enforced relationship into the future", that would be the point where the system collapses.
And that's where I'd run into issues... It would seem a more reasonable explanation that retro-causal relationships exist along a fourth physical dimension...
My favorite is from David Deutsch, one of the main creators of the idea of quantum computing:
To those who still cling to a single-universe world-view, I issue this challenge:
explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?
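To make the numbers in that challenge concrete, here is a rough back-of-the-envelope sketch (my own illustration, not Deutsch's), assuming the naive "one complex amplitude per basis state" way of counting resources:

    import math

    # Naive resource counting: an n-qubit register has 2**n basis states,
    # so "one amplitude per basis state" means 2**n numbers to keep track of.
    atoms_in_visible_universe = 10**80   # rough order-of-magnitude figure
    deutsch_figure = 10**500             # the figure quoted above

    qubits_for_10_to_500 = math.ceil(math.log2(deutsch_figure))
    qubits_to_pass_atom_count = math.ceil(math.log2(atoms_in_visible_universe))

    print(qubits_for_10_to_500)       # ~1661 qubits give ~10^500 amplitudes
    print(qubits_to_pass_atom_count)  # ~266 qubits already exceed 10^80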
It's a lot easier to assume that reality is the visible output of an heroically counter-intuitive but relatively tractable probabilistic calculation process, than it is to assume there are infinite universes with poorly defined boundary conditions between them and even more poorly defined properties, and that by some magical process some, or maybe all, of those universes conspire together in certain situations only to create quantum effects.
The Shor's Algorithm argument is irrelevant. There's a huge leap between saying "Qubits can handle superpositions and we don't know why yet" and saying "The only possible explanation is that qubits exist in multiple universes at the same time. Because they just have to. Obviously."
The problem is simple - you're inventing entire multiple universes out of nothing just to explain the behaviour of a single qubit.
That may indeed be how reality works, but it's a very strong claim and it needs correspondingly strong evidence to support it.
Horribly simplified: Space isn't empty, there is an "ether". Matter moving through the ether interacts with it and produces waves just like a very light boat in a lake - these crests and troughs exactly line up with the magical waveform. All of the quantum behavior we see is explained by the waves in this "ether".
I.e. the waveform is no longer this crazy math thing, it's a real thing. And just like "air" was considered empty until we discovered a vacuum, space itself is considered empty until we fully define the "ether". It matches up with all of the results of Copenhagen, but was dismissed because physicists like "locality", and it gives up locality.
But guess what? The surface of a lake is non-local. Your boat is affected by boats far away via waves, the effect just diminishes over distance.
And the double slit experiment is so straightforward now that it makes Copenhagen and many-worlds look kinda silly.
While PWT is the most sensible to me, it is still strange, though solely because of the wave function, which is common to all theories, including MW. The novel part of PWT, that of particles having definite locations at all times while being guided by the wave, is quite ordinary.
Via analogy, let's imagine a perfectly still lake. In the lake at an (x,y) position is a mechanical oscillator which moves up and down and continuously produces waves. Knowing the position of the oscillator and its phase, I can calculate the position of crests and troughs of waves on the surface and the height of the water at any location.
If now I had 100 oscillators spread on the surface of the lake, I would need 100 inputs of x,y coordinates of where the oscillators are, and each of their phases. In this case I'd need 3*100 = 300 inputs/dimensions to calculate the wave function.
At first, this seems like a lot; but I don't need 300 dimensions to describe the resultant wave function. It still is simply the surface of a lake. It can always be described at any moment in time in 2 dimensions (x,y) with a height value for each point of the surface. If you want to capture it changing over time you need 3 dimensions (x,y,t) with height values for each.
The fact I'd need to go through all 300 inputs to calculate the wave doesn't mean I need 300 values to store the results of that calculation. This is what I mean by using the wrong "basis" to describe the wave function.
In programmer terms you can describe a resultant value as either a function and all of its required parameters, or you can just describe the output value. 3*10^80 is describing all of the inputs and the "function" to call.
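To put that counting in code form, a tiny sketch (the grid size is made up, purely to illustrate the inputs-versus-stored-output point):

    # Illustrative counting for the lake analogy (grid size is hypothetical).
    grid = 1000     # sample points per axis of the lake surface
    n_osc = 100     # oscillators

    # Inputs needed to *calculate* the surface: x, y and a phase per oscillator.
    inputs = 3 * n_osc           # 300 values

    # Values needed to *store* the resulting surface at one instant: just a
    # height for every (x, y) sample point -- a field over 2 dimensions,
    # no matter how many oscillators produced it.
    stored_heights = grid ** 2   # 1,000,000 heights, but only 2 dimensions

    print(inputs, stored_heights)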
And one note on the math: I believe there should be 3*(10^80) dimensions as inputs, which would mean R^(3*(10^80)) possible configurations. You wrote 3^(10^80), which is significantly larger.
As for your analogy, I don't quite follow. The actual physical model is that of the oscillators, more or less. The wave is the mathematical abstraction that does allow us to replace it with a function. In terms of information, the wave has more detail, but it is abstracted into a nice form. I think we agree on this.
But the quantum wave function is serving a different role. The water wave is something that varies over the 2d space. As we look at different values, we change the x and y coordinates to see the new values. In that sense, it is 2d.
For the quantum wave function, if we keep all but one of the 10^80 particles in the same exact position, but we vary the position of that one particle, we do get different values of the wave function and it does matter to all the particles as it will affect the gradient vector defining the velocity vector of each of the particles. The change is not localized to just the one particle that is changing, but impacts all particles.
Of course, most of the time, practically speaking, changes by one particle will not make a difference, but situations such as present in Bell's Theorem and quantum computing are exceptions to this.
Fundamentally, the current state of all the particles needs to be known in order to know what part of the wave function is needed. That is the sense in which the dimension is not 3, but 3*10^80.
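For reference, the standard de Broglie-Bohm guidance equation (textbook pilot-wave theory, not something specific to this thread) makes that dependence explicit: the velocity of particle k involves the wave function evaluated at the positions of all N particles at once.

    % de Broglie-Bohm guidance equation for particle k, with the right-hand
    % side evaluated at the actual configuration (Q_1, ..., Q_N) of all N particles:
    \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)(Q_1,\dots,Q_N,t)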
If you take the Pilot Wave Interpretation and just take out the particles, you're left with the Many Worlds Interpretation with exactly the same predictions. To paraphrase Laplace, there's no need for that hypothesis.
And if you take Many Worlds and simply add a particle, you have the Pilot Wave hypothesis. No need for the crazy multiple-universes nonsense. All three (including Copenhagen) produce the exact same predictions, so that's not going to be the distinguishing factor.
Which is more contorted? Waves are physical and particles exist? Or we live in a countless number of simultaneously subdividing universes?
It's fascinating to hear people discuss PW vs MWs - because (in my opinion) it becomes very clear which they became comfortable with first.
If you were to stop a random person on the street and give them a summary of the two theories, it's pretty clear that someone unfamiliar with both would say that PW is a ton simpler. But it's fascinating to see that a good number of people who are comfortable with many worlds will argue that it's less complex. My view is that their opinion is obviously shaped by what they've been exposed to first.
And PW is strictly more complicated than MW, because you still need to track the same wavefunction on configuration space, but then you add this particle to have something to point to and say "this is reality". But all the complexity of the wavefunction is still there. If you try to reduce this complexity by collapsing the wave function, you're back to having the same problems as Copenhagen.
The problem here is that you want an explanation that is intuitive and easy for you to grasp. What if the true answer is one that is too hard for you (or maybe any human) to easily understand?
Our brains aren't evolved for understanding the true nature of the universe; they're evolved for reproducing and surviving in a very simple world. We're using re-purposed hardware for a very different task, so we shouldn't be surprised that it's difficult.
Absolutely. For survival-to-reproduction -- and for quite a bit more than that -- Aristotle's physics are enough; but they're full of absurdities and strange edge cases when you look at them more carefully. To explain the flight of an arrow, you need Newton... and then you start noticing that Newton has edge cases too.
I don't pretend to understand quantum mechanics, but I'm fine with cheering from the sidelines on that one. Whatever the answers are at that level, they certainly seem to be shaping up to be something deeply unlike our intuitions...
QM does not line up with human intuition at all. It looks like this is a fundamental issue with how the universe works, not a problem with our explanations. And really, that shouldn't be any surprise. There's no reason to expect the universe's workings to match our intuition in any situation that's substantially different from what our intuition evolved in.
MWI may or may not be correct, but an adherent definitely would not say they "don't understand" QM.
The Copenhagen Interpretation has a pretty big theoretical hole in that the condition for when a wavefunction collapse occurs is not defined within the theory itself. Supposedly collapse occurs during a "measurement". Well, what's a measurement? Your measuring apparatus is all made up of matter that's supposed to be following the same fundamental physical laws. I think you'd be hard pressed to write a computer program that takes as input a description of all the particles and their wavefunctions and computes when a measurement occurs. In practice, scientists doing calculations under the Copenhagen Interpretation will choose the points in time to perform a collapse so that the calculations match experimental results. This is what I mean when I say that Copenhagen is non-algorithmic. That's a forgivable mistake if you invent your theory before the idea of an algorithm is formalized (as CI was), but today it's not a forgivable mistake.
If you decide to fix the problem with the Copenhagen Interpretation by explicitly defining that a measurement is when X happens, for some X, then you've allowed for an experiment whose results can differentiate between Copenhagen and Many Worlds. MWI says that different parts of the wave function can affect each other no matter how distant, though the effect size decreases exponentially with distance. So set up an experiment in which X occurs, and then measure interference from distant parts of the wavefunction that are either present (in MWI) or not present (in CI). That distinguishes the interpretations via experiment.
OK so far...
> and the reason they behave like that is because they're interacting with parallel universes.
No longer OK. You consider that the simple explanation? You don't think that's obfuscating at least as much as "there are no photons" does?
If you still don't understand, consider this: MWI only works if photons are kept in a coherent superposition of states.
Supposedly this means each universe defines a state.
But... the photons have to be isolated from their respective universes. Otherwise coherence is broken and you no longer have your coherent superposition.
Given that, what's the rationale for inventing an entire universe around a quantum particle if the math only works if you keep that particle isolated from everything else in that universe?
So, shoot a photon, or an electron, neutron, proton, alpha particle, etc., at the double slit barrier. Only one particle, please.
Well, now give up on the idea that we shot a particle. F'get about particles because they don't exist and only seem to exist from some interactions. Instead of a particle, what we shoot at the target is a wave or, in quantum mechanics, a wave function.
So, the wave function hits the barrier. Nearly all of that wave function hits the barrier and disappears. Why? Because physics can't think of anything else it could do. So, what is left are the two parts of the wave function that pass through the two slits. That one wave function is now in two pieces, but no energy has been lost bumping into the barrier and squeezing through the double slits (amazing thing, a wave function) -- though with some barriers part of the wave function is reflected, but let's leave that to the next semester!
The two pieces of the wave function that went through the slits continue on to our array of detectors. As wave functions tend to do, especially as they pass through a small hole or slit, they spread out. By the time the two parts of the wave function hit the detectors, they overlap. But, they are just two parts of the same wave function. So, they interfere and create a sine wave pattern at the detectors. Well one of the detectors, at random, gets a detection of the particle.
So, lesson: What travels is a wave, not a point like particle. The wave can be split into two parts and come back together, and then it interferes with itself in a way different from having waves from two independent (I hope the physics people do say independent instead of uncorrelated) particles hitting the detectors. When the wave is detected the detection is at a point as if there were a particle. Moreover, there is no more than one detection, as if there were a particle.
Still, there are no particles; it's just that when a wave function interacts, say, at a detector, it interacts at a point, just some one point, as if it were a point particle. Amazing.
Generalize the experiment some and get the Michelson-Morley experiment, even more amazing.
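If it helps to see the "two pieces of one wave function interfering" step as numbers rather than words, here is a minimal sketch (idealized equal-amplitude pieces, made-up geometry, no attempt at real units):

    import numpy as np

    # Two-slit sketch: superpose the two pieces of one wave function at each
    # detector position and look at |psi|^2 (idealized, made-up geometry).
    wavelength = 1.0
    k = 2 * np.pi / wavelength
    slit_separation = 10.0              # distance between the slits
    screen_distance = 1000.0            # barrier-to-detector distance
    x = np.linspace(-200.0, 200.0, 17)  # a few detector positions

    # Path lengths from each slit to each detector position
    r1 = np.hypot(screen_distance, x - slit_separation / 2)
    r2 = np.hypot(screen_distance, x + slit_separation / 2)

    # Recombine the two pieces of the same wave function
    psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)

    # |psi|^2 oscillates across the screen (the interference fringes), even
    # though each individual run still gives exactly one point-like detection.
    print(np.round(np.abs(psi) ** 2, 2))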
No, it doesn't disappear. The barrier itself is illuminated. Most of the wave function that hits the barrier is absorbed (or reflected) by the barrier. But the light that hits the barrier is uninteresting so it is ignored.
(On the other hand, if you make a reflective barrier with a lot of slits, not just two, then you have a diffraction grating and things get interesting again.)
A photon isn't a computer. It doesn't compute. It acts according to some nature which we equate with computation.
Personally, I actually think retrocausality, once you twist your brain around it, is another perfectly sensible option that doesn't involve multiplying the universe into a near-infinity of inaccessible, unobservable parallels. The solution in my mind is that retrocausality actually isn't; what it means instead is that what "the present" is is even less like what you thought it was, and is even more of a partial order than relativity already told us it was.

The answer to the "double-slit" experiment, to my mind, is that there simply isn't a "before" and "after" and some sort of "retrocausal self-interference" of the wave... the whole thing is simply one atomic event that doesn't have a before and an after, and your imposition of a temporal order on the events is much simplified from the reality. The mystery lies in your attempt to impose a global temporal order, which is appropriate enough at scale, onto a single local event where it doesn't apply in the way you expect. The results of the double-slit experiment itself are not that mysterious, merely counterintuitive, once you release the small scale from your imposed large-scale idea of how time flows.
It means that "the present" is actually quite complicated in a quantum universe with as many qubits as ours has, but, just as QM may be counterintuitive but eventually looks classical at a large enough scale, time may in fact be massively complicated but at scale it still works as we expect.
It would have a "physical basis" to it if you could visit the alternatives somehow, in which case they could be demonstrated to physically exist and all doubt would be removed. But so far that is not on offer.
"Entia non sunt multiplicanda praeter necessitatem" and all that.
The entire wave-function is there in the theory, containing all the "worlds" and it's responsible for 100% of the dynamics.
Additionally, Bohmians add a single "world particle" indicating which world we are in, and it's guided by the wavefunction. They claim only that particle is "real", but since it has no effect on the state of the universe, I would call it the only part of the theory which isn't real.
It's not about a simple explanation, it's about explaining using simple things.
A wave is simpler than "infinite worlds"
The "worlds" are just approximate ways of slicing up the wavefunction which correspond to our sense of classical reality. They are clearly (and uncontroversially) identifiable if you run a simulation.
Pilot wave is the most "concrete". If you're moving away from abstraction, then why stop at Many Worlds?
I'd bet there's a simpler explanation.
It's interesting, but this "I challenge you to answer a philosophical (not scientific) question in a way that I find acceptable" approach surely skirts uncomfortably close to dogmatically creationist arguments.
(For the record, my answer, which certainly wouldn't satisfy Deutsch, is: it works because the math proves it works; there's no more "how" about it than there is in "how does the computation 10^(250)*10^(250) = 10^(500) work?")
What I think the truth really is, is that we don't encounter quantum superpositions in an obvious way in our day to day lives. So it's inherently "weird" for us, especially since it seems to violate our notion of object permanence. But that doesn't mean it's wrong, it just means that one of our evolved instincts isn't applicable on a quantum scale.
I think that this puts it very well. If a theory doesn't fit with our intuition, then it might mean that the theory is wrong; but it's at least plausible that our intuition is wrong, and goes from plausible to overwhelmingly likely once the theory is backed up by experimental data. (Anyone, even or especially a scientist, who dogmatically trusts his or her intuition over science probably could do with a quick refresher in cognitive biases.)
We never progressed anywhere by saying "something just works" and leaving it at that. Challenge yourself.
I'm curious as to what you think constitutes "an explanation" that's more useful than mathematics. Let's take a simple classical phenomenon to keep it easy, such as kinetic energy. The mathematical statement is E_k = 1/2 m v^2. What non-math explanation works better?
I think that the information can flow either way. For example, the existence of anti-matter, and portions of quantum mechanics (and, let's say, much of Witten's scientific work—though here I am tossing out a bit of folk knowledge, and couldn't point to anything in particular), are examples of letting the math guide the physics.
This is famously not true for the matrix mechanics in Heisenberg's picture of quantum mechanics and the Riemannian geometry underlying relativity, both of which were created without (so far as I, and history, know) any intended or perceived connection to physics.
On the other hand, I guess you could argue that that math was irrelevant to physics until Heisenberg and Einstein came along and found it an appropriate expression of their physical intuition.
It's just that after decoherence, they will never interact again and will stay separate.
Second, since when is one atom responsible for a computation? In modern electronics no atom is responsible at all for computation, only the electrons are†. Even then, computation is an emergent property, computation itself is abstract and cannot be physical; but it emerges from physical interactions.
† Not even the electrons are, directly; it's the energy propagated by the electrons! Hence why electrical signals are effectively instant while the electrons themselves drift at well under a centimeter per second.
And the only way to interact with parallel universes is if they are totally identical except what's inside of the quantum computer itself. Deutsch posits that the quantum computer essentially proves many-worlds.
"The natural computational abilities of reality itself"
Additionally you continue to insist that computation is a real physical thing. Are you claiming that nature is aware of the states we assign it to perform computation?
The way computation works is that we map various physical states with our own semantic abstractions. I can change the computation for addition into one for subtraction without any physical change to the system. So there is no harnessing of "the natural computational abilities of reality itself". We merely observe natural behavior and semantically map it (which is what engineering is).
You can create a computing machine by assigning physical states to any physical behavior, quantum, classical, hamiltonian, etc. The proof is trivial and it is not a proof he came up with.
It further follows that there is no computation happening during the creation of an interference pattern until we assign it our own abstract semantic states.
This quantum woo is helping neither science nor quantum physics; in fact you are hindering it. Ascribing causal responsibility to non-observable (explicitly unknowable)† entities used to be criticized as dogmatic drivel; now it's embraced as popular science meant only to provoke reaction. Science is about establishing useful models of the world to make physical predictions.
† See common fallacy: https://en.wikipedia.org/wiki/Ontological_argument
All of science is abstract, not even "force" describes something physical. There are many more concrete models like Bohmian that don't rely on unprovable conjecture.
So does religion, it doesn't make it true without evidence. Inventing unseen universes is easy, it's not simple, it fails the Occam's razor test for the same reason gods do.
I find this view to make much more sense than the multiple universe view. Those other universes would need the infrastructure to support the same computation, and would also need to be somehow linked to our own. Or perhaps I am misunderstanding you, and we are both saying the same thing.
I love this stuff.
Thus my hypothesis that the correct number of worlds is actually a fractal.
If the answer is no to both questions, then my answer is: "it doesn't work."
Yeah, but so far the highest factorization achieved is 21, because we've only built relatively tiny quantum computers.
"The Transactional Interpretation of Quantum Mechanics and Quantum Nonlocality"
Also, Bell's inequality shows the state vector is nonlocal, and that implies causality violation in relativity.
Third, quantum information theory has retrocausality in the form of negative entropy:
"Negative entropy in quantum information theory"
Too many things pointing in the same direction. This is the right way to look at it.
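For anyone wondering what the "negative entropy" in that third point means (this is just the standard definition, not a claim from the paper cited above): the quantum conditional entropy, unlike its classical counterpart, can be negative for entangled states.

    % Conditional von Neumann entropy; for a maximally entangled pair of qubits
    % S(rho_AB) = 0 and S(rho_B) = 1, so S(A|B) = -1 < 0.
    S(A|B) = S(\rho_{AB}) - S(\rho_B)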
He hit the popular science media as he was leading up to his modified delayed-choice quantum eraser for retro-causality. According to him it was a supposed paradox, meaning the experiment was going to have to give a result that conflicted with our theories and thus open an avenue for new discovery (i.e. give us a test whose result doesn't match our model).
It's been a decade and no results were ever discussed. That seems bizarre to me.
In other words, retrocausality WITH information transfer is possible when the information is flowing from one universe to another, in a certain direction.
or in slightly different form in http://lesswrong.com/lw/qp/timeless_physics
It's apparently possible to rewrite all physical equations without time coordinates and they still work.
I'm not competent enough to understand all of it and to check if it's not contradicting any experiments, but I'd love to hear if it does.
It seems particularly elegant (because if the state of the universe can't repeat, then why do we need time?).
"There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the ‘decision’ by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already ‘knows’ what that measurement, and its outcome, will be."
locality, definiteness and non-conspiracy?
I don't quite get it. According to GR, non-locality and retrocausality are equivalent. So how is retrocausal QM different from Bohmian mechanics/pilot wave theories?
From what I can tell from this, there's no difference in the sense that it's experimentally indistinguishable.
It's different in the sense that Bohmian mechanics relies on a non-local pilot wave to match QM results. Retrocausal QM instead frees the other assumption, i.e. causality, to get to the same answer.
TBH I can't see either getting much attention until someone comes up with a way to make them produce different results compared to standard QM approaches. Either that, or someone finds a way to make QM calculations substantially easier using these approaches. In other words, if they produce new physics or have practical value.
Yes, but it's only possible by comparing the results to those same results when that action is performed outside of a VM.
If you can never baseline your data outside the VM, then you'll never know you're seeing odd results. You'll just think "that's the way the world works".
But the static, unchangeable past is interesting because the assumption that the past is static and can't change is not crazy, and if retrocausal influence exists, it could make the Many Worlds interpretation more plausible: there must be at least 2 different universes with a static past if you can choose, in the present, past A or past B.
The joke is that branching universes create large overhead costs in a simulation (n unmerged binary branches create 2^n universes to keep track of) that can crash it.
It doesn't actually, they discuss this in the article. The laws of thermodynamics impose an arrow of time via boundary conditions, and something similar would apply here.
Well that's just inevitable. You're never going to see a naked singularity, for instance.
Also, the conspiracy may be a good thing. If you have self-replicating and mutating code on your computer, you should put some sandbox around it, or it might evolve to buy more RAM on eBay. Or, more realistically, it will crash the system.
Retrocausality would probably work in the same way.