Hacker News
Physicists provide support for retrocausal quantum theory (phys.org)
84 points by metafunctor 136 days ago | 120 comments



> First, to clarify (...) It does not mean that signals can be communicated from the future to the past(...) instead, retrocausality means (... it) can influence the properties of that particle (or another particle) in the past, even before the experimenter made their choice. In other words, a decision made in the present can influence something in the past.

So, it's hard to explain in layman's terms: the future can't transmit information to the past, but an action taken in the present can influence some properties of a particle in the past.

Take two particles, separate them far enough, do some "A" thing to one of them, and the other will instantaneously take the only possible state that doesn't break the universe.

And one viable explanation is that when we did the "A" thing, this information was transmitted to the particle in the past. We just couldn't measure it before (in the past) because measuring interferes with it, and that property can only be revealed at the moment we do the "A" thing.

Is that it? Some kind of information is transmitted to the past, it just can't be read before a certain time?


We should be talking about "consistency" being the fundamental idea, not causality. I think "retrocausality" is an especially distracting choice of words.

I find the clearest way to think about this is in terms of Feynman Diagrams. Here's a quick intro to some of the rules: http://bolvan.ph.utexas.edu/~vadim/classes/2008f.homeworks/Q... If you're not familiar, ignore the math and just look at the pictures on the first couple of pages.

Take a simple diagram of a photon interacting with an electron (google QED Vertex). Here's a very tiny ASCII version: ~< The squiggly line represents a photon propagating and the straight line segments (which should have arrows pointing a direction) represent an electron propagating. We can rotate this thing around in a bunch of different ways in time so that we have 1 or 2 inputs ("before") and 2 or 1 outputs ("after") in time. For example, with time going left to right, ~< represents a photon decaying into an electron and positron. Flipped around, >~ represents an electron and a positron colliding/annihilating to create a photon. Turned another way, you could have a photon and electron as input, and an electron with a changed momentum as the output. For the last case you could say the electron absorbed the photon. But really, these are all the exact same pattern, just rotated around in spacetime.

So, what is causation? If it's all the same pattern, it's clear that consistency with the pattern is more important than the direction of time's arrow.

Speaking very loosely now (there are a bunch of constraints and caveats on what I'm about to say), you can plug these diagrams together to make arbitrarily complicated internal structures. But if they have the same inputs and outputs, they are in a sense consistent. And if you do it just right you can constrain which types of patterns can link up with the one you've set up. So, in the end, only patterns that are consistent with your setup can happen. Which is pretty much what this article is describing.


Based on that description, it sounds like an action in the present is putting constraints on a seemingly unrelated event that has an enforced but currently unobservable relationship in the future. Or something like that. Why do I get the feeling that this starts requiring higher dimensions?

Additionally: if any particle in an entangled system gets measured along the full course of its "enforced relationship into the future", that would be the point where the system collapses.

And that's where I'd run into issues ... It would seem a more reasonable explanation that retro-causal relationships exist along a fourth physical dimension ... hmmm


That explanation still seems to allow faster than light communication.


The many-worlds interpretation of quantum mechanics is the only one that's easy to understand. All others seem to bend over backwards to realistically explain quantum effects. It's so rare to find clear explanations, but they do exist.

My favorite from David Deutsch, a main creator of the idea of quantum computing:

To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources than can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?


That seems to be begging the question. Qubits do more than this universe is capable of, therefore there must be multiple universes. The obvious refutation is: qubits don't do more than this universe is capable of, because the universe does not run on classical limits. The claim that 10^80 atoms imply on the order of 10^80 possible computations implicitly assumes that these things operate on classical-ish rules. There's an implicit notion of "a computation" in there, which is assumed to be something roughly like what a standard computer is able to do in a standard machine instruction. What if, instead, the fundamental operations that the universe works on are things like Hadamard transforms?
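To make that last question concrete, here is a toy sketch (my own, not from the comment) of the Hadamard transform acting on a single-qubit amplitude vector:

```python
import math

def hadamard(state):
    """Apply the 2x2 Hadamard transform to a state [a, b]:
    [a, b] -> [(a + b)/sqrt(2), (a - b)/sqrt(2)]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

zero = [1.0, 0.0]        # the |0> state
plus = hadamard(zero)    # an equal superposition of |0> and |1>
back = hadamard(plus)    # H is its own inverse, so this is |0> again
```

A single application puts the qubit into an equal superposition; applying it twice returns the original state. The point is that this is a single primitive operation on amplitudes, not something the universe need assemble out of classical machine instructions.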


Explain the double slit experiment to me. The single photon somehow calculates the interference pattern with photons that don't exist, in our world at least. If there are no parallel universes, how does a single atom calculate that interference pattern? Where do those calculations occur? All other theories say it happens by magic and give all sorts of complicated non-real obfuscations to explain it.


There are no photons. There are only photon-like events, with associated probabilities and scattering processes.

It's a lot easier to assume that reality is the visible output of an heroically counter-intuitive but relatively tractable probabilistic calculation process, than it is to assume there are infinite universes with poorly defined boundary conditions between them and even more poorly defined properties, and that by some magical process some, or maybe all, of those universes conspire together in certain situations only to create quantum effects.

The Shor's Algorithm argument is irrelevant. There's a huge leap between saying "Qubits can handle superpositions and we don't know why yet" and saying "The only possible explanation is that qubits exist in multiple universes at the same time. Because they just have to. Obviously."

The problem is simple - you're inventing entire multiple universes out of nothing just to explain the behaviour of a single qubit.

That may indeed be how reality works. But it's a very strong claim, and it needs correspondingly irrefutable evidence to support it.


That's exactly what I'm saying! All other explanations say something confusing and obfuscating like "there are no photons". Many-worlds keeps it simple and says "there are photons, and the reason they behave like that is because they're interacting with parallel universes". You can't explain why they behave like that, so you say that the photon isn't really real, and have a complicated explanation of it.


Clearly this is all subjective, but Pilot Wave theory (De Broglie & Bohm) is the simplest, least "crazy" of all the explanations.

Horribly simplified: Space isn't empty, there is an "ether". Matter moving through the ether interacts with it and produces waves just like a very light boat in a lake - these crests and troughs exactly line up with the magical waveform. All of the quantum behavior we see is explained by the waves in this "ether".

I.e. the waveform is no longer this crazy math thing; it's a real thing. And just like "air" was considered empty until we discovered the vacuum, space itself is considered empty until we fully define the "ether". It matches up with all of the results of Copenhagen, but was dismissed because physicists like "locality", and it gives up locality.

But guess what? The surface of a lake is non-local. Your boat is affected by boats far away via waves, the effect just diminishes over distance.

And the double slit experiment is so straightforward now that it makes Copenhagen and many-worlds look kinda silly.


While one can take a bit of issue with using the ether as analogy for a number of reasons, the most important difference to keep in mind is that the wave function exists in configuration space. For 10^80 particles, this is a space with 3^(10^80) dimensions. This is the source of nonlocality.

While PWT is the most sensible to me, it is still strange, though solely because of the wave function, which is common to all theories, including MW. The novel part of PWT, that particles have definite locations at all times and are guided by the wave, is quite ordinary.


So here I may be stepping past my knowledge, but I thought that the reason it appears to require so many dimensions would simply be due to using the wrong "basis" to calculate the wave function. Or to separating the concept of how to generate the wave function from the values it produces.

Via analogy, let's imagine a perfectly still lake. In the lake at an (x,y) position is a mechanical oscillator which moves up and down and continuously produces waves. Knowing the position of the oscillator and its phase, I can calculate the position of crests and troughs of waves on the surface and the height of the water at any location.

If I now had 100 oscillators spread on the surface of the lake, I would need 100 inputs of the x,y coordinates of where the oscillators are, and each of their phases. In this case I'd need 3*100 = 300 inputs/dimensions to calculate the wave function.

At first, this seems like a lot; but I don't need 3*100 dimensions to describe the resultant wave function. It is still simply the surface of a lake. It can always be described at any moment in time in 2 dimensions (x,y) with a height value for each point of the surface. If you want to capture it changing over time, you need 3 dimensions (x,y,t) with height values for each.

The fact that I'd need to go through all 3*100 inputs to calculate the wave doesn't mean I need 3*100 values to store the results of that calculation. This is what I mean by using the wrong "basis" to describe the wave function.

In programmer terms, you can describe a resultant value either as a function plus all of its required parameters, or as just the output value. 3*10^80 is describing all of the inputs and the "function" to call.
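The function-plus-parameters view can be sketched in a few lines (my own toy code; the wave model and constants are invented purely for illustration):

```python
import math

# Toy version of the lake analogy: each oscillator is an (x, y, phase)
# triple, and the surface height at a point is the sum of circular
# waves radiating from every oscillator.
def surface_height(px, py, oscillators, k=2.0):
    h = 0.0
    for ox, oy, phase in oscillators:
        r = math.hypot(px - ox, py - oy)   # distance to this oscillator
        h += math.cos(k * r - phase)
    return h

# 100 oscillators -> 3*100 = 300 input numbers ("function + parameters"),
# but the *output* is still just one height per (x, y) surface point.
oscillators = [(float(i % 10), float(i // 10), 0.0) for i in range(100)]
h = surface_height(5.0, 5.0, oscillators)
```

The inputs live in a 300-dimensional parameter space, yet the result remains a plain height field over the 2-d surface, which is the "wrong basis" point above.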

And one note on the math: I believe there should be 3*(10^80) dimensions as inputs, which would mean R^(3*(10^80)) possible configurations. You wrote 3^(10^80), which is significantly larger.


Error: Yes, it should be 3*10^80. I think my brain was half-writing the dimension and half-writing the more useful view of (R^3)^(10^80). Thanks for catching that.

As for your analogy, I don't quite follow. The actual physical model is that of the oscillators, more or less. The wave is the mathematical abstraction that does allow us to replace it with a function. In terms of information, the wave has more detail, but it is abstracted into a nice form. I think we agree on this.

But the quantum wave function is serving a different role. The water wave is something that varies over the 2d space. As we look at different values, we change the x and y coordinates to see the new values. In that sense, it is 2d.

For the quantum wave function, if we keep all but one of the 10^80 particles in exactly the same position but vary the position of that one particle, we do get different values of the wave function, and it matters to all the particles: it affects the gradient vector defining the velocity vector of each particle. The change is not localized to just the one particle that is changing, but impacts all particles.

Of course, most of the time, practically speaking, changes by one particle will not make a difference, but situations such as present in Bell's Theorem and quantum computing are exceptions to this.

Fundamentally, the current state of all the particles needs to be known in order to know what part of the wave function is needed. That is the sense in which the dimension is not 3, but 3*10^80.


The Pilot Wave Interpretation is one of the strangest contortions I've ever seen. You go to all the trouble to set up the wave equation acting on phase space of the entire universe, but then you remember that you think particles are fundamental elements of reality, so you drop them in and say they're guided by your wave equation.

If you take the Pilot Wave Interpretation and just take out the particles, you're left with the Many Worlds Interpretation with exactly the same predictions. To paraphrase Laplace, there's no need for that hypothesis.


> If you take the Pilot Wave Interpretation and just take out the particles, you're left with the Many Worlds Interpretation with exactly the same predictions. To paraphrase Laplace, there's no need for that hypothesis.

And if you take Many Worlds and simply add a particle, you have the Pilot Wave hypothesis. No need for the crazy multiple-universes nonsense. All 3 (including Copenhagen) produce the exact same predictions, so that's not going to be the distinguishing factor.

Which is more contorted? Waves are physical and particles exist? Or we live in a countless number of simultaneously subdividing universes?

It's fascinating to hear people discuss PW vs MWs - because (in my opinion) it becomes very clear which they became comfortable with first.

If you were to stop a random person on the street and give them a summary of the two theories, it's pretty clear that someone unfamiliar with both would say that PW is a ton simpler. But it's fascinating to see that a good number of people who are comfortable with many worlds will argue that it's less complex. My view is that their opinion is obviously shaped by what they've been exposed to first.


Copenhagen isn't even a proper theory, because it never tells you precise criteria for when wavefunction collapse occurs. It's not a surprise that nobody noticed this at the time, because the Copenhagen Interpretation was invented prior to the foundational work on computation and formalization of what an algorithm is. But looking at it now, it should be clear that there's no formal definition of what a "measurement" is. As soon as you give precise criteria for wavefunction collapse, then you have a difference in predictions between Copenhagen and MW.

And PW is strictly more complicated than MW, because you still need to track the same wavefunction on phase space, but then you add this particle to have something to point to and say "this is reality". But all the complexity of the wavefunction is still there. If you try to reduce this complexity by collapsing the wave function, you're back to having the same problems as Copenhagen.


> All other explanations say something confusing

The problem here is that you want an explanation that is intuitive and easy for you to grasp. What if the true answer is one that is too hard for you (or maybe any human) to easily understand?

Our brains aren't evolved for understanding the true nature of the universe, they're evolved for reproducing and surviving in a very simple world. We're using re-purposed hardware for a very different task- we shouldn't be surprised that it's difficult.


> Our brains aren't evolved for understanding the true nature of the universe, they're evolved for reproducing and surviving in a very simple world.

Absolutely. For survival-to-reproduction -- and for quite a bit more than that -- Aristotle's physics are enough; but they're full of absurdities and strange edge cases when you look at them more carefully. To explain the flight of an arrow, you need Newton... and then you start noticing that Newton has edge cases too.

I don't pretend to understand quantum mechanics, but I'm fine with cheering from the sidelines on that one. Whatever the answers are at that level, they certainly seem to be shaping up to be something deeply unlike our intuitions...


Feynman said, "I think I can safely say that nobody understands quantum mechanics." This is a guy who won the Nobel Prize for work in quantum mechanics.

QM does not line up with human intuition at all. It looks like this is a fundamental issue with how the universe works, not a problem with our explanations. And really, that shouldn't be any surprise. There's no reason to expect the universe's workings to match our intuition in any situation that's substantially different from what our intuition evolved in.


Feynman said that quote in 1965, and it may have been true when he said it. The Many-Worlds Interpretation was technically invented before then (1957), but was a fringe theory and not disseminated much prior to the 1980s. I don't think you could get away with saying the same quote today.


Implying that many-worlds is the key to unlocking true understanding of QM? What's the basis for that?


Yea, that's exactly what I'm implying. As for the basis for it -- I'm not sure how to put it. Every other interpretation is deeply confused, but MWI gives a deterministic, algorithmic, and non-mysterious explanation.

MWI may or may not be correct, but an adherent definitely would not say they "don't understand" QM.


I thought the various competing interpretations of QM were all exactly equivalent when it came time to actually crank through the math in an algorithmic fashion.


Not exactly equivalent.

The Copenhagen Interpretation has a pretty big theoretical hole in that the condition for when a wavefunction collapse occurs is not defined within the theory itself. Supposedly collapse occurs during a "measurement". Well, what's a measurement? Your measuring apparatuses are all made up of matter that is supposed to be following the same fundamental physical laws. I think you'd be hard pressed to write a computer program that takes as input a description of all the particles and their wavefunctions and computes when a measurement occurs. In practice, scientists doing calculations under the Copenhagen Interpretation will choose the points in time to perform a collapse so that the calculations match experimental results. This is what I mean when I say that Copenhagen is non-algorithmic. That's a forgivable mistake if you invent your theory before the idea of an algorithm is formalized (as CI was), but today it's not a forgivable mistake.

If you decide to fix the problem with the Copenhagen Interpretation by explicitly defining that a measurement is when X happens, for some X, then you've allowed for an experiment whose results can differentiate between Copenhagen and Many Worlds. MWI says that different parts of the wave function can affect each other no matter how distant, though the effect size decreases exponentially with distance. So set up an experiment so that X occurs, and then measure interference from distant parts of the wavefunction that is either present (in MWI) or not present (in CI). That distinguishes the interpretations via experiment.


Thanks. I must have gotten mixed up with the equivalence of some of these interpretations, and the lack of actual experiments to distinguish the others.


The mix up is not your fault. I see it routinely written that the interpretations don't make any difference to predictions, and I never see anyone talk about or acknowledge the issues I just raised. I feel like I'm taking crazy pills.


> All other explanations say something confusing and obfuscating like "there are no photons". Many-worlds keeps it simple and says "there are photons

OK so far...

> and the reason they behave like that is because they're interacting with parallel universes.

No longer OK. You consider that the simple explanation? You don't think that's obfuscating at least as much as "there are no photons" does?


Inventing infinite universes to explain something isn't simpler than "there are no photons"; it might please you more, but infinite universes is the more complex explanation as it essentially asserts magic. It's far simpler to believe we don't fully understand quantum mechanics yet than it is to believe there are an infinite number of parallel universes.


You have an unusual definition of simple.

If you still don't understand, consider this: MWI only works if photons are kept in a coherent superposition of states.

Supposedly this means each universe defines a state.

But... the photons have to be isolated from their respective universes. Otherwise coherence is broken and you no longer have your coherent superposition.

Given that, what's the rationale for inventing an entire universe around a quantum particle if the math only works if you keep that particle isolated from everything else in that universe?


Double slit experiment? It appears that some people, e.g., Feynman, believe that this experiment is profound. I don't have an explanation as good as I would like, but from what I studied in physics class, reading Feynman, etc. here's my explanation.

So, shoot a photon, or an electron, neutron, proton, alpha particle, etc., at the double slit barrier. Only one particle, please.

Well, now give up on the idea that we shot a particle. F'get about particles because they don't exist and only seem to exist from some interactions. Instead of a particle, what we shoot at the target is a wave or, in quantum mechanics, a wave function.

So, the wave function hits the barrier. Nearly all that wave function hits the barrier and disappears. Why? Because physics can't think of anything else it could do. So, what is left are the two parts of the wave function that pass through the two slits. Now that one wave function is in two pieces, but no energy has been lost bumping into the barrier and squeezing through the double slits (amazing thing, a wave function) -- though with some barriers part of the wave function is reflected; let's leave that to the next semester!

The two pieces of the wave function that went through the slits continue on to our array of detectors. As wave functions tend to do, especially as they pass through a small hole or slit, they spread out. By the time the two parts of the wave function hit the detectors, they overlap. But, they are just two parts of the same wave function. So, they interfere and create a sine wave pattern at the detectors. Well one of the detectors, at random, gets a detection of the particle.

So, lesson: What travels is a wave, not a point like particle. The wave can be split into two parts and come back together, and then it interferes with itself in a way different from having waves from two independent (I hope the physics people do say independent instead of uncorrelated) particles hitting the detectors. When the wave is detected the detection is at a point as if there were a particle. Moreover, there is no more than one detection, as if there were a particle.

Still, there are no particles; it's just that when a wave function interacts, say, at a detector, it interacts at a point, just some one point, as if it were a point particle. Amazing.

Amazing experiment.

Generalize the experiment some and get the Michelson-Morley experiment, even more amazing.
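The wave-splitting story above can be sketched numerically (my own toy code; the geometry and units are arbitrary assumptions): the two pieces of the wave function arriving at detector position x are two complex amplitudes, and the detection probability is the squared magnitude of their sum.

```python
import cmath
import math

# Two-slit toy model: one source splits into two paths of lengths
# r1 and r2; the amplitudes recombine at the detector.
wavelength, slit_sep, screen_dist = 1.0, 5.0, 100.0   # arbitrary units

def intensity(x):
    k = 2 * math.pi / wavelength
    r1 = math.hypot(screen_dist, x - slit_sep / 2)  # path via slit 1
    r2 = math.hypot(screen_dist, x + slit_sep / 2)  # path via slit 2
    amp = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(amp) ** 2

# Sample the detector array: bright and dark fringes appear even though
# only one detection happens per shot.
pattern = [intensity(i * 0.1) for i in range(-100, 101)]
```

The sampled pattern swings between near-zero (destructive) and fully constructive values, which is the sine-like fringe pattern described above; a single-slit version of the same code would show no such oscillation.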


> So, the wave function hits the barrier. Nearly all that wave function hits the barrier and disappears.

No, it doesn't disappear. The barrier itself is illuminated. Most of the wave function that hits the barrier is absorbed (or reflected) by the barrier. But the light that hits the barrier is uninteresting so it is ignored.

(On the other hand, if you make a reflective barrier with a lot of slits, not just two, then you have a diffraction grating and things get interesting again.)


What? So in other universes the interference pattern is wrong?

A photon isn't a computer. It doesn't compute. It acts according to some nature which we equate with computation.


Bohmian mechanics seems like a better explanation than "there are infinite universes out there"


Yes, I find the whole "Explain this in any other way!" sort of a bizarre demand. You've got your choice of 5 or 6 decent isomorphic explanations. Preference for "many worlds" seems to boil down to "it annoys my common sense less than some of the others", which is a very personal concern rather than a scientific one.

Personally, I actually think retrocausality, once you twist your brain around it, is another perfectly sensible option that doesn't involve multiplying the universe into a near-infinity of inaccessible, unobservable parallels. The solution in my mind is that retrocausality actually isn't retrocausal; what it means instead is that what "the present" is is even less like what you thought it was, and is even more of a partial order than relativity already told us it was.

The answer to the "double-slit" experiment, to my mind, is that there simply isn't a "before" and "after" and some sort of "retrocausal self-interference" of the wave... the whole thing is simply one atomic event that doesn't have a before and an after, and your imposition of a temporal order on the events is much simplified from the reality. The mystery lies in your attempt to impose a global temporal order, which is appropriate enough at scale, onto a single local event where it doesn't apply in the way you expect. The result of the double-slit experiment itself is not that mysterious, merely counterintuitive, once you release the small scale from your imposed large-scale idea of how time flows.

It means that "the present" is actually quite complicated in a quantum universe with as many qubits as ours has, but, just as QM may be counterintuitive but eventually looks classical at a large enough scale, time may in fact be massively complicated but at scale it still works as we expect.


It's more of a recognition that many-worlds actually has a physical basis to it, while the others talk about an obfuscated non-physical construction.


"Many worlds has a physical basis to it" has no physics-meaning that I can discern. In order to have one, it would have to deviate from the other isomorphisms and cease to be an isomorphism, and thus somehow be testable, which it isn't. (My previous post is word salad to some extent too, since I'm jamming it into an HN post. But the core insight there is basically, let the math be the math and stop imposing your globally-based ideas on the local world.)

It would have a "physical basis" to it if you could visit the alternatives somehow, in which case they could be demonstrated to physically exist and all doubt would be removed. But so far that is not on offer.


Pretty much. Seems that the "shut up and do the math" QM interpretation is still the important one


"We don't understand the deeper mechanisms yet" is also a valid explanation - although one that's sure to be unpopular with scientists.

"Entia non sunt multiplicanda praeter necessitatem" and all that.


Bohmian mechanics is just many worlds in denial, for people with prior ontological beliefs they can't give up.

The entire wave-function is there in the theory, containing all the "worlds" and it's responsible for 100% of the dynamics.

Additionally, Bohmians add a single "world particle" indicating which world we are in, and it's guided by the wavefunction. They claim only that particle is "real", but since it has no effect on the state of the universe, I would call it the only part of the theory which isn't real.


I'm sure you could confuse the heck out of me trying to explain it. Deutsch would say it's just a clever obfuscation of what's really happening. And his many-worlds explanation is so incredibly simple that my small mind uses Occam's razor to figure out what to believe.


As simple as using God to explain how the universe came to be

It's not about a simple explanation, it's about explaining using simple things.

A wave is simpler than "infinite worlds"


The wavefunction is exactly the same as "infinite worlds". The essential aspect of the Everett / many-worlds view is that the wavefunction is the complete description of reality.

The "worlds" are just approximate ways of slicing up the wavefunction which correspond to our sense of classical reality. They are clearly (and uncontroversially) identifiable if you run a simulation.


Ah that's a better explanation


The "worlds" are just points in the space that supports the wave. You can use any other name, if you prefer, but there seem to be an infinite number of them.


All quantum mechanics has notions of "points in space that support the wavefunction," but these are different from the "worlds" in MWI.


Seems like I have been misunderstanding MWI all the time. How are they different?


The atom is a wave, which can interfere with itself.


I think Deutsch would say there isn't actually a wave-particle duality. That's just an obfuscation that doesn't have any reality to it. Many-worlds has a much simpler explanation - there is simply a particle that interacts with particles in parallel universes, producing a wave-like interference effect.


How is "a particle that interacts with particles in parallel universes" any simpler? That sounds like a crazy special rule. Why and how do individual photons/particles interact with parallel universes when nothing else can do so? Honestly, MWI sounds to me like just a fancy way of saying "magic".


He might say that, but so what? It's clear that he favors many-worlds, but that doesn't mean the rest of us have to. My point is that he's attempting to prove many-worlds by assuming that none of the other explanations are true, i.e. he's begging the question.


Well the other explanations say that the atom doesn't really exist, it's just an abstraction. And many-worlds says the atom does exist, and this is why you see these quantum effects. That's a meaningful difference.


I'm pretty sure that's not what the other explanations say at all.


Pilot Wave is to Many Worlds, as Many Worlds is to Copenhagen.

Pilot wave is the most "concrete". If you're moving away from abstraction, then why stop at Many Worlds?


I think you can just dispose of the concept of a particle, and everything works fine as waves (with discrete levels where the boundary conditions force them, and with collapse when coupled to a macroscopic system that restricts the possible states).


When I let go of something, it falls to the floor and hits it at exactly the correct speed. The correct speed accounting for gravity and the air and air currents and all sorts of other things. That's a lot of calculations to make. And it was exactly correct at every instant of the fall as well! Somehow, all these calculations are being made instantly. Where are those calculations occurring?
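One way to make the point concrete (my own sketch; the mass, drag coefficient, and timestep are invented for illustration) is to do those "calculations" explicitly, as a crude Euler integration of gravity plus quadratic air drag:

```python
# Simulate a small object falling with gravity and quadratic air drag.
g = 9.81     # gravitational acceleration, m/s^2
m = 0.1      # mass, kg (assumed)
c = 0.02     # drag coefficient, kg/m (assumed)
dt = 0.001   # timestep, s

v, t = 0.0, 0.0
while t < 2.0:                   # two simulated seconds of falling
    a = g - (c / m) * v * v      # net acceleration including drag
    v += a * dt
    t += dt

# v creeps toward the terminal velocity sqrt(g * m / c), about 7 m/s
```

Of course, the universe isn't literally running an Euler loop; the point of the rhetorical question is that nature satisfies the dynamics continuously without any visible computer doing the arithmetic.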


What about the interpretation that if reality is a computer simulation, certain operations (such as the factoring described) are a form of a security weakness in the simulation, allowing the simulation to "escape" and access/consume host computational resources?


That seems to me like a multiple universes explanation for people who also like science fiction.

I'd bet there's a simpler explanation.


> My favorite from David Deutsch, a main creator of the idea of quantum computing:

It's interesting, but this "I challenge you to answer a philosophical (not scientific) question in a way that I find acceptable" approach surely skirts uncomfortably close to dogmatically creationist arguments.

(For the record, my answer, which certainly wouldn't satisfy Deutsch, is: it works because the math proves it works; there's no more "how" about it than there is in "how does the computation 10^(250)*10^(250) = 10^(500) work?")
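(As an aside, that parenthetical arithmetic is trivially checkable: Python's arbitrary-precision integers handle numbers of that size directly, so "multiplying powers adds exponents" can be verified rather than taken on faith.)

```python
# Multiplying powers of the same base adds the exponents: 10^250 * 10^250 = 10^500.
a = 10 ** 250
assert a * a == 10 ** 500
print("checks out")
```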


I agree with you. I think the need for philosophical justification stems from our desire to reconcile quantum computing with our classical computing/physics models. But just because we desire such a thing doesn't mean there exists a reasonable one.

What I think the truth really is, is that we don't encounter quantum superpositions in an obvious way in our day to day lives. So it's inherently "weird" for us, especially since it seems to violate our notion of object permanence. But that doesn't mean it's wrong, it just means that one of our evolved instincts isn't applicable on a quantum scale.


> But that doesn't mean it's wrong, it just means that one of our evolved instincts isn't applicable on a quantum scale.

I think that this puts it very well. If a theory doesn't fit with our intuition, then it might mean that the theory is wrong; but it's at least plausible that our intuition is wrong, and goes from plausible to overwhelmingly likely once the theory is backed up by experimental data. (Anyone, even or especially a scientist, who dogmatically trusts his or her intuition over science probably could do with a quick refresher in cognitive biases.)


Math is used to explain the universe. It isn't an explanation in and of itself.

We never progressed anywhere by saying "something just works" and leaving it at that. Challenge yourself.


Seems like it's the opposite to me. We never progressed anywhere with "explanations." Progress comes when those explanations are condensed to testable mathematics.

I'm curious as to what you think constitutes "an explanation" that's more useful than mathematics. Let's take a simple classical phenomenon to keep it easy, such as kinetic energy. The mathematical statement is E_k = 1/2 m v^2. What non-math explanation works better?
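(To make the point concrete, the mathematical statement cashes out directly into a prediction; here's a minimal sketch in Python, with the function name being mine, not standard:)

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Classical kinetic energy, E_k = 1/2 * m * v^2 (valid for v << c)."""
    return 0.5 * mass_kg * speed_m_s ** 2

# A 2 kg mass moving at 3 m/s carries 9 joules of kinetic energy.
print(kinetic_energy(2.0, 3.0))  # 9.0
```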


Maths are models of reality which may or may not be correct and can only be verified by comparing them to reality. While they serve as the simplest way to explain reality, don't confuse them with reality. The map is not the territory.


What prompts this reply? I'm not saying that math is reality, merely that solid mathematical models have been the foundation of progress in science, not "explanations."


I just don't agree with what you're saying: progress comes from understanding reality via some intuition, and understanding and intuition lead to the development of testable models. Testable mathematical models are the result of progress in science, not the foundation of it.


> Testable mathematical models are the result of progress in science, not the foundation of it.

I think that the information can flow either way. For example, the existence of anti-matter, and portions of quantum mechanics (and, let's say, much of Witten's scientific work—though here I am tossing out a bit of folk knowledge, and couldn't point to anything in particular), are examples of letting the math guide the physics.


The math certainly points at places to look for physics, so yes, I agree to a point. But the math itself didn't just happen: it came from a mathematician using his intuition and understanding of physics to explore ideas that might be true, with most of the possible maths not leading to possible physics. Math is just a language to communicate ideas accurately; those ideas, however, came from someone's intuition about what should be. The math is the result of a process, not the creator of it. The model the math lays out could be right or could be wrong, and as such simply points out possible places for experimental physicists to go searching. The math didn't come first: the intuition came first, the mathematical model second, and the physical proof last.


> the math … came from a mathematician using his intuition and understanding of physics to explore possible ideas that might be true with most of the possible maths not leading to possible physics.

This is famously not true for the matrix mechanics in Heisenberg's picture of quantum mechanics and the Riemannian geometry underlying relativity, both of which were created without (so far as I, and history, know) any intended or perceived connection to physics.

On the other hand, I guess you could argue that that math was irrelevant to physics until Heisenberg and Einstein came along and found it an appropriate expression of their physical intuition.


> I guess you could argue that that math was irrelevant to physics until Heisenberg and Einstein came along and found it an appropriate expression of their physical intuition.

Exactly.


Just the math is not enough, you need some connection to reality. For instance, your formula does not describe reality for large v. This can be tested.


I don't understand David Deutsch's philosophical justification. Is he suggesting that quantum computing steals computational resources from other universes? If so, in this "quantum computing as theft" view, aren't many of those universes (those similar to ours, doing a similar computation at the same time) stealing computational resources from ours at the same time, yielding the original question again?


The "different universes" in the multiverse are still parts of the same wavefunction. Therefore they can interact.

It's just that after decoherence, they will never interact again and will stay separate.


No, nothing stolen. The quantum computer's calculations take place in parallel universes. The difficulty is arranging it so that the parallel universes are identical to each other other than those calculations in the computer itself. That's the only way that this universe can get the answer back. That's my understanding of the many-worlds explanation of quantum computing. Nice and simple.


First, if you can measure it, then it must be in ~physical reality~. If we are directly interacting with it, why even consider it to be in another universe? Is my keyboard also in another universe?

Second, since when is one atom responsible for a computation? In modern electronics no atom is responsible at all for computation, only the electrons are†. Even then, computation is an emergent property, computation itself is abstract and cannot be physical; but it emerges from physical interactions.

† Not even the electrons directly; it's the energy propagated by the electrons! Hence signals in electronics travel near light speed while the drift speed of an electron is well under a millimetre per second.
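(A quick back-of-the-envelope check of that drift speed, using textbook values for copper — carrier density of roughly 8.5e28 electrons per cubic metre, 1 A through a 1 mm² wire; the function name is mine:)

```python
def drift_velocity(current_a, carrier_density_m3, area_m2, charge_c=1.602e-19):
    """Electron drift speed in a wire: v_d = I / (n * A * q)."""
    return current_a / (carrier_density_m3 * area_m2 * charge_c)

v = drift_velocity(1.0, 8.5e28, 1e-6)
print(f"{v * 1e3:.3f} mm/s")  # ≈ 0.073 mm/s — far slower than the signal itself
```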


The coolness of quantum computing is that it harnesses the natural computational abilities of reality itself. So a single atom is doing an amazing number of quantum calculations at any moment. These guys like Deutsch figured out how to harness those calculations.

And the only way to interact with parallel universes is if they are totally identical except what's inside of the quantum computer itself. Deutsch posits that the quantum computer essentially proves many-worlds.


You didn't defend Deutsch's erroneous assumption that atoms are responsible for physical interactions in computation. Even in quantum computing atoms are not responsible for any computation––especially if you have a system like DWave which uses magnetically induced currents.

"The natural computational abilities of reality itself"

Additionally you continue to insist that computation is a real physical thing. Are you claiming that nature is aware of the states we assign it to perform computation?

The way computation works is that we map various physical states with our own semantic abstractions. I can change the computation for addition into one for subtraction without any physical change to the system. So there is no harnessing of "the natural computational abilities of reality itself". We merely observe natural behavior and semantically map it (which is what engineering is).


Take the interference pattern built in the famous double slit experiment: what Deutsch and others recognized is that there's computation happening to create that pattern. And similar computation is present in all quantum effects. Quantum computers in effect harness that natural computational power. I don't fully understand how, but Deutsch proved that you can create a general computing machine from those natural quantum computations. Amazing.


You continue to ignore the weak points of Deutsch's argument that I've pointed out.

You can create a computing machine by assigning physical states to any physical behavior, quantum, classical, hamiltonian, etc. The proof is trivial and it is not a proof he came up with.

It further follows, there is no computation happening during the creation of an interference pattern until we assign it our own abstract semantic states.

This quantum woo is helping neither science nor quantum physics; in fact you are hindering it. Ascribing causal responsibility to non-observable (explicitly unknowable)† entities used to be criticized as dogmatic drivel; now it's embraced as popular science meant only to provoke reaction. Science is about establishing useful models of the world to make physical predictions.

† See common fallacy: https://en.wikipedia.org/wiki/Ontological_argument


Many-worlds has a concrete, physical, simple explanation for quantum effects. The other interpretations are based in the abstract, and are incredibly difficult to understand. That's all I'm saying.


"There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is." -Niels Bohr

All of science is abstract, not even "force" describes something physical. There are many more concrete models like Bohmian that don't rely on unprovable conjecture.


> Many-worlds has a concrete, physical, simple explanation for quantum effects.

So does religion; that doesn't make it true without evidence. Inventing unseen universes is easy, but it's not simple: it fails the Occam's razor test for the same reason gods do.


All computing machines harness the universe's innate ability for computation. Quantum computers just do it better.


What if collapsing quantum wave functions simply eliminates the possible futures within our universe in which the computation is incorrect? In that case it's not exactly a multiple-universe problem, and more an issue of imposing constraints on our own single universe.

I find this view to make much more sense than the multiple universe view. Those other universes would need the infrastructure to support the same computation, and would also need to be somehow linked to our own. Or perhaps I am misunderstanding you, and we are both saying the same thing.


Well the calculations to do this elimination would have to happen somewhere. Deutsch and the many-worlds interpretation says it happens in parallel universes. And what makes quantum computing so difficult is that you have to arrange those parallel universes to be identical in every way except the inside of the computer itself. That's the only way to get an answer back. If there's interference with the outside environment, the universes become different outside of the computer and you can't get an answer back.

I love this stuff.


Not clinging to the single-universe rule, but many-worlds is a big leap, plus it seems weird. Still significantly less of a leap than the single-world narrative of "two particles have some probability to do something > ????!?? > they did this", so I dunno. Can MWI derive the Born rule yet? Either way, neither seems likely.

Thus my hypothesis that the correct number of worlds is actually a fractal.


I'm not "clinging to a single-universe world view", but I've got two questions: 1. Has this algorithm actually been executed somewhere? 2. Does this mean there is infinite storage possibility to store any data you want, or must it be "special" data? And how much time does it take to access any segment of this data?

If the answer is no to both questions, then my answer is: "it doesn't work."


> 1. has this alghoritm actually been executed somewhere?

Yeah, but so far the highest factorization achieved is 21, because we've only built relatively tiny quantum computers.[0]

0. https://en.wikipedia.org/wiki/Shor%27s_algorithm


John Cramer has been pointing this out for years (since the 1980s I think):

"The Transactional Interpretation of Quantum Mechanics and Quantum Nonlocality"

https://arxiv.org/abs/1503.00039

Also, Bell's inequality shows the state vector is nonlocal, and that implies causality violation in relativity.

Third, quantum information theory has retrocausality in the form of negative entropy:

"Negative entropy in quantum information theory"

https://arxiv.org/abs/quant-ph/9610005

Too many things pointing in the same direction. This is the right way to look at it.


What happened to John Cramer?

He hit the popular science media as he was leading up to his modified delayed-choice quantum eraser for retrocausality. According to him it was a supposed paradox, meaning the experiment was going to have to give a result that conflicted with our theories and thus open an avenue for new discovery (i.e. give us a test that doesn't match our model).

It's been a decade and no results were ever discussed. That seems bizarre to me.


This reminds me a lot of the sci-fi novel Anathem by Neal Stephenson. Trying to be as spoiler-free as possible: in the book, the many-worlds interpretation of quantum mechanics is the correct one, and the worlds lie on a directed acyclic graph, meaning that the flow of information can only move in one direction along the universes. So in the same way information can only travel in one direction in time (into the future), information can only travel laterally into other dimensions in one direction. So, in the book, nobody can directly influence the past, but the question of existing in multiple worlds and sending information between them, and thereby potentially exploring many possible outcomes of a situation simultaneously, is much more open.

In other words, retrocausality WITH information transfer is possible when the information is flowing from one universe to another, in a certain direction.


spoilers I think he implies pretty heavily that the Rhetors actually CAN influence the past, through a mechanism very similar to that described in the article - "choosing" which way an event happened in the past. Their powers work at Narrative scale though, not just experimental observation.


There's also the idea of timeless physics, as argued in https://en.wikipedia.org/wiki/Julian_Barbour#Timeless_physic...

or in slightly different form in http://lesswrong.com/lw/qp/timeless_physics

It's apparently possible to rewrite all physical equations without time coordinates and they still work.

I'm not competent enough to understand all of it and to check if it's not contradicting any experiments, but I'd love to hear if it does.

It seems particularly elegant (because if the state of the universe can't repeat, then why do we need time?).


I never fully understood the Bell experiments. How exactly do they discount hidden variables?


They don't. Here is what Bell said:

"There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the ‘decision’ by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already ‘knows’ what that measurement, and its outcome, will be."


OK, and barring that?


I could be mixing things here, but aren't the three options losing one of locality, counterfactual definiteness, and non-conspiracy?

https://en.wikipedia.org/wiki/Counterfactual_definiteness#Ov...


The way I understood it is from a YouTube video by Looking Glass Universe. I watched it a while ago, but if I remember correctly: if there were hidden variables, the distribution of outcomes of the experiment would look different from what it actually looks like. Sorry, it's a half-assed explanation :)


> But by allowing for the possibility that the measurement setting for one particle can retrocausally influence the behavior of the other particle, there is no need for action-at-a-distance—only retrocausal influence.

I don't quite get it. According to GR, non-locality and retrocausality are equivalent. So how is retrocausal QM different from Bohmian mechanics/pilot wave theories?


> So how is retrocausal QM different from Bohmian mechanics

From what I can tell from this, there's no difference in the sense that it's experimentally indistinguishable.

It's different in the sense that Bohmian mechanics relies on a non-local pilot wave to match QM results. This frees the other assumption i.e. causality to get to the same answer.

TBH I can't see either getting much attention until someone comes up with a way to make them produce different results compared to standard QM approaches. Either that, or someone finds a way to make QM calculations substantially easier using these approaches. In other words, if they produce new physics or have practical value.


I agree that retrocausality does not look at all surprising knowing GR and the long-established non-locality of QM. (Whether this is equivalent to what the pilot-wave interpretation has to say about the character of QM may require a complicated proof.)


// WARN: because past is static, to support retrocausal influence, we're spawning alternative universes which are joined on retrocausal action choice - this slows down The Simulation and, if abused, can exhaust resources and crash.


don't remind me that people actually believe that you can infer the existence of a simulation only with data points that are created and controlled by it.


There doesn't seem to be anything wrong with this basic idea. As a (bad) analogy, think about how computer code can be written to detect whether or not it is running inside a virtual machine or on a "real" device. It's usually almost impossible to tell but by probing at the extremes and forcing the system to do things that never happen under common usage, it's possible. (It's also possible to break out of the system entirely and gain access to the host OS sometimes, but perhaps that's taking things too far :)


> It's usually almost impossible to tell but by probing at the extremes and forcing the system to do things that never happen under common usage, it's possible

Yes, but it's only possible by comparing the results to those same results when that action is performed outside of a VM.

If you can never baseline your data outside the VM, then you'll never know you're seeing odd results. You'll just think "that's the way the world works".


The difficulty is that we already know something about how computers work. What's to say that time, space, logic, reason, mathematics are even things in the "base" reality?


Simulation part was a joke.

But the static, unchangeable past is interesting, because the assumption that the past is static and can't change is not crazy, and if retrocausal influence exists, it could make the Many Worlds interpretation more plausible: there must be at least 2 different universes with static pasts if you can choose, at present, between past A and past B.

The joke is that branching universes create a large overhead (n unmerged binary branches mean 2^n universes to keep track of) that can crash the simulation.
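(A toy sketch of that blow-up — each unmerged binary branch doubles the set of histories to track:)

```python
from itertools import product

def universes(n):
    """Enumerate every history of n unmerged binary branches: 2**n tuples."""
    return list(product((0, 1), repeat=n))

print(len(universes(10)))  # 1024 histories from just 10 branches
```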


Isn't this another blanket theory that magically explains a ton of 'spooky' stuff without actually proving it? Just asking from a common person's point of view. I'm a practical person; I need tangible facts...


Time to start scanning for tachyon radiation


This would allow us to predict the future and potentially allow for time travel as well. Seems unlikely from the implications alone, but let's see what peer review of the details will tell us.


> This would allow us to predict the future and potentially allow for time travel as well.

It doesn't actually, they discuss this in the article. The laws of thermodynamics impose an arrow of time via boundary conditions, and something similar would apply here.


Purely as an amateur, it seems inelegant to have something that violates causality as a mechanism but doesn't give you a way to violate cause and effect at a higher level. It's like the universe is conspiring to keep its awesome tricks from you.


> It's like the universe is conspiring to keep its awesome tricks from you.

Well that's just inevitable. You're never going to see a naked singularity, for instance.


So, the universe is like Haskell ;) Purely functional API, but under all of that there's still imperative machine code.

Also, the conspiracy may be a good thing. If you have self-replicating and mutating code on your computer, you should put some sandbox around it, or it might evolve to buy more RAM on eBay. Or, more realistically, will crash the system.


True. I also like the implication that we have another proof against T-symmetry.


Entanglement would too, if it allowed superluminal communication in the classical sense. But, it doesn't- you can't use entanglement to send information, even if there is a spooky interaction at a distance.

Retrocausality would probably work in the same way.


good comparison, sure sounds like entanglement in parts


I think the idea is to use retrocausality to replace entanglement. No spooky action at a distance- it's just that when you observe one of the pair, it retroactively "picks" the state it always was, which retroactively sets the state of its partner back when they were first entangled.


Unless I am misunderstanding, the goal is only to produce a new interpretation of quantum mechanics. That is, the predicted physics do not change, only the formulation of the theory. Therefore, anything possible under this formulation is also possible under more traditional formulations of quantum mechanics (e.g., the many-worlds interpretation).


sounds like an ad-hoc hypothesis


C'mon guys, really. First we come up with all this mysterious dark stuff, and now this... sigh



