That's not right. The Many-Worlds Interpretation just holds that the wavefunction in the Schrödinger equation (or its generalizations) is real, and it rejects the idea that a separate process somehow collapses the wavefunction to the single "branch" we perceive ourselves to be in. At the metaphysical level there's no splitting involved, and no ontological measurements either (much less a "we" to do the measuring).
In the book, Wallace quotes an exchange between Paul Davies and David Deutsch. Davies: "So the parallel universes are cheap on assumptions but expensive on universes?" Deutsch: "Exactly right. In physics we always try to make things cheap on assumptions." Wallace also re-quotes "I do not know how to refute an incredulous stare."
If you start with what can be directly observed or experienced, then that's phenomenology.
Those are both valid and interesting ways of trying to understand reality, and I believe they are ultimately fully compatible with each other. But I think you are trying to hold physics to phenomenology standards in a way that doesn't make sense. No one thinks that Newton's theory of gravity "lost contact with empirical reality" because it doesn't have measurements or people as ontological components.
Anyway, that's all beside the point I was trying to make, which was just that the description of the MWI in the article was plain wrong.
You can formalise the mathematics of it using the concept of quantum decoherence.
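To make that concrete, here's a minimal toy sketch of decoherence (my own illustration, not from any particular source): a single qubit coupled to a one-qubit "environment". The off-diagonal terms of the system's reduced density matrix are suppressed by the overlap of the environment states, which is the basic mechanism the decoherence formalism captures.

```python
import math

def reduced_density_matrix(theta):
    """System qubit starts in (|0> + |1>)/sqrt(2); the interaction tags each
    branch with an environment state: |0>|e0> and |1>|e1>, where
    <e0|e1> = cos(theta). Tracing out the environment leaves the system's
    2x2 reduced density matrix."""
    a = [1 / math.sqrt(2), 1 / math.sqrt(2)]      # system amplitudes
    e = [[1.0, 0.0],                              # |e0>
         [math.cos(theta), math.sin(theta)]]      # |e1>
    # Joint amplitudes psi[s][k] = a[s] * e[s][k]
    psi = [[a[s] * e[s][k] for k in range(2)] for s in range(2)]
    # rho[s][t] = sum_k psi[s][k] * psi[t][k]  (amplitudes are real here)
    return [[sum(psi[s][k] * psi[t][k] for k in range(2))
             for t in range(2)] for s in range(2)]

# No interaction (environment states identical): coherence survives,
# off-diagonal term is 0.5.
rho_coherent = reduced_density_matrix(0.0)

# Environment states orthogonal: off-diagonal terms vanish -- the
# branches can no longer interfere.
rho_decohered = reduced_density_matrix(math.pi / 2)
```

The point of the sketch is just that nothing "collapses": unitary evolution of system plus environment is enough to make the interference terms unobservable at the system level.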
MWI handwaves that problem away without really explaining it. "It's random but subject to the Born Rule" is a description of what's already observed, not an explanation with predictive power for new and distinct observations.
There are much more complex criticisms that use words like "ontic" and "epistemic", but that's the fundamental problem that MWI claims to solve but doesn't.
How does MWI imply that we should simultaneously experience multiple realities? If two different states of you experience two different realities, each state of you is only aware of one reality.
MWI largely reduces to something like the Copenhagen interpretation for large systems. It's just that MWI explains the transition between the quantum and classical regime, doesn't require any ad hoc rules about observers, and doesn't need Schrödinger's equation to be violated.
The claim was that inaccessible worlds are inherently incompatible with our ability to empirically investigate or falsify them (via experience).
MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith, not physical science. That's fine, but it's still not science.
I don't think collapse is real either. I think it's a phenomenological byproduct of consciousness requiring particularity to model the world it perceives. You've already alluded to that being the case ('we can only experience our worldline').
If you knew the metaphysics behind that particularity well enough, you'd also know that it leaves no ground for and has no need of the existence of a physical material reality to begin with. As such, materialist physics has already made an epistemic leap which inevitably leads to mistaken intuitions about the most reasonable ways to interpret empirical phenomena.
That's not what the claim was. The original claim was that MWI is "incompatible with our experience of reality." TheOtherHobbes then said that the incompatibility is that we experience only one reality, instead of many realities.
> MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith, not physical science. That's fine, but it's still not science.
It's not religion to point out that QM explains the phenomenon of wavefunction collapse without any additional postulates (through decoherence). That's all that MWI says.
Isn't that just a claim about the sensitivity of the detector?
I'm sure folks were dubious about the theory of the electromagnetic spectrum given their bias to visible light, but as experiments improved and detectors with them it doesn't seem so farcical.
Because we experience the world through our senses. Everything else is one mathematical model or another that we’ve created. And our models aren’t even consistent!
> A very painful point about the Copenhagen interpretation is this distinction between 'normal' time evolution and the special procedure when a 'measurement' is carried out.
This is not a special procedure. Measurement occurs whenever physical interactions take place. To measure a particle, we bounce another particle off of it and then try to detect the result. The measurement is the particle collision, not the detection. It’s like playing billiards in the dark. We don’t know where the balls are.
Taking 'a mathematical construct as more real than basic empirical experiences' is basically the history of physics and it has in the past been highly successful.
Except for all of the times when it broke down. When one model was found to contradict our experiments and we had to replace it with another, which later turned out to be wrong as well. Perhaps the most embarrassing example of this in human history is all of our attempts to make geocentric models work.
The most well-known critique of science's institutional habit of inventing new models whenever old ones broke down is probably Kuhn's paradigms. If you're interested, you're better off reading Kuhn than anything I have to write here. I think the best evidence for Kuhn's thesis is the abject disappointment we witness every time particle physicists fail to overturn the standard model. If that's not supremacy of measurement, then I don't know what is.
When two particles bounce off each other, there is no collapse according to traditional Copenhagen; instead the wave function just evolves according to the Schrödinger equation. Even worse, when that particle (say, a photon) then travels to the measurement device to interact with the particles that make up that machine, the evolution is similarly governed by the SE. Somehow, though, at some point nature decides that a measurement has taken place and collapses the wave function. What dictates where that happens? Honestly, this way of thinking about it makes no sense to me. The MWI is, to my mind, the simplest explanation for all of this mayhem.
This is more clear in Everett's original formulation. Everett didn't speak of “many worlds” but of a difference between relative and absolute truth. You measure a spin-½ particle along some axis, it is only “relatively” true that you saw what you saw, say ↑, and there is also a relative truth that you saw ↓. But because these relative truths are exhaustive there is also an absolute truth that you have the deluded belief that you saw either one or the other; that truth holds in both relative truths. The “lost contact” objection is precisely that in Everett's theory this belief is ultimately delusional (there is nothing like collapse to which they correspond, it is just formally incorrect to confuse your relative truths with the absolute truths) and we are led by the theory to delusions like this “with high probability” (scare quotes because Everett realized more so than his successors that eliminating collapse also untethers the theory from probability in a deep way). It is just accepted that these deeply practical things like “my existence as an observer” and “my tool’s measurements” are ultimately based on a sort of illusion which has no correspondence to the true reality; this is buried in a mathematical technicality in his thesis (he notes that he is not looking for an “isomorphism” between experience and the external world, but a “homomorphism”), but it is kind of the crux of the whole enterprise. “It’s fine if I predict things which are not observed, so long as I also predict the things that are observed and I predict that you will be very opinionated about not observing the things predicted but not observed,” if you will, is the homomorphic approach to QM that Everett advocates.
In that respect there is something very different from “the history of physics.” Physics does in some cases say that certain things are illusions. And those statements track very closely with MWI. For example a rainbow appears to be a thing out in the world, but modern physics is very happy to say that the phenomenon cannot be correctly located as an object inside the cloud in which it is perceived, because it turns out the cloud is “rainbowing” in different ways in many different directions and you are only getting part of the story. You see part of the light suggesting a reconstruction of a physical object, but if we look at all of the light we realize that the different reconstructions are not reconcilable because they all have to occur at a 41° angle from the rays of the Sun and real objects have varying angles.
You can see a lot of similar ground trod here. Both claims of illusion appeal to the act of trying to reconstruct the external world from observed experience. Both then appeal to looking at all observations taken together. But there is a difference in scope. In the rainbow case there is a counterfactual, a “what you would have seen,” that can be confirmed with a camera at some other location. We can do a parallax measurement to reconstruct that the rainbow is actually around as far away from you as the Earth is from the Sun or so. But in MWI these sorts of experiments which may be possible are ultimately forever infeasible, I need to have quantum control of every atom of my measurement apparatus if I want to make some similar observation, and even then any particular observation will be consistent with a classical ontology, it's just the pattern of many observations which will suggest non-local correlations which I can interpret as evidence for maybe everything being illusory. One rocks your boat gently, the other is sailing in a hurricane.
Our experience is recovered from this by positing that subjective/phenomenological experience is somehow tied to the individual components of the wavefunction. Since the individual components don't interact with each other, it gives the appearance of branching. This is compatible with our observations.
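Schematically (standard decoherence notation, not tied to any one source), the state after a measurement-like interaction is a sum of components that no longer interfere:

```latex
% After the spin interacts with the observer/environment:
|\Psi\rangle = \alpha\,|{\uparrow}\rangle|E_\uparrow\rangle
             + \beta\,|{\downarrow}\rangle|E_\downarrow\rangle,
\qquad \langle E_\uparrow | E_\downarrow \rangle \approx 0.
% Because the environment states are (nearly) orthogonal, each
% component evolves independently to an excellent approximation;
% an observer state inside one component records only that outcome.
```

The "branches" are just these components; nothing in the dynamics singles one of them out.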
For the second part, there's nothing mysterious going on at all: measurement is simply physical interaction.
> [Giordano Bruno] is known for his cosmological theories, which conceptually extended the then-novel Copernican model. He proposed that the stars were distant suns surrounded by their own planets, and he raised the possibility that these planets might foster life of their own, a philosophical position known as cosmic pluralism.
I need to point out that he was burned at the stake for advocating cosmic pluralism.
But MWI does not make different predictions from other quantum interpretations. No observations can confirm or falsify it.
I need to point out that he wasn't. His heresy trial had nothing to do with his cosmic beliefs. He was convicted for teaching religious beliefs, as a Catholic, that were contrary to Catholic dogma. The church punishment for this was excommunication but the secular punishment was execution, so he got burned.
In chemistry, atoms are also taken as more real than "measurements" and "we". Some chemists would probably insist that "we" are actually composed of atoms.
When you have two theories that make the same predictions, but one is strictly simpler than the other, Occam's razor tells us to prefer the simpler one.
In my mind, MWI or something akin to it, is the way to go, and is generally the way I conceptually think about QM.
Physicists do not directly apply Occam's razor in most circumstances, and we certainly don't do bookkeeping on how many “entities” there are, and your comment illustrates precisely why: how you count is not a given.
Here is something that did happen in classical mechanics: we transitioned from F_i = m a_i to Lagrangians even though they have roughly similar explanatory power. Here is an argument that was not made: “Lagrangians are truer because you don't have to postulate three equations of conservation of momentum and one of conservation of energy, you just have one law of least action.” Nobody even declared a confident end to the tyranny of Newton's third law as Lagrangians no longer need it.
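For reference, the equivalence being alluded to is the textbook one: for a Lagrangian of the form kinetic minus potential energy, the Euler–Lagrange equation reproduces Newton's second law.

```latex
L(x,\dot{x}) = \tfrac{1}{2} m \dot{x}^2 - V(x), \qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x} = 0
\;\Longrightarrow\;
m\ddot{x} = -\frac{\partial V}{\partial x} = F.
```

Both formulations make the same predictions; the choice between them was never argued on entity-counting grounds.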
Furthermore nobody said that classical field theory was “better” per Occam's razor merely because you were no longer bound by the tyranny of the least action principle and could now consider essentially a world in which F_i = m a_i was not universally true, to be replaced with a philosophical interpretation by some bloke Neverett who declares the fields on-shell “typical” and derives the least-action principle as a statement that “if you find yourself in a typical universe then almost surely your retroactive reconstruction of events satisfies the least action principle.”
No, the many-worlds interpretation is thriving precisely because it calls physicists' attention to the importance of decoherence calculations in the understanding of various physical phenomena. It gives you an idea for how to model measurements that are somehow partial, or being continuously performed. Occam doesn't enter into the discussion in the first place.
In other words, because MWI is obtained by removing a postulate from the usual formulation of QM, I think it's fair to say it's simpler. If, instead, MWI had been obtained by formulating all of QM in some other distinct framework with no mention of wave functions, measurements, the Schrödinger equation, etc., and it had one fewer postulate, then yes, I would agree that you can't arbitrarily say that it's simpler.
Is that the case with MWI? Is there a constant amount of information at time t and t+1? Note that I see a fundamental equivalence between information and entropy (of the computational sort), and so an exponential growth of computation required to get from t to t+1 is an inescapable theoretical burden.
To put it a different way, MWI seems to reify possibility. But the state of possibility grows exponentially in time, and so the theoretical entities grow exponentially.
Comparing MWI to collapse interpretations, collapse fares better on this bookkeeping: collapse puts an upper limit on the amount of quantum bookkeeping required, while MWI's bookkeeping cost grows exponentially without bound.
In QM, by contrast, we have entanglement, which essentially means that we can't describe one particle separately from all the other particles (if we could, then QM would be just as "easy" to solve as classical mechanics). Instead of 3n functions of time, we have a single function of 3n variables (plus time). The complexity of these functions does not scale linearly with n (imagine e.g. a Fourier series in one variable vs one for two variables).
So, you're right that QM is an exponentially harder problem to solve compared to classical mechanics, but this is because of entanglement and has nothing to do with Copenhagen vs MWI.
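A back-of-the-envelope sketch of that scaling (illustrative only, using qubits as a discretized stand-in for the continuous case above): a classical n-particle state needs O(n) numbers, while a general entangled n-qubit state needs 2^n amplitudes.

```python
def classical_state_size(n):
    """Real parameters for n classical point particles:
    3 position + 3 momentum coordinates each."""
    return 6 * n

def quantum_state_size(n):
    """Complex amplitudes for a general (possibly entangled) state of
    n qubits: one per basis state of the joint system."""
    return 2 ** n

for n in (1, 10, 50):
    print(n, classical_state_size(n), quantum_state_size(n))
```

At n = 50 the classical description is 300 numbers, while the quantum one is about 10^15 amplitudes; and, as noted, this blow-up is a property of entangled quantum states themselves, not of any particular interpretation.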
Well rather than a single world, I think that the perceived identity exists in multiple highly similar and interacting worlds seen as an entity. Just like we have a size in physical space we also have a non-zero "size" in probability "space".
Some quantum material also on Science Asylum:
Please correct me if I'm wrong, but I thought that hidden-variables theory was considered in scientific literature, and was experimentally disproved.
EPR is the argument that showed any local theory agreeing with the results of quantum experiments must have hidden variables. Bell showed that any hidden variable theory must be non-local. Conclusion: theory is non-local.
Many worlds escapes this by not having definite results of experiments or anything else. Since anything that could happen does happen, there is no need for non-local communication to achieve the (non)results.
Bohmian mechanics is a mathematically consistent theory (uniqueness and existence proven for a wide range of Bohmian systems) and it gives rise to an explanation of the quantum formalism. One can choose to dismiss it because of prejudice, but there is no mathematical or physical reason to do so. Its biggest flaw, being nonlocal and thus philosophically (not actually) incompatible with relativity, is what Bell established as having to be the case for any theory with definite results.
The story with quantum field theory is more questionable, but there is a perfectly fine setup with creation and annihilation of particles, thereby being compatible in spirit with much of QFT. The biggest problem is that technically QFT has problems with having a well-defined wave function. But there have been Bohmian inspired methods of solving that problem, basically using boundary conditions that respect the preservation of probabilities under annihilation and creation.
'Disproved' is a bit strong. Bell's theorem is the tool most used to attempt to rule out hidden-variable solutions. While it's true that experiments thus far support Bell's theorem, they all contain loopholes. It's not even clear whether an entirely loophole-free test can exist; so far no one has been able to come up with one.
I have personally come across two camps of people on this: Those that ignore or discount the loopholes and thus claim that hidden variables have been disproven. Or those that consider the loopholes important, or that the loopholes seemingly being unavoidable is a clue itself.
Warning: each lecture is ~2.5h long and informationally dense so can't be played at 2x speed.