I must admit I struggle with the MWI Born rule derivations based on rational credence. I don't see why proving that one ought to assign credence in such and such a way is sufficient to prove that that's the way nature is. It feels too much like deriving an "is" from an "ought", although in a slightly different way than what Hume objected to!
> While it has been claimed that Born's law can be derived from the many-worlds interpretation, the existing proofs have been criticized as circular.
So it seems you aren't alone in feeling this way.
Edit: Forgot the link - https://en.wikipedia.org/wiki/Born_rule
The latter article is more recent at 2005 but as far as I can tell Carroll's self-locating uncertainty ideas weren't introduced until around 2014.
The TL;DR is that the critics are correct. Deriving the Born rule begs the question because it makes an unjustified assumption (branching indifference) and also introduces an "invisible pink unicorn", i.e. a concept that, according to the theory, has physical significance but cannot be measured. In the case of MWI that concept is branch weights.
[UPDATE] The critique I wrote is based on Wallace (https://arxiv.org/abs/0906.2718). Carroll's argument appears to be somewhat different. I'm just now working my way through his paper (https://arxiv.org/abs/1405.7577) but I'd be very surprised if it did not also have some untenable assumption hidden in there somewhere.
I have previously come across Deutsch's formulation in terms of information flow, along with what seemed like a very strong criticism from Wallace and Christopher Timpson that his model was not gauge-invariant.
When scientists say Many Worlds, do they actually mean worlds as physical 'parallel universes' that pop into existence?
Or do they only mean that those are possible outcomes of our measurement (possible histories) that never actually happened, and only one of them did: the one we end up in?
I think in general people accept that there can be a wave function for a cat with the states alive and dead. Cats are much more complicated than that: presumably they have memory, and inside that memory can be things like the result of an electron spin measurement experiment. One state of the wavefunction of the cat might have the memory that the electron was measured spin up. Another state of the cat might have the memory that it was measured spin down.
The scenario above is a cat watching an electron spin measurement. Afterwards, the electron does not collapse into a single state upon being observed. Instead, it is still in two states, but they are correlated with the memory of the cat (or entangled with the cat).
The belief, at least as I see it and I assume others believe this too, is that the cat has a consciousness for each state of the wave function. So there is a "consciousness" or "cat" that thinks the electron was measured as spin up, and another that thinks it was measured as spin down. This is where the term many worlds comes from: the fact that there are two "consciousnesses" (well, many consciousnesses). I can see why people would think that part is weird. But how else should it work?
I guess it all comes down to what is the experience of a person (or cat) having a wave function and being in multiple states at the same time (just like all other objects in quantum mechanics). We are not external observers to the world; we are a part of it, and we have a wave function too. Or, it would be more correct to say we are a part of the wave function of the system. There is not a separate wave function for each thing. There is just one wave function.
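A toy numpy sketch of this (my own illustration, with a two-state "memory" standing in for the cat): after the interaction the joint electron + memory state is entangled, so it cannot be factored into a separate electron wavefunction and a cat wavefunction.

```python
import numpy as np

# Basis states: electron spin and the cat's memory (2 states each).
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
saw_up, saw_down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Before the measurement: electron in superposition, memory in a "ready"
# state (here just saw_up as a stand-in). This is a product state.
electron = (up + down) / np.sqrt(2)
before = np.kron(electron, saw_up)

# After the interaction: each spin outcome is correlated with the matching
# memory. There is no separate electron or cat wavefunction anymore,
# only this one joint state.
after = (np.kron(up, saw_up) + np.kron(down, saw_down)) / np.sqrt(2)

# A product state's 2x2 amplitude matrix has rank 1; the entangled state's
# has rank 2, so it cannot be written as kron(a, b) for any a, b.
print(np.linalg.matrix_rank(before.reshape(2, 2)))  # 1
print(np.linalg.matrix_rank(after.reshape(2, 2)))   # 2
```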
How exactly would they interfere?
In textbook/Copenhagen interpretation, we say that when the electron passes through the slits the wavefunction has not yet been measured. But with the detector in place, you collapse the wavefunction to an eigenvector of position. By the uncertainty principle, the momentum uncertainty is now very high, so the electron shoots off in a random direction and cannot be expected to follow any pattern.
In MWI, we start with the interference pattern. But adding the detector does not "collapse" anything; instead it introduces lots of degrees of freedom. The particle passes through both the left and right slits, but those degrees of freedom ensure there is no fixed relative phase between the two possibilities. The detector turns the waves into "static noise", and so any interference pattern is lost. Both options are realized, but neither can detect the other: distinct worlds.
The key point is that MWI has no special role for measurement. The left-slit and right-slit worlds interfere, but with the detector in place the pattern is destroyed: the interference is uncoordinated and averages to zero, so undetectable. Whereas in textbook/Copenhagen QM we say that the measurement results in a single outcome.
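A toy calculation of the difference (my own sketch; the phases are made-up stand-ins for real path-length differences): with no detector the two slit amplitudes add before squaring and you get fringes, while with a which-path detector each path is tagged by an orthogonal detector state, the cross term averages to zero, and probabilities add instead.

```python
import numpy as np

# Toy double slit: amplitude at screen position x is the sum of the
# left-slit and right-slit paths.
x = np.linspace(-10, 10, 1001)
amp_left = np.exp(1j * x) / np.sqrt(2)
amp_right = np.exp(-1j * x) / np.sqrt(2)

# No detector: the paths stay coherent, so amplitudes add BEFORE squaring.
p_coherent = np.abs(amp_left + amp_right) ** 2      # fringes: 2*cos(x)**2

# Which-path detector: the cross term washes out, probabilities add.
p_decohered = np.abs(amp_left) ** 2 + np.abs(amp_right) ** 2  # flat

print(p_coherent.max(), p_coherent.min())    # fringes: ~2.0 down to ~0.0
print(p_decohered.max(), p_decohered.min())  # flat: both ~1.0
```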
I had a discussion with someone on HN about this a while ago and realised that "world" or "split" isn't necessarily synonymous with "superposition". Rather, I believe that a split occurs when the superposition entangles with the environment and causes sufficient decoherence that there is negligible probability of (measurable) interference.
In principle you could imagine a thought experiment where you had a super-powered quantum machine which could finely control the quantum state of a large, isolated room. In that situation you could imagine someone in the room conducting an electron spin measurement and looking at the outcome, before the machine enacts a reversal of the room's quantum wave-function, thus causing the two copies of the person to interfere. If we ever reach that level of technology, it will be fascinating to see how the interpretation debate progresses.
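A miniature two-qubit version of that thought experiment (my own sketch, with a CNOT standing in for "the observer looks at the spin"): the measurement-like interaction splits the state into two entangled branches, and applying the inverse unitary makes the branches interfere back into the original state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # puts the spin in superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # copies spin into the memory
I = np.eye(2)

psi0 = np.kron([1.0, 0.0], [1.0, 0.0])  # |spin=0, memory=0>
U = CNOT @ np.kron(H, I)                # prepare superposition, then "observe"
psi_split = U @ psi0                    # (|00> + |11>)/sqrt(2): two branches
psi_back = U.conj().T @ psi_split       # the machine reverses the room

print(np.round(psi_split, 3))  # two entangled "copies" of the observer
print(np.round(psi_back, 3))   # back to |00>: the branches interfered away
```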
Backing up, the problem is that you have these different approaches—depending on how you count them you could group them broadly into three or four, or as many as a dozen or two—that are all mathematically identical. They all predict the Born rule, but suggest different ways that Nature really, fundamentally, would act to produce that rule.
Since they are mathematically identical, it is provable that there is no way to choose between them. As a result, the one which stays the most out of people's way while enabling experimental results has better “genes” for its own reproduction in the publication of papers. And that has just been the Copenhagen interpretation: you the experimentalist have a soul, and when that soul measures the world the nice unitary evolution of the world comes crashing down with probabilities given by the Born rule. Contrast with pilot-wave theories, where you have to work out a whole separate equation that makes no further observational impact.
The basic issue that we are facing is that while the notion of souls seems laughable for fashionable sciencey people, it also seems in some distressing way inevitable. You take Many Worlds for instance, you admit the reality of every single possibility of the entire universe as a much broader multiverse. An equation, the Schrödinger equation, essentially works over, say, a Planck time to create a vector field on top of this, saying “This instant is followed by that instant is followed by that instant.”
In the middle of that, what uniquely qualifies my experience here and now as I know such experience must exist? I do not perceive a multiverse; I perceive a changing universe. And MWI says “well actually there are a million yous, frozen in time, all perceiving changing universes. Your experience of motion through time is actually kind of a lie.” This is not a unique problem to QM; it happened much earlier with special relativity where we discovered that you are actually a rope of worldlines thrusting through a static four-dimensional Lorentzian manifold, every part of that rope being presumably in a separate conscious state, perceiving itself as moving through time but we “on the outside” can see that there is no unique present defined such that it can wash over all of the ropes simultaneously. MWI just happens to facilitate the same basic “unrolling of time” because it has already unrolled all of possibility-space. And to fix it you need something—I’m calling it a soul, you can get fancy—which “zips along the worldline” and contains my conscious experience, or you need to argue that my experience is an illusion, or you need some “universe-soul” to act like a coherent “present moment” for all of us, or the like. It all kind of sucks.
There is a nice perspective sitting in the middle of this due to Andreas O. Tell , and it is pleasantly agnostic while still doing something like what MWI is doing to try and derive the Born rule from normal wavefunction evolution. In brief, he says: “use the state-matrix formalism for QM, and take a completely agnostic view of what the cosmos is and how it behaves. Still if you have a local information-processing system which is embedded in the cosmos and changing, then it receives information and must update its model of the universe. Its model of the universe must necessarily come down to a list of wavefunctions with a list of weightings, but there is a freedom-of-perspective which allows you to choose the wavefunction with the highest weight as “the” one that you think the system is in. New data just forces these weights to cross in size, causing the Born rule when you try to determine whether those weights will cross over and the system will be in the new state.”
In some sense, then, we can live in a very Copenhageny world where we are changing data-processors uniquely present in some space and time, and we can use Schrödinger evolution to derive the Born rule just as the many-worlds interpretation does; but rather than committing to its plethora of different universes, we might be able to remain non-committal about what is in the rest of the universe beyond what I see in it.
I actually believe that this idea has been rediscovered in its essence at least half a dozen times, going back as early as (at least) the late 90s:
I think it's arguable that even Everett himself held this view, and there is some evidence that Schrödinger held it as well but didn't have the courage to admit it (because he didn't have the benefit of the Aspect experiment to support it).
> Undetectable to the observer, different alternate realities can fight for becoming the dominant one, at least over a short period of time. This effect appears to be highly unsettling and not really greatly preferable to the world-splitting in the Everett interpretation
Tell himself is indeed not working in the field anymore. This preprint was submitted to at least one journal, as I understand, but was not accepted before the grant ran out and he could no longer keep pushing for publication; instead he went into acoustic signal processing with a friend, and they started a company called SoundTheory. Something about maximizing the information “punch” of music to your brain, which makes it sound better or reduces background noise or something.
The paper is still interesting on its own merits though. I mean, it’s interesting to me; of course your mileage may vary.
And, what is the view of the cosmos that is not difficult? Would that be some instance of wavefunction "collapse"?
There's a great paper by van Kampen that points out that what we call a measurement and collapse of the wave function is entangling a quantum state with an irreversible statistical process. Measurement and wave function collapse are limiting cases of that process that are useful approximations when doing calculations.
: http://www.johnboccio.com/research/quantum/notes/vankampen.p... (a version sans paywall!)
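A numerical sketch of that idea (my own toy model, not van Kampen's): couple a superposed system to a large, effectively random environment and trace the environment out. The off-diagonal "coherence" terms of the system's reduced density matrix are suppressed by the overlap of the environment states, which shrinks as the environment grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_env = 64  # environment dimension; "irreversible" because it is large

# The interaction correlates each system state with a different random
# (and therefore nearly orthogonal) environment state.
e0 = rng.normal(size=n_env) + 1j * rng.normal(size=n_env)
e1 = rng.normal(size=n_env) + 1j * rng.normal(size=n_env)
e0 /= np.linalg.norm(e0)
e1 /= np.linalg.norm(e1)

# System starts as (|0> + |1>)/sqrt(2), then entangles with the environment.
psi = (np.kron([1, 0], e0) + np.kron([0, 1], e1)) / np.sqrt(2)

# Reduced density matrix of the system: trace out the environment.
rho_full = np.outer(psi, psi.conj())
rho_sys = rho_full.reshape(2, n_env, 2, n_env).trace(axis1=1, axis2=3)

# Diagonals stay at 0.5; off-diagonals are suppressed by <e1|e0>.
print(np.round(np.abs(rho_sys), 3))
```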
The goal is to derive emergent spacetime and gravity from quantum mechanics.
Some features of their theory:
* Finite-dimensional Hilbert space. Quantum field theory gets the boot.
* Spacetime is entangled degrees of freedom, arranged in such a way that a semi-classical spacetime geometry emerges. Things are local because they are entangled, not the other way around.
* Spacetime expands because initially unentangled degrees of freedom become entangled with the rest of the universe.
Does frequentism really require actually performing the experiment? Or is imagining doing the experiment good enough? I would say that

»Candidate X will win the next election with a probability of Y percent.«

really means

»The following sets of states and possible evolutions of those states are compatible with my knowledge about the world, and in Y percent of the cases candidate X wins the next election.«
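That reading can be made concrete as a Monte Carlo sketch: sample evolutions compatible with what we know and count the fraction in which X wins. The voter model below is entirely hypothetical, just enough structure to make "Y percent of the cases" computable.

```python
import random

def election_outcome(rng, base_votes_x=480, base_votes_y=470, undecided=100):
    """One imagined evolution: each undecided voter breaks for X with p=0.5."""
    votes_x = base_votes_x + sum(rng.random() < 0.5 for _ in range(undecided))
    total = base_votes_x + base_votes_y + undecided
    return votes_x * 2 > total  # does X get a majority?

rng = random.Random(0)
n = 10_000
p_x_wins = sum(election_outcome(rng) for _ in range(n)) / n

# The "probability" is just the frequency over imagined, never-performed runs.
print(f"Candidate X wins in {p_x_wins:.0%} of the imagined evolutions")
```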
What about the Planck distance then? What’s that all about?
It seems to me that on a microscopic level and small time scales, a small change in input will lead to a small change in output.
This is certainly true in classical mechanics, but what about quantum mechanics? Are the quanta the result of a continuous process? Can a subatomic particle wind up on Mars, exceeding the speed of light, with a certain probability?
HERE is what bothers me. The instability of certain physical problems (small change in input leads to large changes in output, like where a pencil is going to fall if stood on its tip). How can this happen if the composition of continuous functions is continuous???
In mathematics we have abstractions such as real numbers and infinite sequences of functions that can converge to discontinuous and even really weird functions in the limit.
But in the real world it seems that we have some sort of minimum, like the Planck distance, or simple measurement error, that precludes us from reversing a process after a certain point. Maybe THAT is where unstable problems on the macro scale come from?
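A back-of-the-envelope version of the pencil example (linearized dynamics; the numbers are illustrative): the evolution is continuous at every instant, but the tilt grows like cosh(t/τ), so two initial conditions closer together than any conceivable measurement still diverge to macroscopically different states. Instability doesn't contradict continuity; it just means the amplification factor blows up with time, and any minimum resolution then makes prediction fail.

```python
import math

# Pencil on its tip, linearized: theta(t) = theta0 * cosh(t / tau),
# with tau = sqrt(L / g) for a pencil of height L.
g, L = 9.81, 0.1
tau = math.sqrt(L / g)

def tilt(theta0, t):
    return theta0 * math.cosh(t / tau)

theta_a = 1e-35  # two initial tilts (radians) differing by far less
theta_b = 2e-35  # than anything measurable

t = 0.0
while tilt(theta_b, t) < 0.1:  # run until the larger tilt is macroscopic
    t += 0.01

print(f"after {t:.2f} s: {tilt(theta_a, t):.3f} vs {tilt(theta_b, t):.3f} rad")
```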
Pilot wave theory seems to say that everything is deterministic and the uncertainty in quantum mechanics comes from us being unable to observe the process that leads to the result. But pilot wave theory requires us to abandon the idea of locality, which to me is a special case of continuity.
Anyway can someone please explain this to me? As it regards quantum mechanics? Leslie Lamport’s paper caused a big watershed moment for me and I’m still reeling from it:
Quantum physics has inspired so much work in other fields! Check out this guy's work for examples: https://scholar.google.com/citations?user=wdhkzPMAAAAJ&hl=en
I don't agree with Khrennikov's interpretation of quantum mechanics (he's a realist whereas I tend to appreciate the more "mystical" feeling interpretations of quantum mechanics), but he and others' work on the connections between quantum physics and classical probability theory, as well as on non-physics applications of quantum theoretic tools, is crazy thought provoking.
The reason for this is that if we take the most improbable outcome of a given wave function and say “this branch occurs once”, we are immediately contradicted, as the next most improbable outcome is virtually certainly in a non-integer ratio to the first. So we give the wave function numerical factoring / self-resolving capabilities, and instead the first and second most improbable branches each occur the number of times necessary to keep the counts whole integers in the correct relative ratio. But that only resolves two of the possible outcomes, and so with the third most improbable outcome, almost certainly not in an integer ratio to the first or second, we must repeat the step of multiplying the branch counts to maintain consistent integer ratios. As you can see, following this up through all the possible branch outcomes, to express their probabilities as whole-integer counts of quantum outcomes, you essentially have to engage in a massive computation of finding common denominators all the way up. Further, even the rarer outcomes will require an incomprehensible number of duplicate branches, and the most probable outcomes will have an even more innumerable count of duplicate branches still.
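The common-denominator blowup can be made concrete for the special case of rational probabilities (my own sketch, with made-up numbers): the total branch count has to be a common multiple of all the denominators.

```python
from fractions import Fraction
from math import lcm

# Hypothetical rational outcome probabilities (must sum to 1).
probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 7), Fraction(1, 42)]
assert sum(probs) == 1

# Each outcome must occur a whole number of times with the right ratios,
# so the total branch count is the LCM of the denominators.
total = lcm(*(p.denominator for p in probs))
counts = [int(p * total) for p in probs]
print(total, counts)  # 42 [21, 14, 6, 1]
```

And for generic amplitudes the Born probabilities are irrational, so no finite common multiple exists at all, which (as I read it) is the point above.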
The only way I can see to escape this madness with MWI seems to be give up on the notion of truly separate branches, and instead treat these “many worlds” as a stream of overlapping world-ish-nesses in which discrete outcomes don’t actually even exist, but then you have seeming contradictions in observable discreteness and it’s not clear it’s truly even MWI anymore.
Disclosure: I’m not a physicist, and it's quite plausible that I don’t know what I’m talking about.
That's what it is. A measurement is coupling a quantum event to a statistically irreversible process. The total wave function that results has two major lobes. There's no split on measurement. That's why it's appealing: it makes no reference to classical mechanics in the formulation.
That's not particularly a criticism nor a compliment, btw.