Hacker News
Where Quantum Probability Comes From (quantamagazine.org)
97 points by eaguyhn 10 days ago | 39 comments





Carroll is a big advocate of the Many Worlds Interpretation, so it's nice to see some other interpretations getting a decent hearing in this article as well as a decent treatment of some academic concerns about MWI. That's a testament to his humility as a researcher.

I must admit I struggle with the MWI Born rule derivations based on rational credence. I don't see why proving that one ought to assign credence in such and such a way is sufficient to prove that that's the way nature is. It feels too much like deriving an "is" from an "ought", although in a slightly different way than what Hume objected to!


Wikipedia says

> While it has been claimed that Born's law can be derived from the many-worlds interpretation, the existing proofs have been criticized as circular.

So it seems you aren't alone in feeling this way.

Edit: Forgot the link - https://en.wikipedia.org/wiki/Born_rule


Interesting. Following through the citations, this statement seems to be based on two quite old articles which criticise a different Born rule formulation based on frequency analysis:

https://link.springer.com/article/10.1007%2FBF02058273

https://arxiv.org/pdf/quant-ph/0409144.pdf

The latter article is more recent at 2005 but as far as I can tell Carroll's self-locating uncertainty ideas weren't introduced until around 2014.


Here's a blog post I wrote about this recently:

http://blog.rongarret.info/2019/07/the-trouble-with-many-wor...

The TL;DR is that the critics are correct. Deriving the Born rule begs the question because it makes an unjustified assumption (branching indifference) and also introduces an "invisible pink unicorn", i.e. a concept that, according to the theory, has physical significance but cannot be measured. In the case of MWI that concept is branch weights.

[UPDATE] The critique I wrote is based on Wallace (https://arxiv.org/abs/0906.2718). Carroll's argument appears to be somewhat different. I'm just now working my way through his paper (https://arxiv.org/abs/1405.7577) but I'd be very surprised if it did not also have some untenable assumption hidden in there somewhere.


Adrian Kent has a lengthy and detailed overview of MWIs pre-2009 which may interest you [0]. He discusses Wallace's formulation, contrasts it with other formulations and touches on branching indifference. He also has a more recent (and much more brief) article which refers to the Self-locating Uncertainty arguments [1].

I have previously come across Deutsch's formulation in terms of information flow [2], along with what seemed like a very strong criticism from Wallace and Christopher Timpson [3] that his model was not gauge-invariant.

[0] https://arxiv.org/pdf/0905.0624.pdf

[1] https://arxiv.org/abs/1408.1944

[2] https://arxiv.org/ftp/quant-ph/papers/9906/9906007.pdf

[3] http://users.ox.ac.uk/~bras2317/dhshort2.pdf


Thanks for the pointers. (This is turning into quite the little rabbit hole.)

What do you think of the simplified argument here: https://algassert.com/post/1902 ? Basically: ground the definition of probability in a reversible classical circuit, then use that circuit's quantum behavior to generalize the definition to the quantum case (while assuming as little as possible). In this case the assumptions are "limiting to amplitude 1 must mean limiting to probability 1" and no signalling.

Thanks for bringing that to my attention. My initial reaction is that the argument appears sufficiently cogent to merit respectful consideration. I predict that if someone were to dig into it, they'd find a question-begging assumption hidden somewhere. (The alternative is that this is a major breakthrough in physics, and my Bayesian prior on that is low.) But where that assumption is hiding is not immediately obvious to me. Looking for it seems like a worthwhile exercise.

I don't think the argument is novel, it's just a cut down counting/frequency argument. So any objections to those would presumably port over.
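For what it's worth, here's a toy numerical version of that counting/frequency argument (values invented; note it already weighs branches by squared amplitude, which is exactly the circularity critics allege):

```python
from math import comb

# |psi> = a|0> + b|1>, with Born probability p = |b|^2 of seeing "1".
# In |psi>^(tensor n), sum the squared amplitude carried by branches
# whose observed frequency of "1" lies within 0.05 of p.
p = 0.3
for n in (10, 100, 1000):
    weight = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(n + 1)
                 if abs(k / n - p) <= 0.05)
    print(n, round(weight, 4))

# The squared amplitude concentrates on "typical" branches as n grows,
# which is the frequency-operator story -- but calling that squared
# amplitude a "weight" on branches is the assumption under dispute.
```

Any objection to the frequency-operator derivations would presumably apply here too.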

I dunno, I think you might be selling it short. This argument seems to be based on the mathematical continuity of probabilities, which is not something I can recall ever seeing before.

I've always wondered about MWI and couldn't find the answer to one question. AFAIK Everett never mentioned Many Worlds in his paper. What he was talking about was a Universal Wave Function. In that sense, a 'world' is just a 'quantum system', not a 'Universe'. At what stage did we start calling it Many Worlds?

When scientists say Many Worlds, do they actually mean Worlds as physical 'parallel universes' that pop up into existence?

Or do they only mean that those are probable outcomes of our measurement (probable histories) that never actually happened (only one of them happened, the one we end up being in)?


The phrase "many worlds" is misleading to a lot of people. As Carroll says, when a "measurement" occurs, nothing special happens. It is just normal quantum mechanics.

I think in general people accept that there can be a wave function for a cat with the states alive and dead. Cats are much more complicated than that. Presumably they have memory, and inside that memory can be things like the result of an electron spin measurement experiment. One state of the cat's wavefunction might have the memory that the electron result was spin up. Another state of the cat might have the memory that the electron result was spin down.

The scenario above is a cat watching an electron spin measurement. Afterwards, the electron does not collapse into a single state upon being observed. Instead, it is still in two states, but it is correlated with the memory of the cat (or entangled with the cat).

The belief, at least as I see it (and I assume others believe this too), is that the cat has a consciousness for each state of the wave function. So there is a "consciousness", or "cat", that thinks the electron was measured as spin up, and another that thinks the electron was measured as spin down. This is where the term many worlds comes from: the fact that there are two consciousnesses (well, many consciousnesses). I can see why people would think that part is weird. But how else should it work?

I guess it all comes down to what the experience is of a person (or cat) having a wave function and being in multiple states at the same time (just like all other objects in quantum mechanics). We are not external observers of the world; we are a part of it, and we have a wave function too. Or, it would be more correct to say we are a part of the wave function of the system. There is not a separate wave function for each thing. There is just one wave function.
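To make that concrete, here's a toy numpy sketch of the measurement interaction described above (the CNOT-style coupling and two-level "cat memory" are my own illustrative stand-ins):

```python
import numpy as np

# Basis: |spin> (x) |cat memory|, each two-dimensional.
# "up" / "sees up" = index 0, "down" / "sees down" = index 1.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Electron starts in an equal superposition; the cat's memory starts blank
# (modelled here, for simplicity, as the "sees up" state before interaction).
electron = (up + down) / np.sqrt(2)

# A CNOT-like interaction copies the spin into the cat's memory:
# |up, m> -> |up, m>, |down, m> -> |down, flip(m)>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = cnot @ np.kron(electron, up)   # the "measurement"
print(joint)   # amplitude only on |up, sees up> and |down, sees down>

# Nothing collapsed: the spin is simply correlated with the memory. Tracing
# out the cat leaves the electron in a mixed state with no coherence terms:
rho = np.outer(joint, joint)
rho_electron = np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                         [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])
print(rho_electron)   # diagonal 0.5 / 0.5, off-diagonals zero
```

The two nonzero components of `joint` are the two "worlds": spin-up-with-a-cat-that-saw-up, and spin-down-with-a-cat-that-saw-down, in one wave function.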


In MWI, all Worlds are physically real and may interfere with each other.

Sounds like that would be a testable hypothesis, right?

How exactly would they interfere?


All QM interpretations require interference. This is demonstrated by the two-slit experiment (for example). In this experiment, an electron passing through the two-slit shield generates an interference pattern. But if you detect which slit the electron passes through, you destroy the pattern. Why?

In textbook/Copenhagen interpretation, we say that when the electron passes through the slits the wavefunction has not yet been measured. But with the detector in place, you collapse the wavefunction to an eigenvector of position. By the uncertainty principle, the momentum uncertainty is now very high, so the electron shoots off in a random direction and cannot be expected to follow any pattern.

In MWI, we start with the interference pattern. But adding the detector does not "collapse" anything; instead it introduces lots of degrees of freedom. The particle passes through both the left and right slits, but those degrees of freedom ensure there is no fixed relative phase between these possibilities. The detector turns the waves into "static noise", and so any interference pattern is lost. Both options are realized, but neither can detect the other: distinct Worlds.

The key point is that MWI has no special role for measurement. The left-slit and right-slit worlds interfere, but with the detector in place the pattern is destroyed: the interference is uncoordinated and averages to zero, so it is undetectable. Whereas in textbook/Copenhagen QM we say that the measurement results in a single outcome.
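Here's a toy numpy sketch of that "static noise" picture (geometry and numbers are made up, and averaging over a random relative phase stands in for entanglement with the detector's many degrees of freedom):

```python
import numpy as np

# Toy double slit: the amplitude at screen position x is the sum of the
# amplitudes for the two paths, each with a phase set by its path length.
x = np.linspace(-10, 10, 400)
k, d, screen = 5.0, 1.0, 20.0          # wavenumber, slit offset, screen distance
left = np.exp(1j * k * np.hypot(x - d, screen))
right = np.exp(1j * k * np.hypot(x + d, screen))

# No detector: amplitudes add first, then square -> fringes.
coherent = np.abs(left + right) ** 2

# With a which-path detector, the relative phase between the two branches
# becomes effectively random; averaging over it kills the cross term.
rng = np.random.default_rng(0)
decohered = np.mean(
    [np.abs(left + np.exp(1j * p) * right) ** 2
     for p in rng.uniform(0, 2 * np.pi, 2000)], axis=0)

print(coherent.std(), decohered.std())   # strong fringes vs. near-flat
```

Both branches are still "there" in the decohered case; they just no longer produce any coordinated pattern.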


It partly depends on how "world" is defined, but generally: no. No one has come up with a way to test it against other interpretations (or indeed any interpretation against any other, with the possible exception of dynamical collapse models).

I had a discussion with someone on HN about this a while ago and realised that "world" or "split" isn't necessarily synonymous with "superposition". Rather, I believe that a split occurs when the superposition entangles with the environment and causes sufficient decoherence that there is negligible probability of (measurable) interference.

In principle you could imagine a thought experiment where you had a super-powered quantum machine which could finely control the quantum state of a large, isolated room. In that situation you could imagine someone in the room conducting an electron spin measurement and looking at the outcome, before the machine enacts a reversal of the room's quantum wave-function, thus causing the two copies of the person to interfere. If we ever reach that level of technology, it will be fascinating to see how the interpretation debate progresses.


Yeah, I mean, at this scale you’re not going to prove how nature is from any of this. What MWI is saying is that if you admit a really difficult view of the cosmos, then you can regard the Born rule as part of Schrödinger evolution—but it is kind of only possible because you punted the details into that difficult view of the cosmos. (And then the MWI-supporters come in full force and argue that actually the view is not so “difficult” after all.)

Backing up, the problem is that you have these different approaches—depending on how you count them you could group them broadly and say there are 3-4, or count as many as a dozen or two—that are all mathematically identical. They all predict the Born rule, but suggest different ways that Nature really fundamentally would act to produce that rule.

Since they are mathematically identical, it is provable that there is no way to choose between them. As a result, the one which stays the most out of people's way in getting experimental results has better "genes" for its own reproduction in the publication of papers. And that has just been the Copenhagen interpretation: you the experimentalist have a soul, and when that soul measures the world the nice unitary evolution of the world comes crashing down, with probabilities given by the Born rule. Contrast with pilot-wave theories, where you have to work out a whole separate equation that doesn't do anything of further observational impact.

The basic issue that we are facing is that while the notion of souls seems laughable to fashionable sciencey people, it also seems in some distressing way inevitable. Take Many Worlds, for instance: you admit the reality of every single possibility of the entire universe as a much broader multiverse. An equation, the Schrödinger equation, essentially works over, say, a Planck time to create a vector field on top of this, saying "This instant is followed by that instant is followed by that instant."

In the middle of that, what uniquely qualifies my experience here and now, as I know such experience must exist? I do not perceive a multiverse; I perceive a changing universe. And MWI says "well actually there are a million yous, frozen in time, all perceiving changing universes. Your experience of motion through time is actually kind of a lie." This is not a problem unique to QM; it happened much earlier with special relativity, where we discovered that you are actually a rope of worldlines thrusting through a static four-dimensional Lorentzian manifold, every part of that rope presumably in a separate conscious state, perceiving itself as moving through time, while we "on the outside" can see that there is no unique present defined such that it can wash over all of the ropes simultaneously. MWI just happens to facilitate the same basic "unrolling of time" because it has already unrolled all of possibility-space. And to fix it you need something—I'm calling it a soul, you can get fancy—which "zips along the worldline" and contains my conscious experience, or you need to argue that my experience is an illusion, or you need some "universe-soul" to act like a coherent "present moment" for all of us, or the like. It all kind of sucks.

There is a nice perspective sitting in the middle of this due to Andreas O. Tell [1], and it is pleasantly agnostic while still doing something like what MWI does to try and derive the Born rule from normal wavefunction evolution. In brief, he says: "Use the state-matrix formalism for QM, and take a completely agnostic view of what the cosmos is and how it behaves. Still, if you have a local information-processing system which is embedded in the cosmos and changing, then it receives information and must update its model of the universe. Its model of the universe must necessarily come down to a list of wavefunctions with a list of weightings, but there is a freedom of perspective which allows you to choose the wavefunction with the highest weight as 'the' one that you think the system is in. New data just forces these weights to cross in size, yielding the Born rule when you try to determine whether those weights will cross over and the system will be in the new state."

In some sense, then, we can live in a very Copenhageny world where we are changing data-processors uniquely present in some space and time, yet still use Schrödinger evolution to derive the Born rule just as the many-worlds interpretation does. But rather than committing to its plethora of different universes, we might be able to just remain non-committal about what is in the rest of the universe, beyond what I see in it.

[1] https://arxiv.org/abs/1205.0293


Another author saying essentially the same thing:

https://arxiv.org/abs/1812.06451

I actually believe that this idea has been rediscovered in its essence at least half a dozen times, going back as early as (at least) the late 90s:

https://arxiv.org/abs/quant-ph/9605002

I think it's arguable that even Everett himself held this view, and there is some evidence that Schrödinger held it as well but didn't have the courage to admit it (because he didn't have the benefit of the Aspect experiment to support it).


I have to be brutally honest here and say that I'm not inclined to spend much time understanding in detail a 16-page paper from someone who has no recognisable affiliation and a single arXiv submission that doesn't seem to have garnered much attention. Having skimmed it and what you've written, I'm quite confused about how the ontology differs from Many Worlds, or what it means for "weights" to "cross in size". I think the author concedes some of the difficulty in positing a "dominant reality" when they say

> Undetectable to the observer, different alternate realities can fight for becoming the dominant one, at least over a short period of time. This effect appears to be highly unsettling and not really greatly preferable to the world-splitting in the Everett interpretation


I mean I suppose it’s up to you whether you “recognize” the Universität Konstanz, but it’s a rather large place.

Tell himself is indeed not working in the field anymore. This preprint was submitted to at least one journal, as I understand it, but was not accepted before the grant ran out, and he could no longer keep pushing for publication; instead he went into acoustic signal processing with a friend, and together they started a company now called SoundTheory. Something about maximizing the information "punch" of music to your brain, which makes it sound better or reduces background noise or some such.

The paper is still interesting on its own merits though. I mean, it’s interesting to me; of course your mileage may vary.


To be clear, what is the difficult view of the cosmos?

And, what is the view of the cosmos that is not difficult? Would that be some instance of wavefunction "collapse"?


> you the experimentalist have a soul and when that soul measures the world the nice unitary evolution of the world comes crashing down with probabilities given by the Born rule.

There's a great paper by van Kampen [1] that points out that what we call measurement and collapse of the wave function is the entangling of a quantum state with an irreversible statistical process. Measurement and wave-function collapse are limiting cases of that process which are useful approximations when doing calculations.

[1]: http://www.johnboccio.com/research/quantum/notes/vankampen.p... (a version sans paywall!)


This is related to his research. His latest podcast goes into it more deeply (there is a transcript): https://www.preposterousuniverse.com/podcast/2019/09/09/63-s...

The goal is to derive emergent spacetime and gravity from quantum mechanics.

Some features of their theory:

* Finite-dimensional Hilbert space. Quantum field theory gets the boot.

* Spacetime is degrees of freedom entangled in such a way that semi-classical spacetime geometry emerges. Things are local because they are entangled, not the other way around.

* Spacetime expands because initially unentangled degrees of freedom become entangled with the rest of the universe.


His book just came out this week as well.

In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election, or even past events that we’re unsure about.

Does frequentism really require actually performing the experiment, or is imagining doing the experiment good enough? I would say

  »Candidate X will win the next election with a probability of Y percent.«
is just a shorthand for

  »The following sets of states and possible evolutions of those states are
  compatible with my knowledge about the world and in Y percent of the cases
  candidate X wins the next election.«
which seems not too different from a coin flip, where the different outcomes are also due to imperfect knowledge of the initial state. The difference is that it is easy to sample the set of initial states for a coin flip by just repeatedly flipping a coin from slightly different initial states, thanks to human imperfection at that task. Sampling the initial states of an election in the same way is obviously not possible, and I admittedly have no real clue how people arrive at a meaningful number in practice. A similar example seems to be the probability of rain at some place some time in the future, in which case it is possible to sample the set of initial states by running a weather model repeatedly.
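To make the "sampling slightly different initial states" picture concrete, here's a toy deterministic coin flip in Python (the flip model and all the spreads are invented for illustration):

```python
import numpy as np

# A deterministic "coin flip": the outcome is fixed by the initial
# conditions, but we only know those up to a spread (the thumb's
# imperfection). Outcome rule: total rotation angle mod one turn.
rng = np.random.default_rng(1)
omega = rng.normal(40.0, 5.0, 100_000)   # initial spin rate (rad/s), assumed spread
t = rng.normal(0.5, 0.05, 100_000)       # flight time (s), assumed spread

heads = ((omega * t) % (2 * np.pi)) < np.pi
print(heads.mean())   # close to 0.5: the spread washes out the determinism
```

The spread in initial conditions covers many turns of the coin, so the deterministic rule produces heads about half the time; shrink the spreads far enough and the "probability" collapses toward 0 or 1.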

I interpret it as a reference system. To a frequentist, a probabilistic statement is "this coin flip is 50% heads in reference to this set of coin flips." Your elements of the statement are the event, the probability, and the reference set of events.

Aren't space and time considered continuous?

What about the Planck distance then? What’s that all about?

It seems to me that on a microscopic level and small time scales, a small change in input will lead to a small change in output.

This is certainly true in classical mechanics, but what about quantum mechanics? Are the quanta the result of a continuous process? Can a subatomic particle wind up on Mars, exceeding the speed of light, with a certain probability?

HERE is what bothers me. The instability of certain physical problems (small change in input leads to large changes in output, like where a pencil is going to fall if stood on its tip). How can this happen if the composition of continuous functions is continuous???

In mathematics we have abstractions such as real numbers, and infinite sequences of functions that can converge to discontinuous and even really weird functions in the limit.

But in the real world it seems that we have some sort of minimum, like the Planck distance, or simple measurement error, that precludes us from reversing a process after a certain point. Maybe THAT is where unstable problems on the macro scale come from??
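Here's a toy calculation of the thing that puzzles me, using the linearized pencil-on-its-tip equation (all numbers assumed):

```python
import numpy as np

# Linearized pencil-on-its-tip: theta'' = (g/L) * theta, so to first order
# theta(t) = theta0 * cosh(t / tau) with tau = sqrt(L / g).
g, L = 9.81, 0.15                  # gravity, pencil length (assumed values)
tau = np.sqrt(L / g)

theta0_a, theta0_b = 1e-9, 2e-9    # two "identical" starts, a nanoradian apart
for t in (0.0, 1.0, 2.0, 3.0):
    gap = (theta0_b - theta0_a) * np.cosh(t / tau)
    print(f"t={t:.0f}s  gap={gap:.2e} rad")

# theta(t) is a continuous (indeed linear) function of theta0 at every fixed
# t, so composition-of-continuous-functions is not violated. But the
# amplification factor cosh(t/tau) grows exponentially with t, so any fixed
# measurement error eventually gets blown up to macroscopic scale.
# Instability is the modulus of continuity exploding with time,
# not a discontinuity.
```

So the functions stay continuous at every finite time; it's the error amplification that outruns any fixed measurement precision.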

Pilot Wave Theory seems to say that everything is deterministic and that the uncertainty in quantum mechanics comes from our being unable to observe the process that leads to the result. But PWT requires us to abolish the idea of locality, which to me is a special case of continuity.

Anyway, can someone please explain this to me as it regards quantum mechanics? Leslie Lamport's paper caused a big watershed moment for me and I'm still reeling from it:

https://lamport.azurewebsites.net/pubs/buridan.pdf


I'm not an expert, but one of the most exciting realizations I've had over the last few years is just how close quantum theory is to various "ordinary" kinds of probability theory, including Kolmogorov's classical theory. Now probability theory is not so boring for me.

Quantum physics has inspired so much work in other fields! Check out this guy's work for examples: https://scholar.google.com/citations?user=wdhkzPMAAAAJ&hl=en

I don't agree with Khrennikov's interpretation of quantum mechanics (he's a realist whereas I tend to appreciate the more "mystical" feeling interpretations of quantum mechanics), but he and others' work on the connections between quantum physics and classical probability theory, as well as on non-physics applications of quantum theoretic tools, is crazy thought provoking.


If you'd like an intuitive introduction to the actual technical details of quantum probability: https://www.math3ma.com/blog/a-first-look-at-quantum-probabi...

I love this website so much (quantamagazine.org). The design is great and the articles are amazing.

There's something that has deeply irked me for many years about these MWI probability constructions, and that is the largely glossed over fact that there's sort of a non-local numerical awareness and computation within the wave function necessary to construct the number of branches in proper ratio required to maintain self-consistency in the MWI universe. Additionally, this number of branches is incomprehensibly larger than simply splitting the universe once for every quantized event, and results in unwieldy levels of duplication of identical branches.

The reason for this is that if we take the most improbable outcome of a given wave function and say "this highly improbable branch occurs once", we are immediately contradicted, as the next-least-improbable event is virtually certain to stand in a non-integer ratio to the former. So we grant the wave function numerical factoring / self-resolving capabilities, and instead the least and second-least improbable branches each occur the number of times necessary to remain whole integers in the correct relative ratio.

But that only resolves two possible events of the wave function, and so with the third-least-improbable event, almost certainly not in an integer ratio to the first or the second, we must repeat the step of multiplying the branch counts to maintain a consistent integer ratio across our types of branches. As you follow this up through all the possible branch outcomes, expressing their corresponding probabilities as whole-integer counts of quantum outcomes, you essentially have to engage in a massive computation of finding common factors all the way up. Further, even the least improbable event will require an incomprehensible number of duplicate branches, and the most probable events will have an even more innumerable count of duplicate branches still.
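As a toy illustration of the bookkeeping I mean: even with conveniently rational probabilities (values invented; real amplitudes are generically irrational, which is the worse problem, since then no finite count works at all), the minimal branch counts come from an lcm computation:

```python
from fractions import Fraction
from math import lcm

# Toy rational Born probabilities for one three-outcome measurement.
probs = [Fraction(1, 3), Fraction(1, 7), Fraction(11, 21)]
assert sum(probs) == 1

# Smallest number of equally-weighted branches that realizes these
# ratios as whole counts: the lcm of the denominators.
n = lcm(*(p.denominator for p in probs))
counts = [int(p * n) for p in probs]
print(n, counts)

# A second, independent measurement with denominator 5 multiplies the
# branch count again: the common-factor bookkeeping compounds.
print(lcm(n, 5))
```

Each new outcome whose denominator introduces a new prime factor multiplies the total branch count, which is the explosion described above.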

The only way I can see to escape this madness with MWI seems to be to give up on the notion of truly separate branches, and instead treat these "many worlds" as a stream of overlapping world-ish-nesses in which discrete outcomes don't actually even exist. But then you have seeming contradictions with observable discreteness, and it's not clear it's truly even MWI anymore.

Disclosure: I’m not a physicist, and it's quite plausible that I don’t know what I’m talking about.


> The only way I can see to escape this madness with MWI seems to be give up on the notion of truly separate branches, and instead treat these “many worlds” as a stream of overlapping world-ish-nesses in which discrete outcomes don’t actually even exist

That's what it is. A measurement is coupling a quantum event to a statistically irreversible process. The total wave function that results has two major lobes. There's no split on measurement. That's why it's appealing: it makes no reference to classical mechanics in the formulation.


In a shocking turn of events Quanta Magazine features an author who’s heard of the Many Worlds Interpretation.

If I were to write an edgy/snarky comment bot for HN, I imagine its comment history would resemble yours to a tee.

That's not particularly a criticism nor a compliment, btw.


Nice. You’re a better programmer than I, though.

What makes you think that's the case?

It follows from you being able to program snark as well as I think you could. I'm sure it's possible, but it seems like a hard problem to me.

I wonder if one could earn a living as a professional shitposter for AI training purposes.

lmao


