Hacker News
Quantum Bayesianism Explained by Its Founder (quantamagazine.org)
77 points by guerrilla 35 days ago | 39 comments

>>>"At the other end of the spectrum is Bruno de Finetti. He says there’s no reason whatsoever for my probabilities and yours to match, because mine are based on my experience and yours are based on your experience. "

That doesn't deny an objective reality out there; it's just saying that we have different information about what's out there and, therefore, we arrive at different conclusions.

I don't understand how that perspective is different from just "objective Bayesianism" with different priors.

I'm not sure if I'm barking up the same tree, but my thought process has been something like this: A Minkowski diagram plots distance on the 'x' axis, and time on the 'y' axis, and the "here and now" is at the origin in the centre. The past that can affect that central point is a right triangle with the apex pointing up at the origin. The future spacetime that the "here and now" can affect is an upside down triangle with the apex starting at the origin.

This gets interesting if you think of the past as a collection of "states", such as boundary conditions, and the area of the triangle as some "function" over the states. Think: hash function with some subset of a file as the input.

Now, imagine there are two points, one for each observer. Two scientists, say. There's one at the origin, call him Alex ("A") and another scientist in his "past light cone", call him Bob ("B"). Their states are like the hash function, and the inputs are the boundary conditions, like different but overlapping subsets of a file. They'll disagree because they don't all see the same input bits.

From the perspective of Alex, Bob is in his past, and Bob has only experienced a subset of the boundary conditions. In the physical universe, this is the big bang. Alex can, in mathematical principle, but maybe not a physical one, predict everything Bob will say about any experiment Bob will do. But Alex also knows that Bob is missing some information, the greater part of Alex's past light cone not shared by Bob. So Alex can make predictions that Bob cannot possibly predict. The situation is inherently, fundamentally asymmetric!

Here's the rub though: from Bob's perspective, Alex is in his past, and it is Bob that knows the entire state of the universe from the big bang leading up to his here & now, and it is Alex that is ignorant of a "slice" of it.

So what's going on?

The reality is that there are 4 observers: {Alex Prime, Alex's Bob}, and {Bob Prime, Bob's Alex}.

We colloquially talk about them as if there are only 2 observers, but the more accurate perspective is that there are 2 observers in 2 universes, and they can never completely agree on all experiments, not even in principle. They have different inputs, different boundary conditions, and different predictions.
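The past-light-cone asymmetry described above can be made concrete with a small sketch. This is a toy 1+1D Minkowski check (c = 1) with a made-up helper name, not anything from the article: event B can causally affect event A only if B lies inside (or on) A's past light cone, and that relation is inherently one-directional for timelike-separated events.

```python
# Toy sketch: causal ordering of events in 1+1D Minkowski space, with c = 1.
# An event b can influence event a iff b lies in a's past light cone.

def in_past_light_cone(a, b):
    """True if event b = (t, x) is in the past light cone of event a."""
    ta, xa = a
    tb, xb = b
    dt = ta - tb
    dx = xa - xb
    # b must be strictly earlier, and the separation timelike or lightlike.
    return dt > 0 and dt * dt >= dx * dx

alex = (10.0, 0.0)  # Alex's "here and now"
bob = (4.0, 3.0)    # Bob, earlier and nearby

print(in_past_light_cone(alex, bob))  # True: Bob can affect Alex
print(in_past_light_cone(bob, alex))  # False: the relation is asymmetric
```

If both calls return False for a pair of events, they are spacelike separated, which is the only case where observers can disagree about their time ordering.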

> Here's the rub though: from Bob's perspective, Alex is in his past

This is not possible if B is in the past lightcone of A. The relative time ordering can only vary if the two observers/events are spacelike separated — in which case they each have access to events the other doesn't.

Everyone thinks everyone else is in their past.

Colloquially, we think that Alex and Bob are the same people from each other's perspective.

Physically, this is not true. They're different. There's Alex and Bob-in-Alex's past, and also Bob and Alex-in-Bob's past. Four "people".

If it helps, use symbols: A(A), A(B), B(B), B(A).

I agree. It feels like he is just combining normal Bayesianism with normal quantum mechanics. Which seems fine, but I'm annoyed he dodged the question about nonconscious things also having a personalized set of priors.

I'd like to hear him explain the single particle double slit experiment from this QBism perspective.

Sounds like the wave model describes the statistical properties of a population of photons.

E.g., a coin flip is deterministic if you know all the forces involved (airflow, force of flip, exact distribution of mass of the coin, etc). But since we are usually ignorant of all that, instead we model it as a Bernoulli trial.
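To illustrate the coin analogy: here is a toy sketch (the dynamics function is entirely made up) of a "coin" whose outcome is a deterministic function of hidden initial conditions. An observer who knows the inputs is certain of the outcome; one who doesn't averages over their ignorance and recovers roughly 50/50.

```python
# Toy sketch: a deterministic "coin" that an ignorant observer would
# model as a Bernoulli trial. The dynamics here are invented, not physical.
import random

def deterministic_coin(flip_force, airflow):
    # Outcome is a fixed function of the (hidden) initial conditions.
    return "H" if int((flip_force + airflow) * 1000) % 2 == 0 else "T"

# Knowing the inputs, the outcome is certain and repeatable:
print(deterministic_coin(1.2345, 0.6789))

# Not knowing them, we average over our ignorance and see ~50/50:
random.seed(0)
flips = [deterministic_coin(random.random(), random.random())
         for _ in range(10_000)]
print(flips.count("H") / len(flips))  # close to 0.5
```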

But that’s not what qbism is about: a wave function (pure state) doesn’t represent ignorance about a true underlying physical state, it’s a maximally sharp state of belief.

> Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs—the "collapse of the wave function"—is simply the agent updating her beliefs in response to a new experience. https://en.m.wikipedia.org/wiki/Quantum_Bayesianism

You could be right. That is what this sounds like to me, though. According to the model there is a 50% chance the coin will land on heads, until you flip it.

For a pure state a measurement doesn’t improve our knowledge about the state of the physical system, it changes it (and we get information about the new state). The Bayesian updating applies to mixed states, where there exists “classical” uncertainty while for a pure state the uncertainty is purely “quantum”.

"Quantum measurement is nothing more, and nothing less, than a refinement and a readjustment of one’s initial state of belief. [...] Let us look at two limiting cases of efficient measurements. In the first, we imagine an observer whose initial belief structure ρ = |ψ⟩⟨ψ| is a maximally sharp state of belief. By this account, no measurement whatsoever can refine it. [...] The only state change that can come about from a measurement must be purely of the mental-readjustment sort: We learn nothing new; we just change what we can predict as a consequence of the side effects of our experimental intervention. That is to say, there is a sense in which the measurement is solely disturbance."


Thanks, not what I was thinking it sounds like.

The QBism perspective is not different, as you say it’s normal quantum mechanics so you get the usual wavefunction solution. If you don’t find the standard description satisfactory I don’t think qbism will help.

The source of the prior distribution affects the analysis and whether the Likelihood Principle (LP) is obeyed. If the prior is purely subjective, the LP applies. If the source is not subjective, the LP does not apply, and with it goes some of the niceties of Bayesian analysis.

Not sure what that means. A prior is always "subjective" because it is the model of the world before the observation, but it comes from the observer's experience of reality — where else could it come from? It's just that those experiences change with the observer.

Anyway, the prior is updated after every observation, so if it was a wrong prior, its influence on the result should shrink with every observation.

Data driven priors are more common now. Sometimes this is called empirical Bayes. Also priors can change after model checking. These are data driven priors, but not EB.

The prior is only negligible in the infinite-sample limit. It could still have a large effect given 100s or 1000s of observations — for example, if the prior variance is misspecified by an order of magnitude (not uncommon).
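To see the point numerically, here is a minimal conjugate Beta-Binomial sketch (my own numbers, chosen for illustration): an overconfident prior still visibly pulls the posterior mean even at n = 1000.

```python
# Sketch: a badly overconfident Beta prior still matters at n = 1000.
# Beta(a, b) prior + Binomial data -> Beta(a + heads, b + tails) posterior.

def posterior_mean(a, b, heads, n):
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + heads) / (a + b + n)

heads, n = 700, 1000  # empirical rate 0.70

# A weak Beta(1, 1) prior barely moves the estimate:
print(posterior_mean(1, 1, heads, n))      # ~0.6996

# A Beta(500, 500) prior (tightly concentrated at 0.5) still drags it to 0.6:
print(posterior_mean(500, 500, heads, n))  # 0.6
```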

The likelihood principle (all of the information an experimental outcome provides about a parameter θ is expressed in the likelihood function) applies either way.
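The classic illustration of the likelihood principle is that the stopping rule drops out: "9 heads in 12 flips" yields a likelihood proportional to θ⁹(1−θ)³ whether the design was binomial (fixed n = 12) or negative binomial (flip until 3 tails), so with the same prior the posteriors coincide. A small numpy sketch of this (the grid discretization is just for illustration):

```python
# Sketch: binomial vs negative binomial designs give identical posteriors
# for the same observed data, because the likelihood kernels are proportional.
import numpy as np

theta = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(theta)          # flat prior over the grid

kernel = theta**9 * (1 - theta)**3   # shared likelihood kernel for 9H, 3T

# The designs differ only by theta-free constants, C(12,9)=220 vs C(11,9)=55,
# which cancel on normalization:
post_binom = prior * 220 * kernel
post_negbin = prior * 55 * kernel
post_binom /= post_binom.sum()
post_negbin /= post_negbin.sum()

print(np.allclose(post_binom, post_negbin))  # True
```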

The next statement: de Finetti says that probabilities are determined by personal gambling attitudes. Though in practice probabilities work independently from gambling attitudes.

>>"Though in practice probabilities work independently from gambling attitudes."

I suppose that is what I don't understand. What does it mean in practice? Probabilities work independently from gambling attitudes, but they work the same for everybody.

I mean de Finetti's claim contradicts facts.

> Eventually my colleague Rüdiger Schack and I felt that to be consistent we had to break the ties with Jaynes and move more in the direction of de Finetti.

For those who are curious, here's Jaynes' view on the subject:


The entropic dynamics approach from Caticha is closer to Jaynes’ ideas (and more interesting than qbism in my opinion, as it’s quite different from standard QM instead of just giving a Bayesian interpretation to it): https://arxiv.org/abs/1908.04693

Am I missing something here? A wave function is NOT a probability distribution! Wave functions have phase! The double slit experiment and Bell's theorem both clearly show that quantum effects can be created that cannot be explained in terms of classical trajectories that are simply unknown. Phase can cause wave functions to interfere with themselves, and so forth... I can only imagine that Fuchs must deal with this in his thousands of pages, but the article makes it seem like his interpretation is one that could be dismissed at the outset, because quantum mechanics demonstrably involves effects that cannot just be due to a lack of classical state information.
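The phase point above is easy to demonstrate numerically. In this toy sketch (idealized plane-wave amplitudes, not a real slit geometry), adding amplitudes and then squaring produces interference fringes, while adding the per-slit probabilities — as any classical-ignorance story would — produces a flat pattern:

```python
# Sketch: amplitudes interfere, probabilities don't.
import numpy as np

x = np.linspace(-5, 5, 1001)
k = 4.0

# Toy amplitudes from two slits, with a relative phase that varies in x:
psi1 = np.exp(1j * k * x)
psi2 = np.exp(-1j * k * x)

quantum = np.abs(psi1 + psi2)**2               # = 4 cos^2(kx): fringes
classical = np.abs(psi1)**2 + np.abs(psi2)**2  # = 2 everywhere: no fringes

print(quantum.min(), quantum.max())     # ~0.0 and ~4.0
print(classical.min(), classical.max()) # 2.0 and 2.0
```

The cross term 2·Re(ψ₁*ψ₂) is exactly what a classical mixture of unknown trajectories cannot produce.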

From the post 5 years ago, this blog post summarizes the article well: https://motls.blogspot.com/2015/06/is-quantum-reality-person...

Edit: Going through recent posts of this guy, I would also like to state that anything non physics related on his blog is crazy right-wing babble.

See the discussion from 2015:


Same article but published by Wired.

Too many people, including physicists, confuse theories with reality. Quantum mechanics is a very powerful theory that correctly predicts a lot of observed phenomena. But while QM talks about a wavefunction, it doesn't mean the wavefunction actually exists. I myself particularly like the Everett interpretation, which some believe implies a multiverse, but I don't think the actual existence of a multiverse is necessary.

What other definition is there for saying that something exists, than that it correctly predicts observed phenomena? If you say that the table in front of me exists, surely that's nothing more or less than saying that the concept of a table correctly predicts the table-like sensations you see and feel.

There's an entire field of philosophy called ontology which includes metaontology and it answers your question as "many definitions." I recommend this book [1] by Berto and Plebani. SEP [2] also has about a dozen articles on the ontology of quantum mechanics and quantum physics more generally.

[1]. https://www.bloomsbury.com/us/ontology-and-metaontology-9781...

[2]. https://plato.stanford.edu/contents.html#q

To me, "exists" is stronger than "predicts observation", although I can't entirely describe how, except by example.

Let's say you're in a game. You can create a physics of the game world based on how that game behaves. However, the game world doesn't exist, as such. It's the result of a pattern produced by a computer, interpreted by our minds, which fills in the blanks and makes the game world feel more real than it is.

It makes sense that an account of existence should either include the substructure (in this case, the computer) or the observer (in this case, the mind doing the filling-in), or possibly both. On the other hand, a description of the behavior of the system is much simpler.

Another example: simulation hypothesis people sometimes talk about the possibility of "breaking out" of the simulation, as if the simulation is somehow less true than whatever is behind it.

But from a predicting-observed-phenomenon perspective, what you describe by physics exists. So the distinction between the rules of physics that apply in a certain domain because you're in a simulation, and the rules of physics that apply in another domain once you find a way to break out, is entirely artificial. That people don't see it that way seems to indicate that they mean something else by existence.

Without relying on a thing that actually exists - a computer running the game - you can't properly describe the behavior of the game.

In the Everett interpretation the wave function actually exists. As actually as you can get. Well, the quantum state exists, and the wave function describes what the state is. There's a physical object described by a mathematical formula; this relation is called physical meaning.

When you say 'theory', I think you mean 'model'.

>As QBism understands a quantum measurement outcome, it’s personal. No one else can see it. I see it or you see it. There’s no transformation that takes the one personal experience to the other personal experience. William James was just wrong when he tried to argue that “two minds can know one thing.”

The EPR paradox demonstrates that observers can know each other's experiences.

If the wavefunction is assumed to only encode epistemic uncertainty of the observer, then how are phenomena like tunneling explained?

From my reading, they would say some particles are just much more energetic than usual (for some reason we are ignorant of). See the post above about flipping a coin. According to the statistical model we use for coin flips, it is really unlikely to flip 100 heads in a row but not impossible.

Wouldn't it be possible to see this experimentally? You'd find particles jumping out of a "potential energy well" which they shouldn't be able to escape via tunneling.


That's what Bayesians think, but not frequentists. (As a frequency, probability is very much real. And what would be the point of talking about something that "does not exist," anyway?)


Yes ... you add Bayesian to the thing and it becomes new, principled and superior. It’s been a trend for about 3 decades now.

Read the article, you can learn something :)

I'm struggling to see how this is anything more than Bohm's theory with a dubious interpretation of the guiding wave (which, to be honest, Bohm himself was guilty of). Like Bell without the 'beables'.
