It has been shown over and over: to arrive at the probability that a specific history is realized, you need to calculate all the possible histories. And if you need to calculate those histories to match the universe, chances are high that the universe has to calculate them as well.
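A toy illustration of that computational burden, using a made-up phase rule in place of a real action (everything here is a hypothetical stand-in, not a physical model): the probability of any one outcome only falls out after summing amplitudes over *every* history, and interference means no single history can be priced in isolation.

```python
import cmath
import itertools

# Toy "sum over histories": a particle takes 3 steps, each +1 or -1.
# Each history gets a unit-modulus amplitude whose phase depends on a
# made-up "action" (here: the number of direction changes in the path).
def amplitude(path):
    changes = sum(1 for a, b in zip(path, path[1:]) if a != b)
    return cmath.exp(1j * cmath.pi / 2 * changes)

# The probability of ending at position x requires summing amplitudes
# over every history ending there, then normalizing over all endpoints.
histories = itertools.product([+1, -1], repeat=3)
endpoints = {}
for h in histories:
    x = sum(h)
    endpoints[x] = endpoints.get(x, 0) + amplitude(h)

norm = sum(abs(a) ** 2 for a in endpoints.values())
probs = {x: abs(a) ** 2 / norm for x, a in endpoints.items()}
print(probs)
```

Note the interference: endpoint +1 gets probability 5/12 rather than the 3/8 that classical path-counting would give, and there is no way to see that without tallying all eight histories.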
Posted on the website for UNM Physics 466, Fall 2017.
Course Text: "Physical Mathematics, 2nd Ed." by Kevin Cahill
Kevin Cahill was born in New York, New York. He attended public and Catholic elementary schools, Regis High School, the University of Notre Dame, and Harvard University, receiving his PhD in physics in 1967 under the supervision of Roy Glauber (Nobel Prize, 2005). Cahill has done research at NBS, LBL, Ecole Polytechnique, Saclay, Orsay, and Harvard and has taught at Nice, Wesleyan, LSU, Indiana, Harvard, Fudan, and the University of New Mexico, where he is a professor of physics and astronomy.
Both sides of the interpretative coin are needed. We must simultaneously understand that QM is epistemic and also that QM is contextual. The author asks,
> How do probabilities get into QM?
Because knowledge of physical systems is always limited and QM is epistemic, all quantities we manipulate in QM are probabilistic. Further, the Kochen-Specker Theorem requires that some measurement outcomes not be fully determined by past history, a consequence of working in a Hilbert space of at least three dimensions. Finally, basic linear logic allows us to replicate most of QM's effects with macrostates.
While Schrödinger hated this and used his famous cat thought-experiment to try to refute it, children learn about linear logic and probabilities over macrostates when they are given random toys or packs of trading cards; today we recognize that it really is possible to condition relatively large differences in macrostate observations on a single entangled or otherwise-prepared microstate.
> In the [Everett] realist approach the history of the world is endlessly splitting;
Eh, kind of? But it's also endlessly joining. We wouldn't just be in a "cosmic history" that is "sufficiently benign" for us; we're also likely to be in a likely history. In fact, we're exactly as likely to be on our current path as our current path is to be randomly picked from among the possible paths. So Everett's many-worlds analysis turns out to be a little tautological.
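The "exactly as likely" point is just the chain rule of probability; here is a minimal sketch with a hypothetical two-step branching tree (the branch weights are invented for illustration). Picking each branch with its conditional probability as you walk assigns every complete path exactly the probability you'd get by drawing whole paths at random, weighted by the product of their branch weights.

```python
from math import isclose

# Hypothetical branch weights at each split of a two-step tree.
step1 = {"A": 0.7, "B": 0.3}
step2 = {"A": {"x": 0.5, "y": 0.5},
         "B": {"x": 0.9, "y": 0.1}}

# Chain rule: the probability of a whole path is the product of the
# conditional probabilities of the branches taken along the way.
path_prob = {(a, b): step1[a] * step2[a][b]
             for a in step1 for b in step2[a]}

assert isclose(sum(path_prob.values()), 1.0)
print(path_prob[("A", "x")])  # 0.35
```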
> But how can something so nonlocal represent reality?
Finally, a good deep question. There are two parts to the answer, and each is deep enough to warrant a series of lectures. First, the Kochen-Specker Theorem, combined with general relativity, leads to the Free Will Theorem: when we measure particles, we are sending them a query, and they choose how to respond from among the possible replies. Second, reality isn't one single fabric, but interlocking systems of geometry and topology. With apologies to Wheeler: geometry tells reality how to interact, and topology tells reality how to propagate information. This insight leads to the co-hygiene principle.
The author sketches some possible directions for experimental work; it would be interesting to read about turning them into proper experiments and seeing the results.
I know probability and unpredictability don't go away, but MWI is closer to an ontological theory, no? I mean, as far as science can claim to be ontological.
And similarly for nonlocality. Is it only ontological? I feel like many-worlds would say one thing vs. Copenhagen. And how do non-traversable string-theoretic wormholes representing entanglement à la Susskind fit in, if the theory can only be about epistemic knowledge?
Yes, the Copenhagen interpretation says that the wavefunctions we manipulate are epistemic; they exist in our minds and on the blackboard but not necessarily IRL. Spekkens's epistemic toy theory makes this concept very concrete, although note that that theory has local hidden variables and thus doesn't correctly implement Bell/KS/Free Will.
Yes, MWI is quite ontic; it's probably the most ontic that a QM interpretation can be, since it insists that all possibilities exist. Just like modal realism, MWI is the most extreme of the phrasings in that direction.
Non-locality sadly isn't negotiable anymore. Bell's inequalities have been violated in laboratories thoroughly enough to satisfy me, so theories that can't handle it need to be fully rejected. I'm not sure what's left; maybe the transactional interpretation?
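A minimal sketch of what the labs test, assuming only the textbook singlet-state prediction E(a, b) = -cos(a - b) for analyzer angles a and b: the CHSH combination S is bounded by 2 for any local hidden-variable theory, while quantum mechanics reaches 2√2 (Tsirelson's bound) at well-chosen angles.

```python
from math import cos, pi, sqrt

# Quantum correlation for spin measurements on a singlet pair
# at analyzer angles a and b.
def E(a, b):
    return -cos(a - b)

# Standard angle choices that maximize the CHSH violation.
a, a2 = 0.0, pi / 2
b, b2 = pi / 4, 3 * pi / 4

# CHSH combination: local hidden variables force |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, comfortably above 2
```

Experiments measure the four correlations directly and find |S| significantly above 2, which is why locality (or one of the other Bell assumptions) has to go.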
But then, without nonlocality or hidden variables, I'm truly at a loss as to how quantum computers even work according to MWI. But I know this area gets complex fast.
If you take the formalism literally, it describes a universe which constantly splits (from a classical perspective), resulting in apparent randomness because you only remember one branch, even though you exist in all of them. That's many-worlds.
The pilot-wave interpretation adds: "Also, only one of them is real; the rest get 'computed' by reality, but any people you think it's computing are philosophical zombies."
The Copenhagen interpretation states that all but one branch spontaneously cease to exist once a physicist looks at them. More or less. Really, it's anything that's sufficiently coupled to the environment; a physicist is just a sure way to ensure that. The choice of which branch remains is random.
Nobody really believes the Copenhagen interpretation, but it gives correct predictions so long as you don't try to explain why any of that should happen.
... Mostly. There's a cottage industry of trying to prevent decoherence in larger and larger objects, which makes Copenhagen look less and less reasonable as the list of exceptions grows.
This is so dumb, because the flag and the wind are both moving; there isn't an inertial frame in which either one is stationary. (The wind, maybe, in the far field, but not near the flag.)
Or at least it means the process of knowing about the flag and the wind is a mental one.