Quantum observers with knowledge of quantum mechanics break reality (arstechnica.com)
126 points by petters on Sept 19, 2018 | 154 comments



This write-up not only oversimplifies, but it totally neglects one of the more interesting motivations of the original paper [1]. The authors are probing whether or not quantum mechanics is consistent with a single-world interpretation---that is, whether or not there is a unique reality. Formally, the claim is that there is no physical theory that is (1) consistent with QM, (2) consistent with a single-world interpretation, and (3) logically self-consistent.

1. https://arxiv.org/pdf/1604.07422.pdf


Yes, I'm not sure why this is a Nature article. As far as I understand things, very few physicists who care about which interpretation of QM is correct would choose the Copenhagen Interpretation in this day and age. The Copenhagen Interpretation is not even a fully-fledged interpretation, because it uses undefined, unscientific terms like "measurement" to determine when a probability wave collapses.

As I understand things, most physicists don't give much thought to which interpretation is correct, since any experiments to distinguish between the various interpretations are virtually impossible to do. And most physicists don't care about distinctions for which there will be no experimental evidence.

Among the physicists who do care about the different QM interpretations, it is my understanding that most would go with the Everett (AKA "Many Worlds") Interpretation these days. All other interpretations that I know of are hugely problematic, but there are no significant problems at all with the Everett Interpretation. The only problem is that many people consider it to be "creepy". But not liking the best theory because it is "creepy" isn't very good science, if you ask me.

Regarding there being no single-world interpretation that is logically self-consistent, I'm not convinced about this: The Bohm Interpretation, for instance, is experimentally indistinguishable from the Everett Interpretation. I.e., no matter what incredible technology and powers of QM experimentation we might develop in the future, we will never be able to do an experiment, even in theory, that tells us which of these interpretations is the right one.

Consequently, it would seem that the Bohm Interpretation is logically self-consistent. The problem with the Bohm Interpretation is that it's very ad hoc and violates Occam's Razor. It only exists in order to calm our feelings about the universe being "creepy".


I go with the Everett interpretation (which I like to think of as the "universal wave function" interpretation, because it is really just a minimalist theory assuming that the laws of wavefunction evolution always apply); the many worlds aspect is just a byproduct of the fact that a wave function left to its own devices would decohere into a bunch of practically non-interacting "worlds".

It should be noted, however, that the Everett interpretation does have one issue: it's not clear why probabilities should work the way they do. There are different approaches to deriving the laws of probabilities under Everettian physics, but things very easily get metaphysical once you try to go down that road.

As you point out, the Bohm Interpretation works as a single world interpretation, although it relies on reifying particles embedded in waves to essentially select a single world, which is rather ad-hoc. However, it does give us the probabilities for free, assuming any reasonable initial setup for the particles.


> It should be noted, however, that the Everett interpretation does have one issue: it's not clear why probabilities should work the way they do.

Yes, back when I studied this topic seriously, this was an issue. E.g., if you toss a quantum coin that has a 1/3 chance of coming up heads and 2/3 chance of coming up tails, this seems to result in only two "worlds". And if there are two worlds, why are the observed probabilities then not .5/.5 rather than .3333/.6667?
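
For concreteness, in the same bra-ket style used elsewhere in this thread (a sketch, my labels): the coin state before anyone looks is

    |psi> = sqrt(1/3)|heads> + sqrt(2/3)|tails>

and the Born rule reads off P(heads) = |sqrt(1/3)|^2 = 1/3. Decoherence gives you two branches, but with unequal amplitudes; the puzzle is why the squared amplitudes, rather than a naive count of branches, behave as probabilities.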

I didn't mention this in my OP because (1) that would have been something of a deep-dive for a summary post, and (2) there were ideas being floated about to solve this problem back when I was studying this, but I don't know how these ideas ultimately panned out.

I'm surely curious as to what the current best ideas are about this issue.


Why can't it result in three worlds, two of which are indistinguishable?


Well, maybe it does. Only there really aren't separate worlds in the Everett Interpretation. It only seems that way to our superpositioned brains.

At some point this debate becomes a bit too confusing for me. All I can report is that the experts fretted over this.


In a perfectly mathematically coherent version of Everett (https://arxiv.org/abs/0903.2211), the idea is to integrate out the wave function to get a mass density on 3-space. This mass density on 3-space is a mess at any particular moment, but one can witness its evolution over time to pick out particular correlated histories. There is no particular splitting into separate worlds. And, indeed, the whole picture of discrete spin measurements is misleading. It is always spatial measurement stuff ultimately going on and so plenty of smearing.
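
If I remember the paper right, the mass density is just position space weighted by |psi|^2 (a hedged sketch of their definition):

    m(x,t) = sum_i m_i ∫ |psi(q_1,...,q_N,t)|^2 delta(x - q_i) dq_1...dq_N

i.e. for each particle coordinate, integrate the wave function's weight onto the point x, then sum over particles.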

The relevant probabilities are not derived by number of "worlds". Pick some particular moment and correlated history, look backwards (what is recorded in the current "configuration") at experiments, and one should see the proper statistics appearing in the "vast majority" of experiences.

However, there will be plenty of experimenters who see wrong statistics. Everett predicts this with certainty. There is a "world", according to this, that just split from the moment I am writing this, in which all future experiments have spin up coming up 100% of the time from that moment on. Over time, we all end up correlated with this as the experimenters report their fantastical findings.

If they truly believe in Everett's theory, they would accept that they just happen to be in the branch where this happens. In Bohmian mechanics, they would say something else is going on. The odds of seeing something like that in Bohmian mechanics are so vastly, incomprehensibly small, that it is more likely to see cracked eggs reassembling themselves from random thermal motions. But in Everett, it happens with certainty to some universe.

This is the difference. Bohmian mechanics can be readily falsified based on statistical outcomes of experiments. Perhaps not with 100% certainty, but certainly with 100% practical certainty. Everett can never be falsified based on statistics. It could be falsified if something that was supposed to happen with a literal 100% certainty failed to happen, but with anything statistical, it simply can't because the theory says it does happen.

One could modify the theory to cut out the "outlier" worlds. This is, in some sense, what GRW with a mass density ontology does.


I guess I am unclear on one of your points: Let's say we toss a fair quantum coin many, many times. In the Everett Interpretation, yes, there is a "world" in which that coin has always come up heads. But our chance of finding ourselves in that world is vanishingly small.

In the Bohm Interpretation, that coin could always come up heads too, but again with a vanishingly small probability.

So they seem equivalent experimentally to me. (And to the experts who have written entire books on the subject.)

Max Tegmark came up with a way to experimentally determine if the Everett Interpretation is correct. (I believe it was Tegmark who came up with this.) It has a high cost for the experimenter, though!

What you do, is rig a gun to a fair quantum coin, so when you pull the trigger, the gun fires 50% of the time. Now shoot yourself in the head with it many, many times. If you end up surviving many rounds of this, you can be pretty darn certain that the Everett Interpretation is correct.

Never mind the billions of other versions of yourself that you murdered to discover the truth!


For this to work, the gun has to terminate your consciousness before the state of the coin can become entangled with the world. Objects as large as guns cannot (currently) be kept in an unentangled state for the milliseconds required for a bullet to do its work. If it were possible, the entire gun-bullet-head system would need to be cooled to microkelvin temperatures, at which guns or consciousness don't work.


I don't understand your comment. In the Everett Interpretation, the wave function never collapses. Everything is always entangled. "Decoherence", however, causes the perception that the wave function collapses, and consequently this interpretation is often called the "Many World Interpretation", because the theory results in a different "world" (for all intents and purposes) for every possible outcome.

Hence, in the Everett Interpretation, if you shoot yourself using such a quantum gun, every time you pull the trigger, there will end up being one "world" in which the gun didn't go off and one world in which you put a bullet in your head.


Right, with many worlds, if there is any probability of something quantum happening, with say a billion billion to one odds, it will happen with 100% certainty somewhere. And if you are that observer, how do you say it was unlucky/lucky, since it must have happened?


It's no worse than a single world interpretation: These things happen all the time, the branch of math that deals with them is called Large Deviation Theory (and it's closely related to information theory).

One of the corollaries/interpretations of Sanov's theorem is that, generally speaking, when faced with an astonishingly improbable outcome (e.g. flipping 9,000 heads and 1,000 tails out of 10,000 independent coin flips), no statistical test can differentiate between "that improbable occurrence with a fair coin" and "an unfair coin" - the fair coin, when it does something improbable, will have (with overwhelming probability) a specific tilted distribution that looks unfair.
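
To put rough numbers on it (standard large-deviation arithmetic): the rate for a fair coin showing 9,000 heads in 10,000 flips is the KL divergence from 1/2,

    D(0.9 || 0.5) = 0.9 ln(0.9/0.5) + 0.1 ln(0.1/0.5) ≈ 0.368 nats
    P ≈ exp(-10,000 × 0.368) ≈ exp(-3,680)

and, conditioned on that freak event having happened, the flip sequence will look to any statistical test like the output of a coin with bias 0.9.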


But in a single-world interpretation only one outcome happens on a trial. The full distribution does not manifest on a single draw. In MWI all the possibilities occur, which is different.


Somebody usually wins the lottery. That doesn't change the fact that it's really unlikely for any given player to win the jackpot.


I guess you win. MWI is really not different from classical probability in any way.


Any state is equally improbable. It's human judgement to tell whether something was special or not. If the states are all possible combinations of numbers in a lottery, you just call 1 combination "I won" and all the others "I lost". The same thing applies to a cleaned-up room: any arrangement of objects in it is equally probable, but the number of states you would call cleaned up is so much smaller than the number of messy ones that you can say a cleaned-up room is less likely than a messy one (unless you do something about it :P). This is also the foundation of entropy, btw.

TL;DR By grouping states together (human choice), certain arrangements seem more probable than others.


> the Everett interpretation does have one issue: it's not clear why probabilities should work the way they do.

This is referencing the nature of the measure and the difficulty of deriving the Born rule. Why should the outcome of measurements be proportional to the square of the wave function? That's indeed a problem in MW.

But even if that's solved in some way, there's an even more foundational issue of how non-determinism can arise at all in a deterministic theory. The MW reply is that there is no non-determinism, but then has trouble explaining observed reality - which is not a good position for a theory to be in.


> ... although it relies on reifying particles embedded in waves to essentially select a single world, ...

Is the mathematical formalism actually more unwieldy, or is it just this interpretation of what it's doing? The 'select a single world' sounds like taking some of the conceptual framework from Everett and trying to paste it on here, rather than giving the Bohm interpretation its own conceptual framework which could possibly be more elegant?


The Bohm interpretation is quite simple. It does start with the idea that the world is made of particles and that we are satisfied if we can make correct predictions of where stuff is. In other words, I have a computer in front of me and a particle theory would say that is because there are particles making up the computer and they are located in front of me. So that is the correspondence with reality. For some, this is a reasonable starting point. For most others, it is blasphemy for some reason.

Once we settle on stuff with position, then we have to ask how that stuff changes. One option is that we specify accelerations; that's Newton's way.

Another option is to specify velocities. That is the Bohm way. Specifically, the velocity is derived from the wave function of quantum mechanics. It is basically the derivative of the wave function, normalized and made real. Done. You can see a simple derivation of the equations here: https://en.wikipedia.org/wiki/De_Broglie–Bohm_theory#Derivat... The first one derives Bohm's equation very quickly and simply from the starting wave relation equations of Einstein and de Broglie.
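
Written out, the guidance equation (standard non-relativistic form, with Q_k the actual position of particle k):

    dQ_k/dt = (hbar/m_k) Im( grad_k psi / psi ), evaluated at (Q_1,...,Q_N)

So the wave function on configuration space generates a velocity field, and the particles simply follow it.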

The complication is that the wave function is a function of the configurations of all the particles. Thus, to get the velocity, one technically needs the positions of all of the particles of the universe. Practically, one only needs the positions of entangled particles, but still. It is a non-trivial setup though the most basic, natural setup one could possibly have given particles and a wave function. Also, the particle positions do not influence the wave function evolution. This is rather unusual.

For the Dirac wave function, it is even simpler. That object directly gives the velocity of the particles; no derivative needed.

Contrast this with MW which says, I guess, that the fundamental thing that we are concerned with is the wave function. It does not seem that particles exist in any meaningful sense in that theory. It is as if we are machines able to track a singular aspect of the change of an abstract vector in Hilbert space. It is not clear why this vector is often represented as a function on configuration space when there is no configuration of stuff. The theory essentially says that actual reality is nothing at all like what we perceive. I would think an honest account of a theory which only has a wave function would be to formulate the theory on an abstract Hilbert space and derive configuration space and, thus, configurations from that. Not very likely, by the way.

Reality may be deceptive, but I certainly prefer to start with theories in which our experience is explained in a pretty simple and straightforward way: it looks like stuff is over there because there is stuff over there. We have singular experiences because a single experience is what actually happens.

Also note that in Bohmian mechanics, we specify the initial wave function and the initial particles and then evolve the system using differential equations. All of the operator stuff, collapse, etc., comes out of that evolution; we don't need to have any special considerations about them. The quantum formalism becomes analogous to thermodynamics, not a fundamental theory, but a useful practical one replacing the individual evolutions with some useful shorthand.

In MW, there is this question of how to model an experimental situation. Where are the measurement operators coming from? What is a subsystem? In BM, these things arise essentially by conditioning on the configuration of the environment. In certain situations, this will give rise to roughly an isolated system evolving according to its own Bohmian dynamics. The measurement interaction is then represented by an operator, or its generalization, depending. But all of that emerges from the basic differential equations evolving the universe.

It is not clear to me how easy it is to do that kind of analysis in MW. After all, there is no singular experience to break it down to, there is no subsystem, there is no definitive experiment being done. It is not really clear how one would falsify a theory which, more or less, assumes everything happens.


> Reality may be deceptive, but I certainly prefer to start with theories in which our experience is explained in a pretty simple and straightforward way: it looks like stuff is over there because there is stuff over there. We have singular experiences because a single experience is what actually happens.

This assertion does not make sense to me. The Everett Interpretation and the Bohm Interpretation are experimentally indistinguishable from each other, as I understand things. Consequently, there is no mystery at all with the Everett Interpretation as to why things appear to us the way that they do.

Since the Everett Interpretation is a significantly simpler theory than Bohm's, we should prefer it due to Occam's Razor. On the other hand, since they are experimentally indistinguishable from each other, we can never scientifically assert which of the two is correct, no matter how much evidence we have.


Everett's theory categorically tells us that the results of experiments differ from the results in Bohmian mechanics. In Bohmian mechanics, a typical experiment will have one result. In Everett, experiments have all possible outcomes happening. These are not the same.

The "indistinguishable" part happens because, according to Everett, there is some version of the experimenter that will have the same experience as the single experimenter in the Bohmian world.

This is not simpler. I have no reason to believe that there are infinitely many copies of me out there. Everett's theory says that there are. Fine. I can't disprove it. I also can't disprove that every instant of my experience is being carefully orchestrated by a thousand angels. It is experimentally indistinguishable from any theory you care to posit.

But I prefer theories where my actual experience is supposed to be a reasonable reflection of reality. I experience a single me and therefore I would prefer a theory in which there is a single me. Bohmian mechanics provides that and in a completely natural and reasonable way.

Everett categorically disputes my experience as being reflective of reality. There are infinitely many copies of me and my experience of being singular is an illusion. I can't dissuade people from embracing that, but it certainly strikes me as peculiar.

Also, in terms of experiments, Everett has infinitely many copies of the universe where all of the statistics of the experiments come out wrong. There are infinitely many that come out right. Is that experimentally indistinguishable? I don't know. Kind of a strange question in the context of "most everything happens".


This all hinges on a bunch of essentially random metaphysical choices that you have made. For example, what does it even mean for there to be a "single me"? I would argue that in the many worlds interpretation, once a fork happens, the other observers are not "me" anymore. So there's no contradiction between the experience of a "single me", and multiple copies - each copy has its own "single me" experience.


I never said that there was a contradiction. I simply said that theory is suggesting a reality at odds with my experience. That does not mean it is a contradiction. It means that my experience is not a faithful representation of reality.

And that's fine, but the claim that this is simpler than a theory which says my experience is a reasonable reflection of reality is not. Occam's razor is not about the number of equations, it is about what is simplest. I experience a single "me". A theory which supports that experience directly and obviously is simpler than a theory which does not.

This is particularly true when the "extra" equations are simple and obviously a part of the other equations.


But it's not at odds with your experience. It's at odds with "your" experience, where you arbitrarily redefined "your" (not that the conventional definition is any more rigorously defined, mind you...).

The reason why this all gets so convoluted is simply because it exposes how much we rely on terms and concepts that are defined very fuzzily, and often aren't even defined at all, but just accepted for granted as if everyone means the same by them. And then it turns out that we don't, which should really come as no surprise.


To satisfy my own curiosity, do you have a source on that issue with the Everett interpretation? I'd love to read up on what you were talking about in that second paragraph.


I don't have a summary offhand that would be better than anything you can Google up, but this post by Sean Carroll explains the issue and details one possible answer: http://www.preposterousuniverse.com/blog/2014/07/24/why-prob...


I don’t have a real source, but the basic problem is somewhat straightforward. Suppose I have a particle in the state |0> + |1>. (I’m ignoring overall normalization.). After the measurement, the state is (|0>|I measured 0>) + (|1>|I measured 1>). This is a pure (deterministic) state.

It would be nice to say that there’s a 50% chance that I measured 0, but how exactly do you get that in a rigorous way from the state vector above?

To make everything complicated, the answer should not treat the experimenter part of the universe specially.
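
To spell out the target (a sketch in the same notation): with normalization restored, the post-measurement state is

    (1/sqrt(2)) ( |0>|I measured 0> + |1>|I measured 1> )

and the Born rule would assign each branch weight |1/sqrt(2)|^2 = 1/2. The hard part is justifying why a squared branch amplitude should function as a probability when, on this account, both branches simply occur.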


Your second state is only pure if no information has leaked into the environment. The chance of a human-sized object measuring a state without any stray photons or air molecules interacting is basically zero.

As for why 50%, why not the Born rule? Or are you asking how we derive the Born rule?


> Your second state is only pure if no information has leaked into the environment.

Not true in a many-worlds model. The “I measured” part is intended to account for the environment, at least initially.

> Or are you asking how we derive the Born rule?

More or less. In a many-worlds interpretation, there is no Born rule per se. I’m saying it’s not entirely trivial to recover the statistics that the Born rule would give.


> Among the physicists who do care about the different QM interpretations, it is my understanding that most would go with the Everett (AKA "Many Worlds) Interpretation these days.

In a small poll at a conference on the foundations of quantum mechanics in 2011, the Copenhagen interpretation was the most popular. This could differ from your definition of "these days" I suppose.

https://arxiv.org/pdf/1301.1069.pdf

Update: a 2016 survey by different authors found similar popularity for the Copenhagen interpretation.


I can say that this weird acceptance of the logically undefined Copenhagen interpretation was a contributor to me wanting to leave the field ...


Logically undefined in what sense?

Paradoxical traps or thought experiments filled with holes?

MW is even more undefined, in that we cannot even construct a measurement/singular interaction... (it is always derivative of the total system state)


Well, I guess I can't argue with an unscientific poll, but it kind of boggles my mind that any scientist could take the Copenhagen Interpretation seriously.

I mean it builds the term "measurement", which is an undefined and unscientific concept, right into the laws of physics. Personally, I find this to be virtually nonsensical.


The 2016 survey: https://arxiv.org/pdf/1612.00676.pdf

Both surveys are fascinating reads. They clearly give a sense of how far we still are from the final word in that field, despite the spectacular success of QM.


>The only problem is that many people consider it to be "creepy".

That seems like an unfair characterization. The issue many have with the Everett Interpretation is that it relies on a fundamentally untestable existence of every possible universe in every possible state of being for the sake of conceptual tidiness. It's not unreasonable to apply Occam's razor and hold a measure of skepticism about this.


In my opinion, Occam's Razor does not say that you should prefer the theory that has the simplest consequences, but rather that you should accept the simplest theory that explains the observations.

The Everett Interpretation is clearly the simplest theory because the deep mystery of what would cause probability waves to collapse is made completely irrelevant. There is no wave collapse to worry about or to explain.


>The Copenhagen Interpretation is not even a fully-fledged interpretation, because it uses undefined, unscientific terms like "measurement" to determine when a probability wave collapses.

No, measurement is just a poor choice of name for a type of interaction some matter can have with other matter. I'm a wave function, and sometimes I have a measurement interaction with other wave functions and that causes collapse. When wave functions of random environment particles do the measurement and cause (seemingly) spontaneous collapse, we call it decoherence and it's a serious drag on building quantum computers.

I'm not advocating for Copenhagen, but it is not inconsistent for the reasons you stated. Outside the paper linked here, I'm unaware of any inconsistency in the Copenhagen interpretation.


> No, measurement is just a poor choice of name for a type of interaction some matter can have with other matter.

That's not the Copenhagen Interpretation. Interpretations that claim this are called "Objective-collapse" theories. Back when I cared a lot about this, the most popular Objective-collapse theory was GRW:

https://en.wikipedia.org/wiki/Objective-collapse_theory


If "measurement" is an undefined and unscientific term, what is the scientific definition of "branching" in the multiple-words interpretation?

Edit: the most significant problem with the MWI, apart from the "creepy" metaphysical aspects, is the meaning and quantification of probabilities. In particular being able to derive Born's rule, which works so well in practice.


> If "measurement" is an undefined and unscientific term, what is the scientific definition of "branching" in the multiple-words interpretation?

The "Many Worlds Interpretation" is actually something of a misnomer. This is why many people prefer to call it the Everett Interpretation.

It doesn't actually posit many worlds. It posits one very big complex world with very complex superpositions of state. But since your brain ends up in a superposition of states, different facets of this superposition of your brain state perceive this one big complex world, as smaller, simpler "worlds".

And the term for why different pieces of this superposition of states stop having an effect on each other is called "decoherence".

As for how the math works out in terms of probabilities, that is beyond me.

When I studied this, the discussion was usually simplified down to a quantum coin that when flipped would come up heads 1/3 of the time and tails 2/3 of the time.

This only results in two "worlds" though. A heads world and a tails world. So there was an issue that people debated at the time: Why do we perceive the .3333/.6667 probability for these two "worlds", rather than a .5/.5 probability?

I must admit that I am ignorant about the current state of this debate.


Is saying "measurement" really much more undefined and unscientific than saying "different facets of this superposition of your brain state perceive"?


100% yes. You cannot give a scientific accounting of "measurement" since the term isn't even defined in the Copenhagen Interpretation.

There are various interpretations that attempt to define "measurement" in various ways, but those are not the Copenhagen Interpretation.

As for whether you can give a scientific account of how data is processed by a brain in a superposition of states, you most certainly can. (Ever heard of "quantum computing"?) It's just complicated.


“You most certainly can” is not a very satisfactory answer. I don’t say that the Copenhagen interpretation is very satisfactory but at least it predicts the probability of events. The term isn’t even defined in the Everett interpretation. In the best case, it’s incomplete and more metaphysics than physics.

Edit: I’ve never heard of any treatment of “quantum computing” which doesn’t include the concept of measurement, the Born rule and the projection postulate. Have you?

Edit2: it was maybe not fair to say that the probability of events is not defined in the Everett interpretation, because many-worlds interpretations have addressed this issue since the original paper by Everett in 1957. But as far as I know they have not succeeded. Measurement has also been addressed in countless papers and books for almost a century, for what it's worth.


Just wanted to say, as someone who's been following this on the fringe for years, I really appreciate you using words that were easily searched. Really let me step through what you wrote effectively even though I didn't recognize the interpretations by name.


> Regarding there being no single-world interpretation that is logically self-consistent, I'm not convinced about this: The Bohm Interpretation

The paper on which this is based states in the abstract that this fails for Bohmian mechanics:

> This conclusion extends to deterministic hidden-variable theories, such as Bohmian mechanics, for they impose a single-world interpretation.


> The Copenhagen Interpretation is not even a fully-fledged interpretation, because it uses undefined, unscientific terms like "measurement" to determine when a probability wave collapses.

Then why is it being taught everywhere? It's been driving me crazy since forever.


I’d imagine because it is the easiest to explain.


Also because it works 100% of the time, even if it doesn't ask or answer some questions that seem profoundly important.


People have an adverse reaction to the "many worlds" theory. Maybe they don't understand it? Or maybe what I think is the many worlds interpretation is really something else. Back when I was in theoretical physics we weren't careful about defining terms like this because, as the original comment says, many worlds vs Copenhagen was not really an issue of interest, other than in a philosophical sense for people like me. And I didn't take Copenhagen seriously.

If you take away any sentient beings, or whatever it is that is required to make one of these "measurements", then Copenhagen is the same as the many worlds interpretation. Wave functions go on evolving and there is no collapse. For example, an electron can be in spin up or spin down. It is not in both states. In one "world" it is spin up. In another "world" it is spin down. That is strange enough for all of us. But for some reason people have trouble extending this idea to people, so that a person can be in multiple states at the same time (in the different "worlds", in the same sense as the electron being in different "worlds").

What made this strike home to me was when I was in graduate school and my advisor told me "There are no magic external observers. The observer is subject to quantum mechanics too. He is part of the experiment."

To describe the correspondence between Copenhagen and "many worlds", suppose an observer measures if an electron is spin up or spin down. In the "many worlds" case his memory of the outcome is correlated with the measured state of the electron. So in the "world" where the electron is spin up (that portion of the wave function) the observer also thinks the electron was measured as spin up. And in the "world" where the electron is spin down, the observer thinks the electron was measured as spin down.

In this "many worlds" case, The observer who measures the electron as spin up will not interact with the observer that measured it as spin down. For all intents in purposes, it is as if that other observer never existed. In the copenhagen case, that other observer does _not_ exist. In this interpretation, the wave function collapsed to only include the part with a single observer.

In effect the observed outcome of the two interpretations is the same. The difference being that one of them, the Copenhagen interpretation, postulates a magical change in the wave function of the universe.

(Aside - The fact that those observers will not interact is just in a practical sense, to my knowledge. I don't know if it is impossible for them to interact in theory. I don't think it is. Maybe someone else knows the answer to that.)


> For example, an electron can be in spin up or spin down. It is not in both states. In one "world" it is spin up. In another "world" it is spin down.

I don't think most MWIers would agree with this. Normally they consider worlds to have split only once (irreversible, or approximately irreversible) decoherence has set in. An electron in the coherent state |z+> + |z-> = |x+> wouldn't qualify.

In fact, this seems to be one of the biggest difficulties of the interpretation. Nobody knows whether there even is such a thing as in-principle irreversible decoherence, and if there's not, then the point at which it is "approximately irreversible" is arbitrary.


Are you a many worlds person, or is this your interpretation of what they believe? There is no need for an irreversible decoherence. Any real situation where there is a question of Copenhagen versus many worlds is a pretty decoherent problem to begin with, since you are dealing with macroscopic beings.

Edit: Added the state change in the measurement:

(|z+> + |z->)|obs> => |z+,obs+> + |z-,obs->

First there is an electron in one of two states, and the observer is uncorrelated. After the measurement, the observer becomes correlated with the electron.


The only explanation I've ever seen of when worlds split is when decoherence has become "effectively irreversible." That's not a well-defined physical event, and so it's hard to say that worlds "actually" split.

For your unentangled state on the left, Sean Carroll explicitly describes it as a state that doesn't have two worlds yet. I can find the post if you like, or maybe we already agree and I'm misunderstanding.


If you can find the post I'd like to see it. I don't quite follow.

In what I am describing, the two "worlds" don't really separate. It is possible that they can interact, theoretically. However, you can't construct an experiment to detect the different parts of the wavefunction interacting, because of decoherence. You just can not make a coherent quantum system that involves real people (to my knowledge). So in practice you can not do the experiment. In anything we observe, the two resulting observers (in a measurement with two choices) are effectively isolated.

In other words, decoherence is automatic. Also, it is inherently irreversible.

But, I am sure he (or whoever wrote that post) is saying something sensible so I would be interested in seeing it.


I just mean like this: http://www.preposterousuniverse.com/blog/2014/06/30/why-the-...

"We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever."

(State 1 is when the particle is in a superposition by itself and state 2 is when it's entangled with a macroscopic apparatus.)

My issue is that he uses words like "forever" and "impossible." These convey a sense of finality, but the decision of where to draw the boundary is subjective. The worlds can in principle (and under certain cosmological models, must) recohere.

See, for example: https://arxiv.org/abs/1105.3796

"Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment"."

See in particular Section 3.2 (Failure to irreversibly decohere: A limitation of finite systems)

(Edit: I should mention that I am not a physicist, by a long shot. Just a curious amateur.)


> the copenhagen interpretation, postulates a magical change in the wave function of the universe.

And the MWI postulates a magical branching into different "worlds", doesn't it?


No. I may not have conveyed the idea very well in the comment. That is normal quantum mechanics. People just use the term "world" when it applies to these observers and not when it applies to something like an electron.


If this is normal quantum mechanics and the "worlds" are not real, separated physical objects -- and they are just mathematical constructions -- what problem does the MWI solve precisely?

If the universe is an isolated quantum system evolving unitarily, how does the MWI help to understand the laws of physics that we observe?


> If the universe is an isolated quantum system evolving unitarily, how does the MWI help to understand the laws of physics that we observe?

It definitively answers the question of when wave collapse occurs and what causes it.

The answer given by the Everett Interpretation is simply that there is no wave collapse, and what we observe is the result of our brains being in a superposition of multiple states.


Consider this simple application of standard QM:

1) we have a one-particle system that has been prepared into a pure state by measuring the spin along the x-axis

2) we are going to measure the spin along the z-axis

3) the quantum state before is a superposition of the |up> and |down> states (in the basis corresponding to the Sz operator; written out just after this list)

4) the theory predicts that we are going to find either |up> or |down> with equal probability

5) immediately after the measurement the quantum state will be either |up> or |down>, depending on the outcome
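
Steps 3 and 4, written out in the usual notation (a standard textbook sketch):

    |x+> = (1/sqrt(2)) ( |up> + |down> )
    P(up) = |<up|x+>|^2 = 1/2 = P(down)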

What is the answer given by the Everett Interpretation? What is the description of the initial conditions? What is the prediction of theory? What is description of the end state?

I hope the answer is not just handwaving and mumbling about "superposition".


The prediction of the Everett Interpretation is that our brains end up in a superposition of states in which one of those states perceives an up particle and one of those states perceives a down particle, but that the original particle is still, in actuality, in a superposition of up and down states. I.e., the superposition has not actually collapsed into either up or down. It's just that our brains are now in a superposition of states too.

I don't understand why this would be difficult to understand.


> It's just that our brains are now in a superposition of states too.

Are you saying they are "now" in a superposition, meaning they were not before the observation?

If so then how can the act of us observing the thing cause us, our brains, to go into superposition? And is it just our brain that goes into superposition? Why not the rest of my body too? And what if I'm holding the hand of another person, does she too go into superposition?


> the original particle is still, in actuality, in a superposition of up and down states

If you cannot say that the particle is in a well defined up or down state after the measurement then you cannot say either that it remains in a superposition of up and down states. Because that original state was the result of a previous measurement. You cannot even say that the particle (or you, for that matter) does actually exist. The universe is a superposition of states where it does and states where it doesn't. (Does the universe exist at all?)

The standard interpretation of QM:

- we want to explain the physical world that we experience

- we come up with a theory based on a mathematical description of physical states and the equation describing its evolution

- the theory allows for superpositions of states, incompatible with the physical world where we observe only definite states

- why do we observe only definite states?

- we postulate that when we measure, the wave function changes and becomes consistent with the observation

- we also postulate the Born rule to compute the probability of observing each potential outcome

- everything is experimentally verified, there are open questions (what is a "measurement"?) but the theory works fine in practice

- we know that this is not a complete description of the world (gravity!)

The Everett interpretation:

- let's assume that the physical world that we experience is just one aspect of a larger, out-of-reach thing

- what is real is the mathematical description (nevermind that it's incomplete), evolving according to the Schroedinger equation

- why do we observe only definite states?

- because this is the way our brains experience the physical world! (see how easily we solved the issue with measurement?)

- what is the probability of observing each outcome?

- ... (but, hey, did you notice how elegantly we skipped, I mean, solved the measurement problem?)

- we have a collapse-free interpretation of QM! (but remember that if you want to study the physical world that we experience you have to use the projection postulate, because this is the way our brains experience the physical world)

I find it interesting that the MWI is so popular among cosmologists, given that QM doesn't handle cosmological issues well. But of course this interpretation "solves" the problem of why the observed universe is precisely the one that we observe.

Everett's theory is interesting, but not a panacea. Maybe decoherence is the key to explaining "collapse", maybe gravity is the key, maybe the answer is somewhere else...


The paper's title makes the claim succinctly: "Single-world interpretations of quantum theory cannot be self-consistent."

Nature's headline does a terrible job at conveying this, really. I would have expected better from the people who edit headlines there.


Usually when there's a paradox like this, it means we've chosen bad axioms somewhere. Even things that seem intuitively self-evident can prove untrue.

For example, what if time does not necessarily flow in a single direction at the quantum scale; what if instead, the ground level of physical reality is a timeless information graph / equation that is 'solved by the universe'?

I'm no physicist and know nearly nothing about the real math of QM, but every time I read these lay-explanations of "quantum weirdness" and "wave function collapse", I get this strange feeling that we're thinking about time all wrong: What if unidirectional time is an illusion? What if causality (and inference) is an illusion (thus explaining how hard it is to capture it mathematically)?


The specific axioms seem to be at fault in both thought experiments in step 0.00.

The iterated experiment presumes creating a fully known, fixed, identical state psi each round.

The Wigner's Friend experiment makes F1 magically know (memorize) the state of the quantum RNG without measuring it and without being entangled.

Both experiments require cloning, which is forbidden.


> What if causality (and inference) is an illusion

You may want to check David Hume.


We changed the URL from https://www.nature.com/articles/d41586-018-06749-8, which people were complaining about, to one that at least a few readers seemed to like better.


>> Formally, the claim is that there is no physical theory that is (1) consistent with QM, (2) consistent with a single-world interpretation, and (3) logically self-consistent.

Does pilot wave theory fit the bill?


Pilot wave theory (also known as Bohmian mechanics) imposes a single-world interpretation.


My understanding is that it meets all 3 criteria. Hence the assertion that there is no theory that meets all 3 would be false.


According to the paper, Bohmian mechanics violates assumption (Q) (the first one), which is not exactly "consistent with QM", but rather "If an agent knows the state of a quantum system S and knows that a measurement of x applied to the state yields ξ with probability 1, then the agent knows that x = ξ".

Bohmian mechanics violates it by not applying to arbitrary subsystems, but only to the universe as a whole (i.e. parts of the universe cannot be treated as quantum-mechanical systems themselves, because they don't have their own pilot waves)


I find that to be logical, as it seems like it would be incredibly difficult to disentangle an observer from the minute fluctuations and relationships with every other bit of energy in the universe.


re "only to the universe as a whole"--

“The etymology of the word ‘universe’ can be traced back to the use of the Old French univers, in the twelfth century, which derives from the earlier Latin universum. This word is created from unus, meaning ‘one’, and ‘versus’, the past participle of the verb vertere, meaning ‘to turn, rotate, roll or change’. So we have a literal meaning of everything ‘turned into one’ or ‘rolled into one’.” from John D. Barrow’s “The Book of Universes.”

Sounds like Bohm is taking the 'uni' in universe seriously, while the paper starts by assuming there is no fundamental unity in the universe.


The Latin word may have been invented less than a thousand years ago: "...dicuntur universum sive omne : quia universum est unum versum in omnia..."


Haven't read the paper in detail yet, but aren't they claiming it violates assumption SC (self-consistency)?


Parent is correct, Bohmian mechanics violates (1), but we already knew this. Bohmian mechanics allows for systems in quantum non-equilibrium, which is a concept unique to Bohmian mechanics. Unfortunately we don't know how to create such a state to test for it.


Hence non-locality right?


If the single-world interpretation is not true, wouldn't it mean that quantum physics is useless? What use is a quantum computer if the algorithm shows different results to different users?


Quantum computers are intrinsically stochastic machines. Quantum algorithms are about hedging the bets so that you get the "correct" answer at some p>0.5. Then you just repeat the computation until you're reasonably sure you have the right answer. In the Copenhagen interpretation the wavefunction magically "collapses" to one outcome, in Everett interpretation the wavefunction decoheres producing both outcomes in separate "worlds".
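
A toy illustration of the repeat-and-vote idea (plain Python; the per-run success probability p = 0.6 is a made-up number, not a figure for any real device):

    import random
    from collections import Counter

    def noisy_run(correct_bit, p=0.6):
        # One stochastic "computation": right answer with probability p.
        return correct_bit if random.random() < p else 1 - correct_bit

    def majority_vote(correct_bit, n=101, p=0.6):
        # Repeat the run n times and report the most common outcome.
        votes = Counter(noisy_run(correct_bit, p) for _ in range(n))
        return votes.most_common(1)[0][0]

    trials = 1000
    hits = sum(majority_vote(1) == 1 for _ in range(trials))
    print(f"majority vote correct in {hits} of {trials} trials")

With p = 0.6 and 101 repetitions, the Chernoff bound puts the failure probability below exp(-2·101·0.1^2) ≈ 13% (the true value is nearer 2%), and it shrinks exponentially as n grows.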


And what's the use of a noisy channel if it may change bits sent over it? ;)

You just need proper error correction. You are right, of course, if QM is completely noisy or if you can't have proper error correction (due to some physical limitation, for example).

E.g., for quantum computation some people believe so: https://www.quantamagazine.org/gil-kalais-argument-against-q...


I know they are trying to dumb it down, but I think they are leaving out some fundamental elements of the puzzle here. Bob interprets the message incorrectly sometimes and no reason is offered for why or how often. They say the observers get contradicting answers sometimes, but that revelation seems anticlimactic considering I have no reason to believe Bob's observer should observe outcomes matching Alice's observer any more often than Bob himself can "guess" the result.

I might need to dig into the original paper: https://www.nature.com/articles/s41467-018-05739-8


On first glance it seems like they're building a sort of Gödel sentence: a self-referential experiment that forces things to end up looking contradictory.


I was thinking something along similar lines. Hawking gave a lecture[1] regarding Gödel, whose theorems indicate incomplete or inconsistent models of systems. What we may think of as inconsistent is probably an incomplete model. Or as Hawking put it in the lecture: "According to the positivist philosophy of science, a physical theory is a mathematical model. So if there are mathematical results that can not be proved, there are physical problems that can not be predicted."

[1] http://www.hawking.org.uk/godel-and-the-end-of-physics.html


Isn't the measurement enough to change the quantum state of the system?

What is "got told" in this experiment? Does it involve creating an entangled state? In which way?

Otherwise, the story is classical information, which means a measurement must have been done or said story is not tied to quantum state at all.

The whole idea sounds like making classical guesses is supposed to somehow affect quantum states in a predictable way, or in any way at all.


> There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.

> There is another theory which states that this has already happened.

- Douglas Adams, The Restaurant at the End of the Universe


Here's another write-up of the same subject: https://arstechnica.com/science/2018/09/quantum-observers-wi...

"Quantum observers with knowledge of quantum mechanics break reality"


This article's comments section links to reviews of the paper, and, basically, the second reviewer nails it.

In one of the outcomes the authors examine, which gives rise to the apparent contradiction, the measurements do not commute, which means, according to QM, that it is impossible to get that set of results in the real world.


The reviews are here https://static-content.springer.com/esm/art%3A10.1038%2Fs414...

The authors' final response to referee #2:

We thank the referee for the clarification of their earlier comment, which we indeed misunderstood. It is of course correct that the observable of the measurement leading to z does not commute with the one corresponding to the measurement of w. This also means that the values z and w cannot be defined simultaneously (in the sense of "at the same time"). To explain how our argument avoids this problem, note first that the measurements take place at different times (as pointed out in the reply above). Concretely, z is measured at time n:10 and w is measured at time n:30. Note also that, in all statements we are using, we always explicitly specify the time at which a variable is supposed to have a given value. Whenever we are talking about z, the corresponding time lies (strictly) before n:30, whereas we only talk about what value w has at times after n:30. We hence never need to assume that z and w are available at the same time. To clarify this in the manuscript, we have changed the labelling of all statements. They now have a superscript which indicates at what time a statement has been made by an agent.


Disclaimer: in no way am I an expert in physics. I don't even have a degree in it.

Plainly speaking, this argument does not make sense, because there's no such thing as simultaneity.

The way I see it, the commuting argument tells what you, an observer of the entire system, can or can not see at some fixed point in spacetime that is causally descended from the points where the observations occur.

Anyway, the whole discussion about validity of interpretations does not make any sense to me, as long as they represent the same underlying math.


Here the argument is about commuting observables and isn't relativistic. For example, position and momentum are non-commuting observables. If you measure both, and then ask "what is the position of this particle," you have to know whether the position was measured first (in which case the position of the particle is still known), or whether the momentum was measured, so now the position is no longer known.


I think mistreatment of causality might still be another explanation for the result.

In particular, in the iterative process that the authors suggested using to get w=ok, z=ok, each iteration must be connected to the previous in some QM-encodable way, e.g. as { |continue>, |halt> }, that must be set in the last step and measured in the first, therefore combining all participants into a single entangled QM system.

It is not immediately obvious that this addition would not affect the outcome of the experiment, and hence it should at the very least be included in the paper.


Can you please post the link to the reviews so we don't have to slog through all the comments?


Once one person has flipped the coin, it seems wrong to me to say that an observer outside the box sees a superposition of heads and tails, simply because the person outside the box doesn't know the result of the flip. I don't think that a human's ignorance is equivalent to a waveform superposition.

This is true even if you replace the coin with the results of a quantum random number generator. The problem isn't that the coin is a macroscopic object (and therefore doesn't really do quantum superposition). The problem is that, once observed, there is no uncertainty in the output of the quantum random number generator. There's only human ignorance (for everyone except the one person in the innermost box).

And as I understand it, experiment agrees with me. You take an entangled pair of photons, observe one, and you know what the same observation on the other photon will produce, even if the person making the observation on that photon doesn't.

It could be that the write-up is creating confusion, or that I am confused, but something feels off here (and not just in a "that's too weird to be true" way).


I don't think that it breaks quantum mechanics. It shows once again that the results of the double slit experiments are still not completely understood. That idea in the article only means that the Copenhagen Interpretation is being challenged. It doesn't make all the discoveries in particle physics collapse.


Hold on, what on Earth do they mean by a "quantum message"?


Ah: "In each round n = 0, 1, 2, … of the experiment, agent F¯¯¯¯ tosses a coin and, depending on the outcome r, polarises a spin particle S in a particular direction. Agent F then measures the vertical polarisation z of S"

https://www.nature.com/articles/s41467-018-05739-8


Is the point of this so that the whole situation in boxes remain an unobserved quantum state rather than classical?


This writeup is incomprehensible. Does anyone know of a better one?


I'm working on it (definitely room for improvement there), but the original paper is long and it will probably take me the better part of the day to work through it (and I also have to get some work done). In the meantime, the same conclusion can be reached by a completely different argument:

http://www.flownet.com/ron/QM.pdf

Same content in a video:

https://www.youtube.com/watch?v=dEaecUuEqfc

UPDATE: Just found this:

https://arstechnica.com/science/2018/09/quantum-observers-wi...

By way of:

https://news.ycombinator.com/item?id=18025438


Thanks! We've updated this submission to the Ars article from https://www.nature.com/articles/d41586-018-06749-8.


I found this article impossible to make sense of. But what I did gather reminds me of the simultaneity paradox in relativity where it's possible for different observers of two events to perceive their relative order in time differently. So that observer 1 may see event A happen before event B, observer 2 may see event B happen before event A, and observer 3 may see events A and B happen at the same time.


This is different, because relativity can actually be observed directly. Different interpretations of QM cannot.


I think people trying to interpret quantum mechanics are misled by saying that the probability wave, upon observation, collapses to a precise value that stays exactly the same upon the next measurement. That's unrealistic, because no measurement is of perfect precision. The next measurement will give a slightly different value. So we really can't tell if we have one exact value or still a probability wave, just reshaped into a very narrow thing by the interaction needed to do the measurement. We can make the probability wave wider and fuzzier again just by scattering the particle we measured off some other unmeasured particle.

That's how I understand quantum mechanics wave-particle duality: there is no duality. There are just waves that interact and reshape each other through those interactions. And the only reason we sometimes think of them as little billiard balls is that, annoyingly, the equations that govern the evolution of narrow, sharp probability waves look (not accidentally) exactly like the primary-school math that governs billiard balls.


If you make 3 subsequent measurements of different but related variables (like a vector component with respect to 3 different axes), the 3rd's probability distribution will depend on the value you observed in the 2nd observation in a way that cannot be replicated without the 2nd.

Meaning it's not just matter of poor precision. See the Stern-gorlach experiment.
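For anyone who wants to poke at the numbers, here's a minimal numpy sketch of the two-level math behind that (my own toy illustration of sequential spin measurements, not the actual Stern–Gerlach apparatus):

    import numpy as np

    up_z = np.array([1, 0], dtype=complex)               # spin UP along z
    up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)  # spin UP along x

    # After a first z measurement gave UP, a repeated z measurement is certain:
    print(abs(np.vdot(up_z, up_z))**2)   # 1.0

    # Insert a measurement along x in between and suppose it gave UP_x;
    # the state is now up_x, and the final z measurement is 50/50 again:
    print(abs(np.vdot(up_z, up_x))**2)   # 0.5 -- the middle measurement matters

No amount of extra precision recovers the lost z information; the middle measurement genuinely changes the final statistics.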


A simple version of this (not the exact experiment) is to take three polarizing filters. Put one down, and see it reduces the amount of light passing through. Put a second one down, in line with the first, then rotate it to 90 degrees off. You will observe the light dim until the second filter is completely dark, blocking all the light which passed the first filter. Finally, take the third filter, and place it between the first two, at 45 degrees off, and suddenly light will pass through all three.

This very clearly demonstrates that each filter reshapes the wave's polarization, not to a single orientation, but to a narrow band.
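The idealized numbers, for the curious, follow Malus's law (I = I0 cos² Δθ); a quick sketch, ignoring absorption losses in real filters:

    import numpy as np

    def through(intensity, delta_deg):
        # Ideal polarizer at angle delta_deg relative to the light's polarization
        return intensity * np.cos(np.radians(delta_deg))**2

    I0 = 1.0                              # intensity after the first filter (0 deg)
    print(through(I0, 90))                # ~0.0 -- 0 -> 90 deg: fully blocked
    print(through(through(I0, 45), 45))   # 0.25 -- 0 -> 45 -> 90 deg: a quarter gets through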


The imprecision persists, e.g. in your comment: "the wave" is something I can only picture with literal hand-waving. I'm being cynical because this polarization-filter experiment had me dumbfounded.

Blocking the way between the second and third filter will show no light, as if the third filter wasn't there. It appears as if there is a wave, but one polarized in a way that doesn't reflect off the hand, and the third filter would polarize it again. I believe that's also what you are saying. Only, what is "the wave"?

> a narrow band.

Are you agreeing with Scotty79 above?

What band?

But anyway, I don't see the analogy to Stern–Gerlach (which I only skimmed quickly on Wikipedia).


I think the problem he points out is that the measurement apparatus (e.g. the polarizer/slit/field) is never perfect. In fact, the measurement devices are themselves made of materials at finite temperature with quantum effects, so the subsequent probabilistic results won't be "exact". Of course you can design the measurements to be exponentially insensitive to those variations. However, when you build a sequence of measurements, each exponentially sensitive to that variation, you make setting up each individual measurement ever more difficult.

It still doesn't prevent the thought experiment.


Yes. Because the second measurement will reshape the probability wave of the measured object.

There can't be a measurement without interaction, and no interaction is without consequences. Also, the uncertainty principle.


Are you suggesting that either wave collapse does not occur, or that instead of wave collapse, that there is a wave "narrowing" which is imperceptible to the observer (who observes it as a particle)?


It's more correct to say "apparent wave function collapse", because wave function collapse as a real physical phenomenon or event is just a hypothesis in some interpretations.

Mathematically, the "apparent collapse" is explained by decoherence. It provides an explanation for the observation of wave function collapse when the system and the environment mix in an irreversible way.


>> It's more correct to say "apparent wave function collapse" because wave function collapse as a real physical phenomenon or event is just a hypothesis in some interpretations.

Wave function collapse is fiction IMO. There is no observable difference between a particle whose wave function has collapsed and one that hasn't.

If there were an observable difference, we could place an emitter of entangled particle pairs midway between two observers (Alice and Bob) arbitrarily far apart. Alice could either turn on her measurements of the particles that arrive on her end or leave them off. This would cause Bob to receive either a stream of particles with collapsed wave functions or a stream of uncollapsed ones. Alice could then transmit information to Bob faster than the speed of light using Morse code, which is not possible. The reality is that Bob can't tell anything about the superposition of states (or lack thereof) by observing his particles.
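This is essentially the no-signalling theorem. A quick numpy check on a Bell pair (my own toy setup): Bob's reduced density matrix, which fixes all of his local statistics, comes out the same whether or not Alice measures:

    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2); qubit order is kron(Alice, Bob).
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    rho = np.outer(bell, bell.conj())

    def bob_state(r):
        # Partial trace over Alice's qubit.
        return np.trace(r.reshape(2, 2, 2, 2), axis1=0, axis2=2)

    print(np.round(bob_state(rho), 3))     # Alice does nothing: I/2

    # Alice measures in the z basis; Bob sees the resulting statistical mixture.
    P0 = np.kron(np.diag([1., 0.]), np.eye(2)).astype(complex)
    P1 = np.kron(np.diag([0., 1.]), np.eye(2)).astype(complex)
    print(np.round(bob_state(P0 @ rho @ P0 + P1 @ rho @ P1), 3))  # identical: no signal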


The observable difference is that a particle's properties have been observed.

An uncollapsed state is a placeholder - constrained by statistics, but with no definite value. A collapsed state has a definite value along some measurement basis.

Entanglement is a different phenomenon.

Uncollapsed entangled pairs remain unmeasured by anyone. Either Alice or Bob can collapse the state of an entangled pair by measuring one item in the pair. This constrains the possible results the other person can find when they measure the other object (if that's a word that even applies in QM...) in the pair.

The correlation isn't obvious unless the results are compared, so neither experimenter can see it on their own.

But there has to be collapse - or decoherence, or a measurement, or some kind of interaction - for the correlations to become observable at all.


The Copenhagen interpretation is not that easy to debunk. It is not backed by any additional math beyond the standard quantum-mechanics math, which is solid but backs multiple interpretations.

In physics, if something doesn't have its own math then it is just an opinion. You can't prove it, you can't debunk it, you can have a different one. Ultimately it doesn't matter.

For me the Copenhagen interpretation becomes absurd from the moment it states that the observer matters. Each observer is just lots of particles, and the fact that they are arranged to have eyes and a brain can't possibly matter for the basic laws of physics.


The Copenhagen interpretation does not imply a human or conscious observer. That's called quantum mysticism, which usually uses the Copenhagen interpretation as a starting point.

What Heisenberg and others assumed was that there exists a so-called 'Heisenberg cut' somewhere between quantum and macroscopic phenomena.


That's not obvious to everybody. Even in this post there is mention that Schrödinger's cat is somehow different from Schrödinger's scientist.

And I've seen serious physics lecturers claim that even a recorded measurement didn't cause collapse because the recording was not seen by anyone.

This is completely bonkers, and mysticism is the right word for it.

The people who invented the thing are to blame. Why call something 'observation' if you mean interaction with a macroscopic object?

And why make the fuss about Schrödinger's cat?


You don't need eyes and a brain to observe, a photon can be an observer.


> There is no observable difference between a particle whose wave function has collapsed and one that hasn't.

Don't you think that there is an observable difference between a particle that has already been observed (i.e. the wave function has collapsed) and a particle that has not been observed yet (i.e. the wave function has not collapsed)?


The observed one just has a narrower wave function. So narrow that we can be pretty sure the next measurement of the same thing will fall close to the first one.


If we measure, say, the spin of an electron along the z-axis the only possible outcomes are UP and DOWN.

If we measure UP, the next measurement of the same thing can only be UP. The system has a narrow wave function indeed.

(On the other hand, if the system was originally prepared in a pure state along a perpendicular axis and has not been observed yet the probability of measuring UP or DOWN is 50/50. The difference between these scenarios is quite observable.)
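The arithmetic, in case it helps (assuming UP along z is (1, 0) and the perpendicular-axis state is (1, 1)/√2):

    import numpy as np

    up_z = np.array([1, 0])
    up_x = np.array([1, 1]) / np.sqrt(2)   # prepared along a perpendicular axis

    print(abs(up_z @ up_z)**2)   # 1.0 -- repeated z measurement: UP for sure
    print(abs(up_z @ up_x)**2)   # 0.5 -- unobserved perpendicular state: 50/50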


Yes. Spin UP and DOWN are so "far" from each other that a detector that can tell them apart reshapes the probability wave so much that from that point on the detector will always register UP.

Think about location. If you send an electron through a half-mirror and put a screen behind it and see the electron hitting the screen, then you cut your wave function down to at most half its size. You'll never see this electron in front of the mirror, because that small interaction with the screen narrowed down the possibilities.

You'll say that after it hit the screen it must go on in a straight line and will hit any subsequent screen as if it were a small ball just flying through space and screens.

But imagine, instead of the screen, a detector made from a solenoid that will notice a passing electron but not exactly where it is. This way the location is still uncertain. The probability wave is not that narrow. The only thing your detection did is reshape the wave so you no longer have any possibility of finding the electron in front of the mirror.

My point is that there's no difference in the math between a moving ball hitting things on the way and a probability wave evolving in spacetime, exchanging momentum and energy with other waves, and getting reshaped during those exchanges.

We know the equations that govern this evolution, we know what rules the exchanges obey and how the waves change shape when they interact.

So why even have little balls cluttering our imagination if they are not needed to explain anything?


I think what you call "reshaping" is what the standard interpretation calls "collapse".

Maybe you will find it easier if, instead of thinking about it as "something unlocalized becoming a point", you think about it as "the observable we measure is represented by an operator, and after the measurement the wave-function will be the eigen-function of the operator corresponding to the eigen-value that we obtained as the outcome of the measurement." This new wave-function is not necessarily "localized".

By the way, the wave-function is not a probability wave evolving in spacetime. It is a (complex-valued) mathematical function "evolving" in parameter space.


> I think what you call "reshaping" is what the standard interpretation calls "collapse".

Yes. But traditionally the collapse is from a wave into the result of a measurement. My point is that it's actually from a wave into a wave.

> operator corresponding to the eigen-value

To me this suspiciously sounds like a single value, something of a different nature than the operator you mentioned earlier.

> It is a (complex-valued) mathematical function "evolving" in parameter space.

I know. I called it a probability function because its squared modulus can give you the probability of getting a measurement of a given value.


Traditional collapse is from wave-function to wave-function. The result of the measurement is something different (but related).

The result of the measurement is one of the eigen-values of the observable operator (that's the essence of quantum mechanics, the set of outcomes is "quantized").

If the eigen-value observed is non-degenerate there is only one wave-function that corresponds to that pure state (i.e. the corresponding eigen-function). Immediately after the measurement, that will be the wave-function describing the system (and then will evolve according to Schroedinger's equation). In the general degenerate case, it will be the projection of the wavefunction before the measurement on the corresponding subspace.

Depending on what you measure, the probability density corresponding to the after-measurement wave-function may or may not be concentrated at some point. It will happen if you measure the position (you know where it is now!). But if you measure the momentum, the location becomes completely undetermined.


That's great. So I pretty much have a correct intuitive understanding of quantum mechanics (except for that degenerate thing, which I never encountered before).

Thank you for your comments. At least now I can be at peace when I read pop-sci articles about quantum mechanics, knowing that they are just using extreme simplification (to the point of being wrong), but the actual current understanding is the same as what I believe, and I'm under no obligation to share my insights. ;-)


Remember: "I think I can safely say that nobody understands quantum mechanics." ~ Richard Feynman

Edit: Degenerate means that the measurement doesn't completely determine the state of the system. In that case the state before the measurement is projected onto the subspace of states compatible with the measurement; I think this may be more or less what you meant by "narrowing down the possibilities".


More precisely (Quantum Mechanics, McIntyre):

"Postulate 5 (projection postulate): After a measurement of A that yields the result a_n, the quantum system is in a new state that is the normalized projection of the original system ket onto the ket (or kets) corresponding to the result of the measurement: ∣ψ′⟩ = P_n ∣ψ⟩ / sqrt( ⟨ψ∣ P_n ∣ψ⟩ )"

I agree that there are reasons to prefer waves to idealized point-like particles as the more fundamental description (singularities are always difficult to handle). But that's essentially unrelated to the single-world / many-worlds debate, I think.


This sounds much more like what I imagine.

Before the collapse you have a system whose evolution is governed by some equations; then an interaction happens, and you get a new system that is a transformation of the old one, dependent on the result of the interaction (measurement). But what we get in the end is still of the same nature and still evolves according to the same equations as the initial system.


That's QM in a nutshell. Of course, it's not completely clear why two different regimes are needed and when one or the other should be used. But as John Bell wrote in his "Against 'measurement'" article (https://www.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf): "ORDINARY QUANTUM MECHANICS (as far as I know) IS JUST FINE FOR ALL PRACTICAL PURPOSES".

Quoting Everett's dissertation (http://www.weylmann.com/relative_state.pdf):

We take the conventional or "external observation" formulation of quantum mechanics to be essentially the following [1]: A physical system is completely described by a state function ψ, which is an element of a Hilbert space, and which furthermore gives information only to the extent of specifying the probabilities of the results of various observations which can be made on the system by external observers. There are two fundamentally different ways in which the state function can change:

Process 1: The discontinuous change brought about by the observation of a quantity with eigenstates φ_1, φ_2, ... , in which the state ψ will be changed to the state φ_j, with probability |(ψ,φ_j)|^2.

Process 2: The continuous, deterministic change of state of an isolated system with time according to a wave equation dψ/dt = Aψ, where A is a linear operator.

This formulation describes a wealth of experience. No experimental evidence is known which contradicts it.

[1] We use the terminology and notation of J. von Neumann, Mathematical Foundations of Quantum Mechanics, translated by R. T. Beyer (Princeton University Press, Princeton, 1955).


> receive either a stream of particles with collapsed wave functions, or a stream of uncollapsed ones

I don’t think there’s any way for Bob to determine whether the particle stream has been measured by Alice or not.


>> I don’t think there’s any way for Bob to determine whether the particle stream has been measured by Alice or not.

That's why I wrote: "There is no observable difference between a particle whose wave function has collapsed and one that hasn't."

If there were a way to tell, it would violate relativity by allowing FTL communication.


Yes. Also, you never really observe a particle. You observe interactions. You see that this electron bounced off this ion. You see that energy was exchanged, because that energy triggers a chain of events that in the end influences a macroscopic object to the point that you notice it with your own eyes and can write it down as a measurement.


What do you mean? Observing particle A is another way of saying measuring the state of particle B after an interaction with A, specifically comparing state B1 to the state B0 expected when no interaction has occurred.

Using the words "not really" is like saying I should've used exact particle names and state matrices instead of placeholders.


The math is solid. My objection is to calling it "observing a particle".

It's like watching an animated movie and saying "I observe Donald Duck moving". There's no Donald Duck. What you see is 24 completely separate, slightly different pictures per second that give you the illusion that Donald Duck exists and is moving.

It's perfectly fine to say Donald Duck is moving when you are considering the Earth orbiting the Sun, but when you are talking about particles and quanta it misleads you into thinking Schrödinger's cat should be dead and alive before you open the box.

Conversely, thinking about "particles" not as specks of dust but as whole fuzzy clouds may give you a lot of new intuitive understanding of what's actually happening. Like when you drop the planetary model of the atom and consider orbitals instead: then you can understand chemical bonds better. Things that didn't make sense earlier, like benzene, now make perfect sense, and orbital hybridisation brings new explanations.


From the original paper:

>Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents’ conclusions, although all derived within quantum theory, are thus inconsistent. This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.

It really sounds to me like what they're saying is that some phenomena can be described in multiple ways. And in particular, it is possible to consistently explain a phenomenon as X in one setting and as Y in another.

This seems akin to two strings (A and B) hashing to the same value: the creator of the hash had one value (A), and an observer with a rainbow table believes the value is different (B). Both are right, but the observer is not able to recover the creator's intention.

Is this analogy roughly correct? And if so, why is this so darn surprising?


Here's my writeup; this is a first attempt, so please offer constructive criticism if it's wrong:

2 labs (L1 and L2); these are like the box of Schrödinger's cat: they are in quantum uncertainty.

2 lab workers (Alice, Jane)

2 outside observers (Bob, Frank)

Time is denoted as T0, T1, ...

T0 Alice in Lab1 randomly selects heads / tails

T1 Alice uses heads / tails to set up a quantum particle S

T2 Jane in Lab2 reads the state of S, infers heads / tails and stores this information in a new particle Z

T3 Frank reads the state of Lab2 to infer Z, S, heads / tails and determines pass / fail (pass == was heads); let's call this variable W_frank

T4 Bob reads the state of Lab1 to infer S and heads / tails, and then makes his own pass / fail call (pass == was heads); that's variable W_bob

Since these are separate readings on untangled particles by Bob and Frank, they can get disparate readings W_frank != W_bob

This is a problem since we "collapsed" the state W_frank -> Z -> S -> Coin, but this doesn't necessarily imply that we can know with certainty that W_bob will match (i.e. forward-collapsed Coin -> S -> W_bob).

Basically it's just that each lab is in quantum uncertainty (the quantum coin is heads/tails, S is in either state, Z is in either state until measured). Making the measurement should reveal this, but there's no guarantee both collapses will result in the same outcome (so in one case the coin was heads, and in the other it was tails).
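For what it's worth, here's a numpy sketch of the state in the actual paper, which uses slightly different numbers than my labels above (the coin comes up heads with probability 1/3; on heads the spin is prepared DOWN, on tails as (DOWN+UP)/√2; the outside observers then measure both labs in rotated "ok/fail" bases):

    import numpy as np

    heads, tails = np.array([1., 0.]), np.array([0., 1.])
    down, up = np.array([1., 0.]), np.array([0., 1.])

    # |Psi> = sqrt(1/3)|heads>|down> + sqrt(2/3)|tails>(|down>+|up>)/sqrt(2)
    psi = (np.kron(heads, down) + np.kron(tails, down) + np.kron(tails, up)) / np.sqrt(3)

    # The outside observers' "ok" outcomes, in bases rotated against heads/tails and down/up:
    ok_coin = (heads - tails) / np.sqrt(2)
    ok_spin = (down - up) / np.sqrt(2)

    print(abs(np.kron(ok_coin, ok_spin) @ psi)**2)   # 1/12: the paradoxical branch occurs

So the branch where both outside observers see "ok" (and the agents' reasoning chains contradict each other) has probability 1/12 per round: rare, but it does happen.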

My conclusions:

1. There is no randomness, we just suck at measuring still

2. We are in the "many worlds" but impossible branches can't happen (reality stays consistent somehow) -- so it won't happen even though it's theoretically possible

3. The forward collapse does happen, we just haven't done the experiment to verify it. In other words, the pass/fail result would change to keep things consistent (the whole system is entangled). Including the memories everyone would have about it. So maybe this is happening constantly but we just don't know it because it changes even our memories about it.

4. This experiment fractures reality, and we realize we all live in a simulation and that's where "white holes" come from :P


I think their problem is considering that the labs L1 and L2 are isolated quantum systems because there is obvious entanglement through the state S.

In the sequence of events that they present, at T1 the observer in L1 prepares S, transfers it to L2, and, knowing the current state of L2, predicts what the observer outside L2 will measure later, assuming that L2 is left unperturbed (i.e. the actions of the internal observer do not affect the system).

But this prediction is invalid because the protocol specifies that before the last observer enters the scene at T4, the system L1/L2 will be perturbed by another external observer (the observer outside L2, when he does his thing at T3; as you said, he's effectively measuring Z).


Lubos Motl’s take on the paper: https://motls.blogspot.com/2018/09/frauchiger-renner-qm-is-i...

(Tl;dr: “People who still try to prove an inconsistency of quantum mechanics in 2018 are cretins.“)


So to clarify, one info flow is:

Alice: Coin -> S, Jane: Z, Frank: Pass / Fail

The other is: Alice: Coin -> S, Bob: Pass / Fail


The original Maxwell's demon thought experiment didn't work because there was some "work" hidden in the formulation.

A few days ago I was reading this: https://phys.org/news/2018-09-quantum.html (it didn't make the front page). It's about the difficulty of making quantum information disappear.

As a layman, I can imagine that decoherence plays a similar role in these thought experiments. Maybe perfect transfer of quantum information is impossible.

It would be a nice merge of Schrödinger's cat with Maxwell's demon. :-D


Thought this might be interesting about many-worlds, since Feynman, Hawking, and others are on board with it: https://www.hedweb.com/everett/everett.htm#believes. I keep seeing that the string theorists and cosmologists who seem to be the leaders in the field are leaning toward many-worlds as the most sensible explanation. That makes this article seem less interesting: the majority may fall into Copenhagen, but the ones actually doing the real work are more in the many-worlds camp than not.


Is Hawking still onboard?


Unfortunately, Hawking passed away in March of this year. :(


I think that was the joke.


Or did he?


Depends. In an Everett (or Steins;Gate) sense, possibly. In a Bohm sense, probably not. :-) But string modeling has to be tested somehow, and here you're dealing with essentially VR spaces and forms supervening on a quantum substrate with unascertainable so-called hidden variables. We probably need better brains to tackle this one.


Right, it stops being science if it cannot be tested or verified. I wonder if quantum computers are helping, though, in the sense that they're a practical use of being in many states at once, although not in the macro sense.


:) Maybe in spirit? I don't know, but that may be a real question in the sense of this. Funny. Sean Carroll, Michio Kaku, Brian Greene, and Lawrence Krauss are some others I've heard are for it, but in light of that comment, maybe just in this version?


> And different researchers tend to draw different conclusions. “Most people claim that the experiment shows that their interpretation is the only one that is correct.”

Exactly like in the experiment ...

Edit: To be clearer, why not try to explain the way the paper is received with the very material the paper is about?


> (Frauchiger has now left academia.)

Tangential: why is that statement in the article?


Because she's no longer at ETHZ and normally they would add her current affiliation.


Our simulation is written in a functional language using lazy evaluation, obviously. Observing a quantum state forces evaluation and consumes more memory on the machine running our simulation. If we observe too much the beings simulating us are going to kill -9 us.


The tao that can be named is not the real tao.


I am happy and sad to hear this.



