The Quantum Theory and Reality (1979) [pdf] (scientificamerican.com)
93 points by xtacy on Jan 28, 2018 | 163 comments



Wave function collapse is not an observed physical phenomenon. Some interpretations of QM require it to exist, but there is no empirical evidence of it happening.

Apparent wave function collapse happens when a wave function in a superposition of several eigenstates appears to reduce to a single eigenstate. Apparent wave function collapse is mathematically equivalent to quantum decoherence, where the wave function never really collapses but the states get entangled with the observer.

If somebody were able to formulate an experiment that would show the difference between decoherence and collapse, that would be new physics and we would be able to rule out some interpretations of quantum mechanics. Until that happens, 'shut up and calculate' seems to be a valid course of action.

As far as I understand, the philosophical difference between apparent and actual wave function collapse is that in apparent collapse the probabilities of the other states get so close to zero that they don't matter; in actual collapse they are exactly zero.

The assumption that human consciousness has something to do with setting all other states to zero is a weird one, and I can't completely understand the assumptions behind it. I guess the idea is that we would not experience the world as we experience it now if there were just continuing decoherence.


The philosophical difference is that without collapse, all of the eigenstates contributing to the initial state will get entangled with their own 'copy' of the environment, which is where the moniker 'many worlds interpretation' comes from.


What is unclear to me in the many-worlds hypothesis is why these 'copies' don't constantly interact with each other and lead to continuous decoherence.

How is this justified?


Each of the worlds ends up distinct, e.g. all the air molecules end up in slightly different positions. Worlds that different don't interfere with each other. (This is predicted by the math, the math is the justification.)
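
The math in question can be sketched with a toy calculation: the overlap between two branch states is a product of per-particle overlaps, each slightly below 1, so it vanishes exponentially in the number of environment particles that differ. (A minimal sketch; the 0.999 per-particle overlap is an illustrative number, not derived from anything.)

```python
# Toy model: two "worlds" differ slightly in the state of each of N
# environment particles. The total overlap between the two branches is
# the product of the per-particle overlaps, so it decays exponentially
# in N. Macroscopic branches therefore effectively cannot interfere.
per_particle_overlap = 0.999  # illustrative value, slightly below 1

for n in (10, 1_000, 100_000):
    print(n, per_particle_overlap ** n)
```

Even a tiny per-particle difference drives the total overlap to something like 1e-44 once ~10^5 particles are involved, which is why the branches behave as independent worlds.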


> Apparent wave function collapse happens when a wave function in a superposition of several eigenstates appears to reduce to a single eigenstate. Apparent wave function collapse is mathematically equivalent to quantum decoherence, where the wave function never really collapses but the states get entangled with the observer.

The problem with this interpretation is the fact that density matrices work: that certain probability-weighted ensembles of states are completely, provably equivalent to other probability-weighted ensembles of states. If you don't interpret collapse as physically real, then it remains to be explained why this should be the case.


> If you don't interpret collapse as physically real, then it remains to be explained why this should be the case.

An explanation is only needed if decoherence is not a good enough explanation.


"Decoherence" does nothing to explain it.


Describe an experiment that differentiates between decoherence and wave function collapse.


Describe an experiment that differentiates between a photon that has a 1/2 probability of being vertically polarised and a 1/2 probability of being horizontally polarised, and a photon that has a 1/2 probability of being left circularly polarised and a 1/2 probability of being right circularly polarised. Under "decoherence" those should be different things, and for them to be observationally indistinguishable would be nothing short of magical.
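
For reference, the indistinguishability of those two ensembles is exactly what the density-matrix formalism predicts: two ensembles with the same density matrix give identical statistics for every possible measurement. A minimal numpy check (the state labels are just the standard linear and circular polarisation bases):

```python
import numpy as np

# Polarisation basis states: |H>, |V> linear; |L>, |R> circular.
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
L = (H + 1j * V) / np.sqrt(2)
R = (H - 1j * V) / np.sqrt(2)

def rho(ensemble):
    """Density matrix of a probability-weighted ensemble [(p, state), ...]."""
    return sum(p * np.outer(s, s.conj()) for p, s in ensemble)

rho_linear = rho([(0.5, H), (0.5, V)])
rho_circular = rho([(0.5, L), (0.5, R)])

# Both ensembles reduce to the maximally mixed state I/2, so no
# measurement can tell the two preparations apart.
print(np.allclose(rho_linear, rho_circular))  # True
```

Both density matrices come out as I/2, which is the "magical coincidence" the comment above is pointing at.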


>photon that has a 1/2 probability of being left circularly polarised and a 1/2 probability of being right circularly polarised.

That's a description of the photon __before__ decoherence and before observation. If you construct the combined wave function after the photon has interacted with the photon sensor, those probabilities have settled one way or another.


True but beside the point. "Decoherence" can't explain why those two cases are indistinguishable, unless you define it as a distinct phenomenon i.e. a collapse.

(Scare quotes because people use "decoherence" to mean multiple different things)


There is no need to explain it.

https://en.wikipedia.org/wiki/Quantum_decoherence


Yes there is. If you believe the wavefunction is physically real, then two different wavefunctions should correspond to two different physical realities. As I said, it's far too magical a coincidence for these two very different ensembles to not be experimentally distinguishable.


What you say is true, except that as long as there is no evidence for collapse and it's designed to be undetectable, accepting it in a theory is nonsense.

You might as well postulate there are magical pink unicorns dancing on every particle. By your logic that theory also can't be ruled out.


It's not my logic. I was attempting to describe the situation. If you know better how the logic in interpretations requiring wave function collapse works, be kind and explain it.


Ok, I'm going to hijack this thread to get an answer to something I've always struggled with: how does 'observation' cause the wave function to collapse?

It won't happen in a closed box, but what about an open box in a closed room? What about a closed box with a live video camera? What about a live video camera whose display no one is watching?

I'm sure this is a basic question but for the life of me, I've never really understood it. Thanks.


The standard story about measurement/observation and quantum collapse is not the only interpretation of the available data.

There are, broadly, three kinds of interpretations of what's 'really' going on behind the scenes:

1. Collapse - The world exists in an indeterminate state until observation occurs, when the world collapses into a single determinate state. The probabilities of quantum mechanics map onto the different parts of the unobserved indeterminate state.

2. Many Worlds - Quantum phenomena cause the world to branch into multiple worlds. The probabilities of quantum mechanics represent the 'share' of reality that branches in each direction.

3. Hidden variables - The probabilities of quantum mechanics are artifacts of our inability to know all the relevant variables. Measurement necessarily involves causal contact, and causal contact will always disturb some of the relevant variables in unpredictable ways. It would be cool if we could observe what goes on when measurement occurs, but to do that would require measurement! So we're stuck with hidden variables.

The public tends to hear the collapse interpretation most often. Physicists tend to like the many worlds interpretation. Philosophers of science tend to like the hidden variables interpretation, because the other options require an incoherent metaphysics. (I'm a Philosophy PhD student.)

People say that the hidden variables interpretation is ruled out on experimental grounds, but this is demonstrably false. Experimental data shows that hidden variables, if they exist, violate locality:

Locality - Causal interaction is a local phenomenon. No action at a distance.

So long as you're willing to abandon locality, hidden variables can work. Given that the other two interpretations posit equally weird things, abandoning locality won't seem so weird.

For more on this, see Tim Maudlin's excellent paper, "Three Measurement Problems"

https://www.academia.edu/32885328/Three_measurement_problems


Interpretation #3: Hidden variables / pilot-wave / de Broglie-Bohm is by far the simplest and also the easiest to understand. It's a shame that it's not the default that's taught.

As a very loose analogy, the world is "normal" (no collapsing or multiple universes or anything). It's simply that every particle leaves a "wake", just like a boat on a lake. That's it.

Classical interaction is two boats interacting with each other (crashing into each other, or one pushing the other, or pulling the other like a tugboat). And the only way two boats can affect each other is directly.

Quantum interactions are "wakes"/"waves" in the water interacting with boats. One boat can be far away, but if it's big enough its wake can move your boat. This, as you'd imagine, happens more to smaller, lighter boats. And as waves push small boats around, they affect where a boat will be when you look to measure it. (Measurements also create a wake.)

A boat's wake can also interfere with another boat's wake, and that interference can be constructive or destructive, just like two waves in the water can cancel each other out. Finally, a boat on the other side of the lake can affect your boat via its wake, meaning interactions aren't strictly "local".

It's that simple (more or less). It explains all the predictions of QM, and yet it's taking forever for Copenhagen to die off, with all its absurd questions of collapse and the rest...

De Broglie-Bohm is the only interpretation that makes sense; people are just slow to accept that locality isn't always the case.


Pilot wave theory might be easy to visualise if you don't consider entanglement, but it suffers from the conceptual problem that it's just many worlds with extra undetectable structure.

The pilot wave is the entire uncollapsed wave-function, exactly like in many worlds. There is also a particle and instant, universe-wide collapse, neither of which are detectable even in theory. Nothing except the wave function affects the dynamics at all.


All three interpretations are 'just the wave function'. What separates them is the metaphysical interpretation of what is 'behind' the wave function.


I can't quite see how it's easier or more plausible than e.g. many worlds? That one is as easy conceptually and is not without elegance.

Physics has a history of elegant theories that eventually turned out to be wrong. Phlogiston theory looked great at the time, and things like the one-electron universe strike you as beautiful when you first read about them.


> Physicists tend to like the many worlds interpretations.

According to Sean Carroll, this is because if you 'buy' the mathematics of the wavefunction, even before collapse you already have to accept "many worlds" at the quantum level. "many worlds" for them simply means superposition/linear combinations. After all, if you already accept "superposition" at the level of quantum states, you don't have to invoke anything new to deal with so-called "collapse", if everything simply stays as superpositions i.e. linear combinations.

Or, in other words, if you assume classical behavior first, and you need to get that out of superpositions at the quantum level, you need wavefunction collapse.

But if you start with superpositions at the quantum level, you already have superpositions, and then classical behavior can simply be derived from locating yourself in one of those superpositions.

Explained this way, "many worlds" doesn't seem so shocking, if you have already resigned yourself to quantum level superpositions.

I think of many worlds as kind of a literal interpretation that quantum superpositions are fundamentally real, as opposed to quantum superpositions merely being a very accurate mathematical model. (Most scientists think QM represents something real instead of some lucky equations.)


> if you already accept "superposition" at the level of quantum states, you don't have to invoke anything new to deal with so-called "collapse"

All three interpretations of QM 'accept' superposition, construed as a mathematical construct. What's at issue is how this mathematical construct maps onto reality.

If you mean something more by 'accepting superposition', you have to spell out what that is. And doing that just leads you right back to the three different interpretations.


This sounds a lot like someone has fallen for a subtle confusion of map and territory...


There is lots of mathematical evidence that every quantum state maps 1-to-1 onto a physical state¹, or at least is uniquely determined by the physical state². The main assumption both make is that there is some physical state.

So it doesn't really matter if it's a map or not - it tells you something about the territory.

[1] The completeness of quantum theory for predicting measurement outcomes, https://arxiv.org/abs/1208.4123

[2] On the reality of the quantum state, https://arxiv.org/abs/1111.3328v3


> The main assumption both make is that there is some physical state.

I'd phrase this as: the main assumption is that reality is literally encoded by some sort of objective hidden variables theory.


I'm not sure what your point is, but it doesn't have to be hidden and in this case, objective just means two parties agree on the outcomes of experiments.


At some level, if we take scientific theories seriously as descriptions of the world out there, and not merely tools for prediction, then we have to assume some sort of isomorphism between the map (our theory) and the territory (the world), even if they're not the same thing. This isn't only for quantum theory.


> we have to assume some sort of isomorphism between the map (our theory) and the territory (the world)

But a mathematical model != a theoretical model. The very same mathematical model will be compatible with uncountably many theoretical models. (By 'theoretical model' I mean something like 'an interpretation of what the math is representing'.) So you can't read off theoretical structure from mathematical structure. And so you can't read off the structure of the world from mathematical structure.


> (By 'theoretical model' I mean something like 'an interpretation of what the math is representing'.) So you can't read off theoretical structure from mathematical structure. And so you can't read off the structure of the world from mathematical structure.

Occam's razor favours a theoretical model that corresponds more closely to the mathematics, rather than one that adds a bunch of epicycles to arrive at a different interpretation. If you follow your logic then you can never reject geocentrism, because it's possible to create a geocentric model that generates the same predictions as a heliocentric one; nevertheless we would generally say that heliocentrism is "more true" and "more physically real" than geocentrism.


What Occam's razor favours is irrelevant. It has no predictive power [1]. The more complex answer is just as likely to be the correct one.

[1] http://scienceblogs.com/developingintelligence/2007/05/14/wh...


Physics tends to be simpler than we originally thought, and Occam's razor correctly predicts this: electricity, magnetism, and light turn out to be facets of the same phenomenon, electricity and the nuclear forces turn out to conform to the same theory, electromagnetism, gravity, and spacetime turn out to behave in the same way. Your link's lazy "11 dimensions! OMG!!!1one" non-argument is in no way an adequate refutation of that history.


You're cherry picking. You pick the places where Occam's razor picks the correct answer and ignore all the places it doesn't. For any new situation, we can't predict into which of those possibilities the answer will fall. Hence my statement that Occam's razor has no predictive power.

I just linked to the first article I found that talked about the subject. And it's interesting that you look for something lazy instead of addressing more serious issues like Newton's model vs Einstein's. Very clearly the (much!) more complex answer is more correct than the simple one. Here's [1] another link discussing the issue; it gives 2 examples.

[1] http://neuralnetworksanddeeplearning.com/chap3.html Search for "Occam"


> I just linked to the first article I found that talked about the subject. And it's interesting that you look for something lazy instead of addressing more serious issues like Newton's model vs Einstein's.

You should give links you're willing to stand by. I didn't go looking for something lazy, I went looking for talk about fundamental physics (where Occam's Razor is appropriate, and what we're talking about) and found only the barely even wrong throwaway line about M-theory.

> Very clearly the (much!) more complex answer is more correct than the simple one.

Newton and Einstein don't make the same predictions. When you include the amount that you'd have to add on to Newton's theory to generate the predictions Einstein gives you about the behaviour of light, Einstein ends up simpler.

> Here's [1] another link discussing the issue and gives 2 examples.

The "new particle" example shows the opposite of what's claimed. Bethe's only reason to dismiss a new particle as an explanation was Occam's razor. The part about Newton/Einstein is just wrong about the history; relativity wasn't developed as an effort to explain the orbit of Mercury, it was developed out of Maxwell and Lorentz's work on electromagnetism. A universe in which the only reason to believe relativity was those deviations in the orbit of Mercury probably would be a universe in which the true theory was Newtonian gravity with small correction terms, not a universe in which relativity was true.


>Bethe's only reason to dismiss a new particle as an explanation was Occam's razor.

Thank you for pointing out the other fault of Occam's razor: both sides of most arguments tend to assume the razor is on their side. This aspect was addressed in the second article I linked.


It's possible to make a mistake in the application of any principle, particularly when the mistake suggests you've made an important discovery. I think there would have been a clear consensus among third-party physicists that the razor favoured Bethe in that case.


But what does 'corresponds more closely to the mathematics' mean? The relevant point is just that the mathematics, alone, doesn't settle the theoretical question.

In general, it's hard to use Occam's razor in a non-question-begging way.


Well, the mathematics describes reality. If humans can't agree about whether one way of interpreting the mathematics in human terms is more or less complex than another way of doing so, then that's a human problem and potentially insoluble. But I don't think things are actually that bad: people are generally capable of reaching consensus about what a given piece of mathematics "means", and the relevant cases here are pretty clear-cut: either we interpret the wavefunction as being physical reality, or we interpret some derived structure as being physical reality, and the latter gets us further away from the wavefunction.


Isomorphism, yes, exactly—that's the basic principle of a map: it's a description that enables us to make predictions about the territory, because it's possible to put the two into systematic correspondence. However once you start considering the 'representational' features of the map, which aren't an essential part of the isomorphism, as equally real (because you've made the simplification: dealing with the map is the same as dealing with the territory—our minds are predisposed to do this in many cases)—then you're in trouble.


In physics the map IS the territory. Or, if you prefer, we're not merely constructing a map, but a model, that is, we try to feel out and understand the territory, what it's made of, and how it works.


So you think an electron IS a mathematical expression, not that a mathematical expression describes its behavior?

> ...we're not merely constructing a map, but a model, that is, we try to feel out and understand the territory, what it's made of, and how it works.

'map' is a metaphor there. Your description of what physics is doing still fits within the metaphor. It doesn't change the fact that there is something 'out there', and then there's our description of it, via physics, and our description is not the thing that's out there.

Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?


>So you think an electron IS a mathematical expression, not that a mathematical expression describes its behavior?

No, I think that a mathematical expression describes the behavior of something real as close as possible to our knowledge -- it's not just a picture of how it looks.

In other words, in contrast with the map analogy, the equations for electrons etc can be used for a simulation.

>Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?

Why assume one can describe the territory in just a single (perfect) level of representation? For some things not even the full precision we can muster today is even needed (e.g. I don't need Relativity to know where a baseball will land).


Are you saying that after a certain level of accuracy, it is impossible to be sure, but you CAN say for certain within an error radius?


> So you think an electron IS a mathematical expression, not that a mathematical expression describes its behavior?

To the extent that "is" means anything, yes.

> Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?

I think it's useful to treat "isness" as something analogue rather than binary here. The old theory sort-of-is the territory. The new theory a-bit-more-is the territory. But there's no Platonic ideal of the territory as a separate thing from the mathematical model that reality implements. There is no there there.


Well, you might want to disagree with the most senior research prof at Caltech, but I wouldn't.

That's what they really think of many worlds. And, really, is it any worse than any of the alternatives?


I'm just responding to this:

> I think of many worlds as kind of a literal interpretation that quantum superpositions are fundamentally real, as opposed to quantum superpositions merely being a very accurate mathematical model.

If the stance is predicated on that assumption, as you suggest (and I'm inclined to agree, having read others of Carroll's writings), then I have to continue disagreeing. If the rest of the world of physicist luminaries were unanimously aligned with Carroll it would be more difficult (but still tempting...)—but luckily that's not the case.


Well you may as well just accept nonlocality. After all, the other interpretations do nothing to rule it out!


I want to keep locality. Being able to derive it would be a happy bonus, but I'm perfectly willing to adopt it as its own axiom.


There are lots of signs that locality is not tenable.


>> Many Worlds - Quantum phenomena cause the world to branch into multiple worlds. The probabilities of quantum mechanics represent the 'share' of reality that branches in each direction.

I seem to have access to only one of these universes. How does nature decide which determinate state my universe will take? It still requires the magical collapse. (Or is it that I am branching off too and losing access to myself in other branches?)

>> So long as you're willing to abandon locality, hidden variables can work.

(Asking, not questioning) Why is this so? I have read a bit or two about Bell's Inequalities, but do not understand why locality needs to be abandoned. With particles having random but entangled spins showing correlations even when separated, how do we know there was no random hidden state associated with these particles when they separated? That is, how do we know that the decision on which particle has which spin did not happen during separation itself (getting stored as a hidden variable)?


> How does nature decide which determinate state my universe will take? It still requires the magical collapse. (Or is it that I am branching off too and losing access to myself in other branches?)

You're branching too. It remains to be explained why your subjective probabilities of which branch you find yourself in follow the Born probabilities, but that's the only known way of assigning subjective probabilities that behave as probabilities should (in terms of things like "probability of A or B <= probability of A + probability of B").

> That is, how do we know that the decision on which particle has which spin did not happen during separation itself (getting stored as a hidden variable)?

Conway's "Free Will Theorem" offers a simplified proof that this can't happen: if you measure the squared spin of a spin-1 particle along three orthogonal axes, you will always get two 1s and one 0, but there's no way to preassign values to all possible axes such that this remains true. So the result you get from measuring in a given direction cannot be fixed if you have a free choice about which set of directions you're measuring in.
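
The "two of one result and one of another" property the theorem starts from is easy to verify numerically: the squared spin components of a spin-1 particle along orthogonal axes commute (so they can be measured jointly), each has eigenvalues {0, 1, 1}, and they sum to twice the identity, which forces every joint outcome to be two 1s and one 0. A small numpy check (ħ = 1):

```python
import numpy as np

# Spin-1 operators in the standard |m = 1, 0, -1> basis (hbar = 1).
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0])
squares = [s @ s for s in (sx, sy, sz)]

# The squared components commute pairwise, so they are jointly measurable.
for a in squares:
    for b in squares:
        assert np.allclose(a @ b, b @ a)

# Each square has eigenvalues {0, 1, 1}...
for sq in squares:
    assert np.allclose(sorted(np.linalg.eigvalsh(sq)), [0.0, 1.0, 1.0])

# ...and the three sum to 2I, so a joint measurement always yields
# two 1s and one 0.
print(np.allclose(sum(squares), 2 * np.eye(3)))  # True
```

The impossibility half (no global 0/1 assignment to all directions respects this) is the Kochen-Specker part of the argument and needs the famous finite sets of directions, which this sketch doesn't attempt.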


> Or is it that I am branching off too and losing access to myself in other branches?

Yes, you also split into branches. Of course the whole branches thing is a rough approximation - for example quantum computers work because they aren't branching.

> Why is this so? I have read a bit or two about Bell's Inequalities, but do not understand why locality needs to be abandoned.

I would recommend looking up the explanation from DrChinese. Briefly, it's because you measure each half of an entangled pair randomly along 1 of 3 angles spaced 120° apart, and you see that measurements at equal angles always disagree, yet measurements at different angles agree 3/4 of the time. That's impossible with pre-determined values: perfect disagreement at equal angles forces the two halves to carry opposite pre-set answers, and no assignment of three ±1 answers can then agree at different angles more than 2/3 of the time.
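
This can be checked by brute force: enumerate all eight possible pre-set answer tables and compare the best classical agreement rate against the quantum prediction. (A sketch; the 120°-spaced axes and the spin-singlet rule P(same) = sin²(θ/2) are the standard textbook setup, assumed here rather than taken from the comment above.)

```python
import itertools
import numpy as np

# Local hidden variables: particle 1 carries pre-set answers v = (v0, v1, v2)
# in {+1, -1} for the three angles; perfect disagreement at equal angles
# forces particle 2 to carry -v. Find the best possible agreement rate
# when the two sides happen to pick *different* angles.
best_classical = 0.0
for v in itertools.product([+1, -1], repeat=3):
    agree = [v[i] == -v[j] for i in range(3) for j in range(3) if i != j]
    best_classical = max(best_classical, sum(agree) / len(agree))

# Quantum prediction for the spin singlet: P(same outcome) = sin^2(theta/2);
# angles 120 (or 240) degrees apart give 3/4.
quantum = np.sin(np.radians(120) / 2) ** 2

print(best_classical, quantum)  # classical tops out at 2/3, below 3/4
```

No pre-set table beats 2/3 agreement, while the singlet delivers 3/4, which is the Bell-inequality violation in miniature.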


> quantum computers work because they aren't branching.

Or rather, the branch universes diverge and do their computing in parallel, and then finish by matching perfectly, with all universes having the same answer. Any interference from the environment outside of the qubits results in the universes not being perfectly similar at the end and you don’t get an answer back.


Question: where does the hidden variable/pilot wave interpretation say that the computation in a quantum computer takes place? Many-worlds says that it takes place in parallel universes.


> how do we know that the decision on which particle has which spin did not happen during separation itself

It's difficult to explain concisely, but John Stewart Bell wrote a nice albeit lengthy essay on this exact question:

https://cds.cern.ch/record/142461/files/198009299.pdf


>So long as you're willing to abandon locality, hidden variables can work. Given that the other two interpretations posit equally weird things, abandoning locality won't seem so weird.

Abandoning causality (which goes along with locality) won't seem so weird?


Precisely how to understand causation is controversial.

But I'll just say that most philosophers don't find anything incoherent in the idea of nonlocal causation. (Hume's classic attack on local causation might be worth a look.) See BoiledCabbages' comment for a nice explanation of a model that involves nonlocal causation.


I think I can approach a layman's explanation to this (feel free to correct me if this is offensively inaccurate). My understanding is that 'observation', at a quantum level, isn't like "watching something with your eyes". A better description would be "taking a measurement" instead of "observation". It's impossible/difficult to take a measurement at the quantum level by passively observing, you have to de-facto interact with it in order to measure it - not unlike searching for a house of cards in a dark room with a blindfold on. Once you've 'found' the house of cards, you've 'interacted' with it, and in the process of interacting it is no longer a house of cards any more.


I'm still confused: where is the interaction in the double-slit experiment? Here's a quote:

In the famous double-slit experiment, single particles, such as photons, pass one at a time through a screen containing two slits. If either path is monitored, a photon seemingly passes through one slit or the other, and no interference will be seen. Conversely, if neither is checked, a photon will appear to have passed through both slits simultaneously before interfering with itself, acting like a wave.

What does "monitored" mean here? The problem is that everywhere I look, I find a synonym of "monitored". My question is, "How does that break down?"


There's an important distinction here which is often misunderstood. The loss of interference is due to the photon (really, light wave) interacting with the detector. That results in an entanglement, and it's easy to show (mathematically) that interference is lost.

That's not the difficult part. Any physical interaction that causes a correlation between the particle's state and some other state (effectively making that thing a detector) will cause this. This includes air molecules, which is why we have to isolate the systems very well.

After interacting with the detector, the particle+detector system is still in a superposition of two states (corresponding to the two slits). When the particle hits the screen, the screen gets in on the entanglement. At all times there are (effectively) two paths, including when your body and brain interact with the experimental setup.

So why do you see only one result?

One answer (the Many-Worlds Interpretation) is that you don't see one result. One "copy" of you sees this one and "another" sees the other. Another possibility is that there really is some particular event that "collapses" the wave function, but AFAICT people mostly use this interpretation for practical reasons, bypassing the philosophical conundrum.

Or, if you insist that you see only one result, but you still like MWI, then the question becomes: why did I end up on this branch rather than another one? But any explanation would have to come from outside the system (the multiverse), more or less, so I wouldn't hold my breath waiting for an answer.
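
The "entanglement destroys interference" step above is just a cross-term calculation, and easy to sketch in numpy. Below, the two paths acquire position-dependent phases; with no which-path record the amplitudes add before squaring, while an orthogonal detector record kills the cross term so probabilities add instead. (The geometry and numbers are illustrative, not from the article.)

```python
import numpy as np

# Screen positions and a schematic two-slit geometry (arbitrary units).
x = np.linspace(-10, 10, 201)
k, d, L = 2.0, 1.0, 5.0                   # wavenumber, half slit gap, distance
a1 = np.exp(1j * k * np.hypot(x - d, L))  # phase accumulated via slit 1
a2 = np.exp(1j * k * np.hypot(x + d, L))  # phase accumulated via slit 2

# No which-path information: add amplitudes, then square -> fringes.
fringes = np.abs(a1 + a2) ** 2

# Path recorded in an orthogonal detector state: the cross term
# <d1|d2> = 0 drops out of the reduced density matrix, so the
# probabilities add and the pattern is flat.
no_fringes = np.abs(a1) ** 2 + np.abs(a2) ** 2

print(fringes.std() > 0.5, np.allclose(no_fringes, 2.0))  # True True
```

The only difference between the two lines is whether the squaring happens before or after the sum, which is exactly what entanglement with a detector changes.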


> but you still like MWI, then the question becomes: why did I end up on this branch rather than another one?

If you like MWI, then the answer is you end up on every branch. It's like another invocation of the hateful anthropic principle - in every branch you are asking about the result you saw in that branch. You are asking why you ended up in this branch because this is the branch in which you exist to ask that specific question.

The real question of MWI for me is: where the heck are all these squillions of universe copies every second, where does the energy come from to make squillions of universes' worth of matter and to create universe-sized places to put them, and how does your interacting with a photon on Earth cause a copy of the Andromeda galaxy anyway?


The other real question with MWI is why you still experience one universe and not a superposition of universes?

How exactly, in moment-to-moment physical detail, do these splits happen?

And as for branching - how are observers assigned among universes?

The MWIers will shrug and tell you observer experiences are distributed at random.

Well - fine. So why can’t the collapse experience be inherently random too, without demanding squillions of Occam-busting unobservable universes?

It seems like the least plausible interpretation. Not only does it multiply entities for no good reason, and assume they can never be observed, but it doesn’t even solve the original problem.

“Yes, but the math...” isn’t a defence because there are plenty of situations in physics where technically correct solutions are discarded as unphysical. Why make an exception here?


> The other real question with MWI is why you still experience one universe and not a superposition of universes?

You experience a superposition of universes. Why do you think you aren't? What would you expect to be different?

> How exactly, in moment-to-moment physical detail, do these splits happen?

The wavefunction is what's physically real. When you can express the wavefunction as a linear combination of two other wavefunctions, then it makes sense to model this as a split, and think of it as the universe dividing in some process like cell division (a wavefunction that's almost a linear combination of two other wavefunctions, but has a small interaction term, is like a universe that has almost split into two, but the two parts are still slightly attached). Sometimes a split happens at a discrete point corresponding to a discrete physical event - e.g. if an electron and a positron in known spin states that sum to 0 collide and annihilate into two gamma rays, then that's a distinct, discontinuous event (and you'll see that in the wavefunction) and after it the wavefunction is expressible as a linear combination of two wavefunctions. So we interpret this as "when the particles collided, the universe branched", and like most models that's a simplification of reality, but it's a good one that generally gives correct intuitions. But if you ever want the exact physical detail, just look at the wavefunction.
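A toy numerical sketch of the "branching is just entanglement" picture (this is my own illustrative example, not anything from the thread's linked paper): a system qubit in superposition interacts unitarily with an "observer" qubit, and the joint wavefunction afterwards is exactly a sum of two product terms, the two branches.

```python
import numpy as np

# Basis ordering |system, observer>. The observer starts "ready" (= up).
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = up

# System starts in a superposition: a single, unbranched wavefunction.
system = (up + down) / np.sqrt(2)
state = np.kron(system, ready)

# Unitary interaction: the observer records the system's state (a CNOT
# with the system as control). No collapse anywhere, just evolution.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = cnot @ state

# After the interaction the wavefunction is exactly a sum of two product
# terms -- the two "branches" of the split.
branch_up = np.kron(up, up) / np.sqrt(2)        # system up, observer saw up
branch_down = np.kron(down, down) / np.sqrt(2)  # system down, observer saw down
assert np.allclose(state, branch_up + branch_down)
```

The "split" is nothing but this decomposition; the total state never stops being a single unit-norm wavefunction.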

> Well - fine. So why can’t the collapse experience be inherently random too

It can, but the collapse is a much bigger assumption than subjective probability. It violates unitarity, it's unclear when it happens, it introduces a notion of observer and a direction to time and all of that - and for what? It doesn't let you get rid of the superpositions (you still have to assume that the superpositions exist before the "collapse", so you still need all the machinery that you would need to implement many-worlds). Occam's razor says we can and should do without it.

> without demanding squillions of Occam-busting unobservable universes?

By that logic the assumption that the universe is billions of light years across should require a huge pile of evidence, much more than we have. Occam doesn't say we should be parsimonious about the size of the universe we assume, it says we should be parsimonious about how many physical rules we assume.


> The other real question with MWI is why you still experience one universe and not a superposition of universes?

Because the brain is warm and messy?

And Occam does not work that way.


>And Occam does not work that way.

Well, Occam is not a physical law, so that's neither here nor there.


Your real question boggles my mind too. The way I visualize it is as another dimension where conservation of energy doesn’t apply. Which is just as hard to imagine as any extra dimension beyond the 3 or 4 we experience. But that doesn’t mean we shouldn’t try.


> where the heck are all these squillions of universe copies every second and where does the energy come from and go to making squillions of universes worth of matter and creating universe sized places to put them

It's the same "mass" of universe, just split across squillions of pieces. Like a tree trunk dividing into two branches, or a river confluence in reverse. (In fact it makes no difference if there were always two universe branches, it's just that they happened to contain exactly the same physical events up until the point where they didn't). And actually it's continuous rather than discrete: most of the time rather than dividing into distinct branches it's more like it's spreading out across a landscape. The branches are an approximation to let us compute more easily rather than something fundamental (e.g. we take this big continuous landscape of "universes" and draw a line between the region where a given photon was up and the region where that photon was down, and treat this as two distinct universes, because what we're interested in is whether the photon was up or down, and for our purposes any two points in the universe-landscape in which the photon was up are equivalent to each other).

> how does you interacting with a photon on Earth cause a copy of the Andromeda galaxy anyway?

It doesn't; when you interact with the photon which is, from your perspective, in two forks, you entangle your own fate with it, and from the perspective of an observer in Andromeda, both you and the photon are now in two forks (the one where the photon was up and you measured it as up, and the one where the photon was down and you measured it as down). As and when the observer in Andromeda interacts with you, then they can also be seen as divided into two forks. But if the observer in Andromeda is behaving exactly the same way in either fork, then it makes more sense to see them as being in a single unified universe. (Again, the very concept of "forks" is a simplification over continuous reality; the point isn't that there's any concrete sense in which the observer in Andromeda is in one rather than two realities whereas you are in two rather than one, it's just that when we want to talk about you we can draw a line in the universe landscape (between universes in which you measured up and universes in which you measured down) and treat the universe landscape as a fork between two universes, while when we want to talk about the observer in Andromeda there is no value in using that particular model (though there's nothing wrong with doing so, and if you compute predictions about the observer in Andromeda while treating them as being in such a superposition you will obtain correct results)).


Fascinating. And the fact that it's entanglement makes it information and thus not subject to space/time constraints.


Depends on what precisely you mean by that. It can allow you to exhibit correlations that are hard to explain within a classical framework, yes. But that requires very precise control over the system, so has no practical import most of the time. Things are being entangled pretty much nonstop around you all the time.

BTW, I added a paragraph to my answer above.


I think that basically the observer is considered "entangled" with a phenomenon after they have interacted with it. That's all.

The observer can be anything. You are you, so by the time information comes to you, you've become entangled.

I find this interpretation to be mostly wordplay. After all how can something be "real" if you can never observe it or detect it? It's sort of saying the theoretical construct "those other worlds" are real, but in what sense?

Bohmian mechanics makes way more sense to me.


Do we ever observe or detect the larger universe outside our light cone? Yet astronomers say it exists.

What about other minds? Do you ever observe/detect someone else's experiences, thoughts, feelings? Or just their behavior, from which you infer a mind?


Well, the simplest way to "monitor" it is to just block one of the slits. Then you definitely know which slit the photons that hit the screen went through! Alternatives include:

- Marking the photon itself (e.g. by rotating its polarization with a waveplate).

- Placing a crystal that does spontaneous parametric down-conversion after one of the slits. This splits the photon into two photons. Aim the secondary photon at a detector. Detector clicks -> went through that slit. No click -> went through other slit. But this will also mark the photon itself with a change in frequency, so really should do it to both slits.

- Having a magical device that stores a bit and toggles it anytime a photon passes through a piece of glass. Place the magical toggle-glass over one of the slits.

Anything that can distinguish the case where a photon goes through the left slit from the case where it goes through the right slit will work fine.


I understand. Thanks.


> how does 'observation' cause the wave function to collapse

It doesn't. This is a reasonable approximation to the truth in many common situations, but it is not the truth.

The truth is that measurement and entanglement are the same physical phenomenon. See:

http://www.flownet.com/ron/QM.pdf

or the video version:

http://www.flownet.com/ron/QM.pdf


Thanks, Ron. I can imagine that measurement and entanglement are the same thing, especially that both, like the uncertainty principle, are about the limits of extractible information.

The video link is the same as the paper; if you find it, please post it. Thanks again.



I loved it. Thanks.


Your paper describing measurement in terms of entanglement is interesting. Would you mind posting the link to the video you mentioned?



Observation means interaction in any form with the observer (edited as per Koshkin's comment). Most things in the universe in practice have interacted or will interact. Interaction could be as subtle as a single photon reflecting off it and hitting your eye, or it could be you smashing your face into the object.

The Schrödinger's Cat thought experiment assumes the box somehow prevents all interaction between the cat inside and the outside world. In the real world, however, things like sound waves, light, x-rays, etc., will inevitably penetrate such a box. And even if the cat merely moves, it will cause some detectable vibration on the outside of the box. It would be theoretically impossible to prevent any energy from the cat inside the box from emanating outside, as that would violate the conservation of information (even a black hole, the closest thing to an absolutely sealed box, releases information about its contents and expels energy).

There is no explanation for why the wave function collapses due to interaction other than "god rolling dice" and picking a collapsed outcome once such an outcome becomes relevant (through interaction/observation). Other interpretations of wave function collapse don't even suppose the collapse happens. For example in the many worlds interpretation, instead of having a single observer (the universe and its participants), there are multiple universes and multiple observers. In this interpretation there is no collapse of the wave function. Instead of many possibilities and one getting picked, there are many observers each independently observing one of the possible outcomes. Here the dice rolling still happens, but happens for every observer constantly (when deciding what outcome is observed next).


> Observation means interaction in any form

Strictly speaking, this is incorrect. The interaction between a photon and an electron, for instance, or an electron in an atom with the atom's nucleus are not "observations" and do not lead to the collapse of the wavefunction.

Observation (a.k.a. measurement) in QM implies interaction with a "classical", i.e. non-quantum-mechanical, object.


A photon from the observer or outside of the system you are observing will still cause collapse. I have corrected the sentence to clarify that it's external interaction.


In a quantum logic circuit, every "observation" can be thought of as ultimately compiling down into a controlled-Z gate between the qubit-you-want-to-measure A and a qubit-to-hold-the-measurement-result B.

The interesting thing about the controlled-Z is that it is actually a symmetric operation. You could think of it as "if A is ON then apply a Z to B", but it is exactly equivalent to say it is performing an "if B is ON then apply a Z to A" effect. The truly symmetric description is "apply a -1 factor to the weight of cases where A and B are both ON".

Because the controlled-Z is symmetric, and every measurement ultimately involves a controlled-Z, you can't move details about A into B without also kicking back details about B into A. When B is a big well-approximated-by-classical-physics system, the back-effect plays out in a way that we call collapse.

Said another way, B (you and/or instruments) can't get information about A (the quantum system) without collapsing it.
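The symmetry claim above is easy to check numerically (a small sketch of my own, using standard single-qubit projectors): building the controlled-Z as "if A is ON, apply Z to B" gives exactly the same matrix as "if B is ON, apply Z to A", and both are just a -1 on the |11> amplitude.

```python
import numpy as np

# Basis ordering |A, B>: 00, 01, 10, 11.
Z = np.diag([1.0, -1.0])
I = np.eye(2)
P0 = np.diag([1.0, 0.0])  # projector onto |0>
P1 = np.diag([0.0, 1.0])  # projector onto |1>

# "if A is ON then apply Z to B"
cz_a_controls = np.kron(P0, I) + np.kron(P1, Z)
# "if B is ON then apply Z to A"
cz_b_controls = np.kron(I, P0) + np.kron(Z, P1)

# Both constructions yield the same diagonal matrix: the truly symmetric
# description "apply -1 where A and B are both ON".
assert np.allclose(cz_a_controls, cz_b_controls)
assert np.allclose(cz_a_controls, np.diag([1, 1, 1, -1]))
```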


One explanation [1] I heard is that observation is actually equivalent to entanglement. Two entangled elements must each maintain a state corresponding to its entangled peer. So when you observe (or measure) a particle, the whole system, you included, becomes entangled with the particle you are observing, and hence the wave function collapses.

There is a better explanation here: [1] https://www.youtube.com/watch?v=dEaecUuEqfc


The most sensible explanation I've heard came from sir Arthur Eddington. His claim (roughly) is that probabilities in quantum theory model our knowledge of the quantum system, rather than indicating anything intrinsically probabilistic in the system itself.

In that case, our knowledge allows for more possibilities before making a measurement (observation), and collapses into something more definite after the measurement.

That's probably not the best description, but I think the basic idea about modeling knowledge makes a lot of sense. Eddington was primarily a physicist, but also a philosopher. I read this account of his in 'The Philosophy of Physical Science.'

If I'm not mistaken, it's pretty much in line with the 'Pilot Wave' interpretation which is getting more attention these days.


The mistake in this line of thinking is in the tacit assumption that a quantum system possesses a certain property prior to or independent from a measurement. It does not. Anything we measure is essentially an artifact of the measurement process itself, and the randomness of the result comes from bringing together two incompatible worlds - quantum and classical. All we are left with at the quantum level is wave-functions; but what matters for us is, of course, what we can experience, being ourselves classical, and QM has so far provided us with a perfectly good way of finding that out.


Would you mind either arguing for the superiority of that interpretation or showing flaws with the alternative I pointed out? I know that what you've stated is nearly canon at the moment, but the issue isn't actually settled, which is why a discussion of it here is still potentially interesting, and why merely re-asserting the supposed truth of the most common interpretation isn't.

Edit: in other words, when you say:

> The mistake in this line of thinking is in the tacit assumption that a quantum system possesses a certain property prior to or independent from a measurement. It does not.

You are only asserting that's the case, not defending it. The rest of your post operates on the opposite, undefended assumption that a quantum system does not possess a certain property prior to or independent from a measurement.


It is untenable, even from a purely philosophical standpoint, to bring all this classical baggage into a world we already know behaves nothing like anything we have seen before, and to hope that we would end up with something other than a mess in our understanding of what is really going on. All we have ended up with is a theory telling us how a quantum system interacts with what we must think of as a classical object, so that we can conveniently, albeit probabilistically, extrapolate the notions (e.g. those of "properties") of the familiar classical world to a world that is, and will forever remain, out of reach of our senses.


If I understand correctly you're saying the interpretation I've pointed out is not tenable because it carries classical baggage, and yet we've already observed quantum systems to not behave classically, so if we proceed that way we get a mess.

But the whole point of the alternative interpretation I pointed out is a way of reconciling the observations with an essentially classical world (at least in the sense that there is a single objective reality which exists independent of observation)—do you see a way in which it fails to do that? My understanding is that considering the probabilistic aspect of the theory to be modeling our knowledge of the system instead, successfully accounts for the fundamentally 'non-classical' observations.

It just seems hasty to throw out such a seemingly well-supported hypothesis (single objective reality independent of viewer), if there are two effective interpretations, one of which conflicts with it, the other of which does not. From my reading, the primary reasons for the mainstream physics community having adopted the stance which competes with the 'objective reality' hypothesis (i.e. Many Worlds or Copenhagen interpretations) are historical accident and politics, while the intrinsic merits of the alternatives are nearly equivalent.


You seem to be conflating 'being real', or 'objective', with 'being classical', or 'possessing measurable properties prior to measurement'. That is the mistake I was arguing against, and the Copenhagen interpretation is what keeps one from making it.


Wait, what? Since when is carrying classical baggage a minus?

If we're going to play that game, I'll choose spooky action at a distance any day, and say that space is just a strong correlation MOST of the time, but doesn't have to be ALL the time. I wonder how much classical baggage YOU bring by thinking Bell's inequality disproves something because nonlocal interactions are not possible.

What's easier to imagine - correlations between particles far away or many universes which are not actually real in any sense you can define?


>What's easier to imagine - correlations between particles far away or many universes which are not actually real in any sense you can define?

Many universes, of course. At least that doesn't mess with the workings and causality of this universe.


Yep called it - you want this universe to work classically. So I can claim to be the bigger out-of-the-box thinker.

How is this any different than the multiverse theory as a non-explanation to the fine tuning? You simply push the problem elsewhere.


QM is a model. It may have some basis in reality, but we don't know if "amplitudes" are fundamentally real or not.

Just because it predicts outcomes (probabilistically!) very accurately doesn't mean we have the full picture.

(how does gravity fit into this picture? it's completely left out of QM!)


It definitely has some basis in reality, because it fits experimental results. The exact nature of that fit is up for debate, but the experiments are certainly part of the real world we try to make sense of.


> The mistake in this line of thinking is in the tacit assumption that a quantum system possesses a certain property prior to or independent from a measurement.

Your comment is wrong. In the 3rd of the 3 interpretations of QM (de Broglie-Bohm) this is exactly the case. The world is at all times in a well-defined state. We just don't know it until we measure, so we use probabilities to describe it. All 3 QM interpretations are mathematically indistinguishable.


The point is a bit more subtle than that. Both MWI and Bohm's theory are deterministic, so the world is in a well-defined state.

But any property we can measure is not well-defined, in the sense that before we have actually measured it, it doesn't exist.


>> His claim (roughly) is that probabilities in quantum theory model our knowledge of the quantum system, rather than indicating anything intrinsically probabilistic in the system itself.

Note "our" knowledge there. If I am making a measurement, and you are disconnected from me, how and when does it become our knowledge? How is it that one human measuring results in a collapse, with no other human knowing about it? Does something more happen when this first human tells the others about it?


>The most sensible explanation I've heard came from sir Arthur Eddington. His claim (roughly) is that probabilities in quantum theory model our knowledge of the quantum system, rather than indicating anything intrinsically probabilistic in the system itself.

That's counter to a century of experiments and theories. If that was the case, QM would be a trivial classical theory.


You should check out the de Broglie-Bohm / pilot-wave interpretation. What I have described does not run counter to the experimental data we have.


The pilot wave does all the work of physical reality in that interpretation. Imagining there is this extra downstream classical reality is just a bunch of extra work for no benefit, and should be discarded per Occam.


It doesn't matter if anybody's watching. It's just that to get any information out of the system you need to interact with it somehow. This interaction is what's causing the wave function collapse.


With the double-slit experiment, my understanding is that if it's done in a closed room, the wave function won't collapse and you'll get an interference pattern, but if you're in the room, it will collapse and there will be no interference pattern. Aren't you testing it by observing it?


I'm pretty sure that's not the case. The interference pattern only disappears if you try to figure out which slit a single photon went through. It does not require a conscious observer to be present. What matters is if you extract information from the system or not.


> only disappears if you try to figure out

> What matters is if you extract information

> It does not require a conscious observer to be present.

But does it require a “you” trying to figure out and extracting information?


Sorry, you're right.


> if it's done in a closed room, the wave function won't collapse and you'll get an interference pattern, but if you're in the room, it will collapse

That is absolutely 100% not what happens. The experiments play out the same way whether or not a human's face happens to be nearby. All the data is collected automatically by photographic plates or electronic counters or other instruments, and "observation" always refers to some instrument being present or not.


I understand now. So it's, "can the information be known?" and the answer to that question, whether instrument or human or decision (such as marking the photon), is what creates the entanglement which causes the collapse.

P.S. Everyone - thanks! I get it now. And it's fascinating!


But the entanglement doesn't "cause collapse." If you entangle states A and B, their joint state is in a superposition and will still exhibit interference (given the right experimental setup). It's just that either particle by itself will not.
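This is straightforward to see with a Bell pair (my own illustrative numbers): the joint density matrix of an entangled pair keeps its off-diagonal coherence, yet tracing out either particle leaves a maximally mixed, interference-free state.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2); basis ordering 00, 01, 10, 11.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # joint density matrix, a pure state

# The joint state still has coherence: the |00><11| corner is nonzero,
# so the pair as a whole can exhibit interference.
assert not np.isclose(rho[0, 3], 0.0)

# Reduced state of particle A alone: trace out particle B.
# reshape to indices (a, b, a', b'), then sum the b = b' diagonal.
rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Particle A by itself is maximally mixed: diagonal, no interference terms.
assert np.allclose(rho_a, np.eye(2) / 2)
```

So entanglement hides the interference from either particle individually without destroying it in the joint state.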


oh shit. ok, I see. wow


It's almost impossible to answer that question. Which is one motivation behind hidden variables theories of Quantum Theory that don't elevate observation to having a role in quantum phenomena.


To the contrary, it is very easy to answer: measurement does not cause the collapse of the wave function. This is a reasonable approximation to the truth in many common situations, but it is not the truth.

The truth is that measurement and entanglement are the same physical phenomenon. See:

http://www.flownet.com/ron/QM.pdf

or the video version:

http://www.flownet.com/ron/QM.pdf


> measurement does not cause the collapse of the wave function.

But you provide an example of measurement causing the collapse of the wave function yourself!

"Now let us add a detector at the slits to determine which way the photon went. To describe this situation mathematically we have to add a description of the state of the detector: (ΨU |DU> + ΨL |DL>)/√2 where |DU> is the state of the detector when it has detected a photon at the upper slit and |DL> is the state of the detector when it has detected a photon at the lower slit. Now the probability density is: [|ΨU|^2 + |ΨL|^2 + (ΨU ΨL* <DU|DL> + ΨL ΨU* <DL|DU>)]/2"

That describes the quantum system formed by the photon and the detector. And when you reduce the density matrix to its diagonal, setting the interference terms to zero in a non-unitary way, you are collapsing it to a "classical" mixture of two states corresponding to the possible outcomes of the measurement (as described by von Neumann 85 years ago).
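The role of the detector-state overlap in that formula can be sketched numerically (toy amplitudes of my own choosing): when the detector states are orthogonal (a working which-way detector, <DU|DL> = 0) the interference term vanishes, and when they are identical (no which-way information, <DU|DL> = 1) it survives.

```python
import numpy as np

x = np.linspace(-1, 1, 5)
psi_u = np.exp(1j * 10 * x)   # toy amplitude from the upper slit
psi_l = np.exp(-1j * 10 * x)  # toy amplitude from the lower slit

def density(psi_u, psi_l, overlap):
    # [|psi_u|^2 + |psi_l|^2 + 2*Re(psi_u * conj(psi_l) * <DU|DL>)] / 2
    interference = 2 * np.real(psi_u * np.conj(psi_l) * overlap)
    return (np.abs(psi_u)**2 + np.abs(psi_l)**2 + interference) / 2

no_detector = density(psi_u, psi_l, overlap=1.0)    # <DU|DL> = 1
with_detector = density(psi_u, psi_l, overlap=0.0)  # <DU|DL> = 0

# A working which-way detector flattens the pattern to a featureless sum;
# without it, the cosine interference fringes remain.
assert np.allclose(with_detector, 1.0)
assert not np.allclose(no_detector, 1.0)
```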


> when you reduce the density matrix to its diagonal, setting the interference terms to zero in a non-unitary way, you are collapsing it to a "classical" mixture of two states corresponding to the possible outcomes of the measurement (as described by von Neumann 85 years ago).

Yes, that's true, but that is not the same thing as "measurement causing the collapse of the wave function". Collapse is not a physical phenomenon, it is a mathematical approximation. When you measure a particle, nothing happens to that particle as a result of the measurement. It does not suddenly "decide" to go through one slit or the other. Anything that "happened" to the particle happened not as a result of the measurement but as a result of the initial entanglement with the ancilla.


You had a wave function for the particle/detector system, you performed a measurement, then you didn’t have any longer the wave function that you previously had. I think that’s what von Neumann would call “measurement causing the collapse of the wave function”.


What Von Neumann -- or anyone else -- would or would not call it doesn't change the truth of the matter.

Von Neumann also believed that consciousness was required to collapse the wave function. This is a non-falsifiable and hence non-scientific theory (which is the problem with all collapse theories). Von Neumann was a really bright guy, and he very nearly got the right answer, just as Lorentz very nearly got to relativity before Einstein. But just as Lorentz refused to let go of the idea of Galilean invariance, Von Neumann refused to let go of the idea that classical reality is a primary phenomenon. It isn't. Classical physics is an emergent, not a primary phenomenon. There are no particles, only quantized fields. (To be fair, Von Neumann did not live long enough to see Bell's theorem or the Aspect experiment, so he didn't quite have all the information he needed. But he was bright enough that he probably could have come up with Bell's theorem on his own if he'd decided to go that direction.)


I really don't understand what you are trying to say. Do you dispute the name or do you dispute whether it happens?

From the quantum state of the particle/detector which is a superposition of both possible outcomes, what happens according to you as a result of the measurement? Is there a definite result?

What do you mean when you say that "if the detector is working properly then these amplitudes are zero and the interference term vanishes"? Is going from non-zero amplitude to zero amplitude a physical phenomenon? Why does it happen?


Did you actually read my paper or watch my video? Because answering those questions is exactly what they are about.


I have not tried the video, but I've read the paper. As I pointed out, your description of a measurement in section 4.1 is a textbook description of wave function collapse. I'm trying to understand how do you think that your explanation is different. Or maybe I'm missing your point completely.


> I've read the paper.

OK, the content is essentially the same.

> Or maybe I'm missing your point completely.

Yep, I'm pretty sure that's the case. You should probably re-read section 3.


I don't see the relevance. My question about the description of the measurement process on section 4.1 (which is supposed to be more fundamental: "Let us begin with the simple two-slit experiment.") stands.


> I don't see the relevance.

Section 3 presents an argument for why it cannot be the case that a measurement causes a physical change in the system being measured (because it would result in FTL communication). If you don't see how that's relevant then you are beyond my ability to help.

> supposed to be more fundamental

You are reading way too much meaning into that introductory sentence. The two-slit experiment is often presented as fundamental because it is easy to describe, and because historically it was done very early in the development of quantum mechanics, but it is not fundamental. Indeed it is this false assumption that the two-slit experiment is fundamental that has led to the horrible pedagogical mess that QM suffers from to this day. It is simply false that no one understands QM, or even that QM is particularly hard to understand. It's just that the usual pedagogy of QM is based on a false assumption.


I don't want to get lost on terminology (what is more fundamental, what do we call a physical change...). I try to understand the consequences of the approach you propose.

In 4.1 you look at a couple of similar examples that are quite simple, a particle/detector system in the first half ("measurement") and a particle/particle system in the second half ("entanglement").

The "measurement" part starts like von Neumann's measurement scheme, which is a two-step process. In the first step, one assumes that the "target" and the "probe" are composed into a single quantum system (there is entanglement like in the particle/particle system considered later, I agree with that). This system is in principle in a superposition of states; the density matrix does contain interference terms.

In the second step, one applies a non-unitary operator to actually take the measurement at the classical/macroscopic level, and the interference terms disappear. Instead of a pure quantum state we end up with a diagonal density matrix representing a mixed state (the system is either in one state or the other; we don't know which one, but we know the corresponding probabilities).

I still don't understand how the second step works in your approach and why do the interference terms vanish.

I understand even less how do you conclude that "the entanglement destroys the interference in exactly the same way (according to the mathematics) that measurement does." The entangled pair looks identical to the pre-measurement target/probe pair, but interference is still present at that point (and vanishes because of the measurement, whether or not we call this the collapse of the wave function).

By the way, in the video there is a slide saying that the resulting wave function is |ΨU|^2 + |ΨL|^2, but that's not a wave function.


You are talking about the math, and I'm talking about what happens physically. As an analogy, this is like talking about whether F=G * m1 * m2 / r^2 is an accurate mathematical model of gravity in the weak limit (it is) vs talking about whether earth is actually pulling you down towards its surface (it isn't).

But since you are obviously well versed in the math, I suggest you read the Cerf and Adami paper that my presentation is based on: https://arxiv.org/abs/quant-ph/9605002 Also, Zurek's work on decoherence. Those will probably answer your questions better than I can. I'm just a popularizer, not a physicist.


I was trying to find what does happen physically according to you. For the time being, you've only explained what does not happen ("measurement does not cause the collapse of the wave function", "When you measure a particle, nothing happens to that particle as a result of the measurement.").

Reading your paper I thought something happened, somewhere in the transition from page 6 to page 7, because I understood that you were getting a well-defined value out of the measurement. The paper you linked now makes clear that in this model nothing happens at all, ever. Anything less than the wave function for the whole universe is just an illusion, there is no classical world.

It's not an accident that your description looked like the standard measurement scheme from von Neumann, as they are just reproducing it. They seem to think that when von Neumann described the measurement process as an interaction of two quantum systems he didn't notice that this interaction was beyond the classical concept of correlations, and this is why he needed a second stage to perform the measurement and collapse the state vector.

In the pre-measurement step, the 'target' (system to be measured, with states |u> and |d>) and the 'probe' (quantum detector, with states |+> and |->) are combined. This system will be a superposition of the states |u,+> and |d,->. The density matrix includes the diagonal terms |u,+><u,+| and |d,-><d,-| and also the off-diagonal terms |u,+><d,-| and |d,-><u,+|.

Von Neumann introduces a projection step which keeps only the diagonal terms. The resulting density matrix is no longer a pure state but a mixed state (for a closed system, a pure state's density matrix is a one-dimensional projector, so a diagonal density matrix with several nonzero entries has to represent a classical mixture of the corresponding pure states and cannot be a superposition). After the measurement following von Neumann we have a statistical ensemble of the possible outcomes and choosing one of them doesn't pose any conceptual problem.
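
A toy numerical sketch of this two-step picture (my own construction, not from either paper; the basis labels are the ones used above):

```python
import numpy as np

# Target basis |u>, |d>; probe basis |+>, |->.
u, d = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Pre-measurement: entangled state (|u,+> + |d,->)/sqrt(2).
psi = (np.kron(u, plus) + np.kron(d, minus)) / np.sqrt(2)
rho = np.outer(psi, psi)  # pure-state density matrix

# rho has diagonal terms |u,+><u,+| and |d,-><d,-| (weight 1/2 each)
# and off-diagonal interference terms |u,+><d,-| and |d,-><u,+|.
print(np.round(rho, 3))

# Von Neumann's projection step keeps only the diagonal terms:
rho_proj = np.diag(np.diag(rho))

# Purity Tr(rho^2) distinguishes the two: 1 for a pure state,
# 1/2 for the projected, classically mixed state.
print(np.trace(rho @ rho))            # 1.0  (pure)
print(np.trace(rho_proj @ rho_proj))  # 0.5  (mixed)
```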

Cerf and Adami avoid the second step by taking the entangled system, ignoring the state of the target and looking only at the probe. Mathematically, one obtains a reduced density matrix by tracing out one subsystem and the result is a diagonal density matrix which contains only |+><+| and |-><-| without any interference terms. However, this is an improper mixed state and cannot be interpreted as before (because it's not the description of a full system but the result of looking at a subsystem).
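
The partial-trace step can be sketched the same way (again a toy model of mine, continuing the state above):

```python
import numpy as np

# Entangled target+probe state (|u,+> + |d,->)/sqrt(2), as before.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# Reshape the 4x4 matrix into (target, probe, target', probe')
# indices and trace out the target to get the probe's reduced state.
rho4 = rho.reshape(2, 2, 2, 2)
rho_probe = np.einsum('ijik->jk', rho4)

# Result: diag(1/2, 1/2), with no interference terms at all -- yet it
# is an *improper* mixture, since the probe is still entangled.
print(np.round(rho_probe, 3))
```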

Their reduced density matrix does not represent a defined (but unknown) state because the subsystem remains entangled with the rest of the system (target+probe). They say that the full system continues to evolve in a unitary way without any kind of collapse, but then how is any measurement done at all? Their description of the detector is an (improper) mixture of |+> and |->, at what point do we get one or the other as a result?

Note that Cerf and Adami consider their model completely unrelated to Zurek's. Decoherence and einselection are interesting, provide a physical mechanism to explain how the interference terms become (close to) zero in the "correct" basis in a (practically) irreversible way, and have some experimental support. But it doesn't completely solve the problem of measurement: how does the system settle on one particular outcome?


Before I respond I would like to say that this interaction is tremendously helpful for me. You obviously know what you're talking about, and I appreciate you taking the time to engage on this. If you don't mind, I'd like to learn more about who you are IRL. If you want to do that off-line my contact info is in my profile.

But to answer your question:

> at what point do we get one or the other as a result?

That is like asking "at what point does a blastocyst become a fully fledged human being?" There is no sharp dividing line between the quantum and the classical. There is no "point" at which the one transitions sharply to the other. There is a range beyond which the classical approximation is good enough for all practical purposes, but the physical world never "actually becomes" classical, just as gravity never "actually becomes" Newtonian.

> Cerf and Adami consider their model completely unrelated to Zurek's

I'm not sure that "completely unrelated" is a fair characterization. I think they would regard it as an "improvement". (But they are both still alive. We could ask them.)

BTW, you might want to read this:

http://blog.rongarret.info/2014/10/parallel-universes-and-ar...

and its precursor: http://blog.rongarret.info/2014/09/are-parallel-universes-re...

It may or may not answer any of your questions, but I would be very interested in your feedback either way. (And keep in mind that this was written for a lay audience.)


On the relation to decoherence, maybe "completely unrelated" is too strong but they say their model is "distinct from" and "not in the class of" environment-induced decoherence models. I think it's debatable if it is an "improvement" or not, but I don't think they get any closer to "solving" the measurement problem than Zurek does.

I find it interesting that London and Bauer proposed a similar model already in 1939 (ref. [12] in the "Quantum mechanics of measurement" pre-print):

[9. Statistics of a System Composed of Two Subsystems]

"The state of a closed system, perhaps the entire universe, is completely determined for all time if it is known at a given instant. According to Schroedinger's equation, a pure case represented by a psi function remains always a pure case. One does not immediately see any occasion for the introduction of probabilities, and our statistics definitions might appear in the theory as a foreign structure

"We will see that that is not the case. It is true that the state of a closed system, once given pure, always remains pure. But let us study what happens when one puts into contact two systems, both originally in pure states, and afterwards separates them. [...]

"While the combined system I + II, which we suppose isolated from the rest of the world, is and remains in a pure state, we see that during the interaction systems I and II individually transform themselves from pure cases into mixtures.

"This is a rather strange result. In classical mechanics we are not astonished by the fact that a maximal knowledge of a composite system implies a maximal knowledge of all its parts. We see that this equivalence, which might have been considered trivial, does not take place in quantum mechanics. [...]

"The fact that the description we obtain for each of the two individual systems does not have the caracter of a pure case warns us that we are renouncing part of the knowledge contained in Psi(x,y) when we calculate probabilities for each of the two individuals separately. [...] This loss of knowledge expresses itself by the appearance of probabilities, now understood in the ordinary sense of the word, as expression of the fact that our knowledge about the combined system is not maximal."

[10. Reversible and Irreversible Evolution]

"I. Reversible or "causal" transformations. These take place when the system is isolated. [...]

"II. Irreversible transformations, which one might also call "acausal." These take place only when the system in question (I) makes physical contact with another system (II). The total system, comprising the two systems (I + II), again in this case undergoes a reversible transformation so long as the combined system I + II is isolated. But if we fix our attention on system I, this system will undergo an irreversible transformation. If it was in a pure state before the contact, it will ordinarily be transformed into a mixture. [...]

"We shall see specifically that measurement processes bring about an irreversible transformation of the state of the measured object [...] The transition from P to P' clearly cannot be represented by a unitary transformation. It is associated with an increase of the entropy from 0 to -k Sum_n |psi_n|^2 ln |psi_n|^2, which cannot come about by a unitary transformation."

[11. Measurement and Observation. The Act of Objectification]

"According to the preceding section, [the wave function after the measurement] represents a state of the combined system that has for each separate system, object and apparatus, the character of a mixture. [...] But of course quantum mechanics does not allow us to predict which value will actually be found in the measurement. The interaction with the apparatus does not put the object into a new pure state. Alone, it does not confer on the object a new wave function. [...]

"So far we have only coupled one apparatus with one object. But a coupling, even with a measuring device, is not yet a measurement. A measurement is achieved only when the position of the pointer has been observed. It is precisely this increase of knowledge, aquired by observation, that gives the observer the right to choose among the different components of the mixture predicted by the theory, to reject those which are not observed, and to attribute henceforth to the object a new wave function, that of the pure case which he has found."

They go on to discuss how "the observer establishes his own framework of objectivity and acquires a new piece of information about the object in question" while there is no change for us if we look at the "object + apparatus + observer" system from outside. The combined system will be in a pure state and we will have correlated mixtures for each subsystem.

London and Bauer say that the role played by the consciousness of the observer is essential for the transition from the mixture to the pure case. Cerf and Adami claim to do without the intervention of consciousness but it's not clear how they explain the transition from the mixture to something else. They say things like the following, which doesn't look different from London and Bauer to me:

"The observer notices that the cat is either dead or alive, and thus the observer’s own state becomes classically correlated with that of the cat, although, in reality, the entire system (including the atom, the γ, the cat, and the observer) is in a pure entangled state."

From a very superficial look at the blog posts you linked to, I think I agree with some things (e.g. that most of these ideas have been around for quite some time) and disagree with others (e.g. that there is no measurement problem), but I guess most of the discussion is metaphysical so it cannot be "wrong".

I think dodging the measurement problem is not solving it. The core of the problem is here:

"But, while QM predicts that you will be classically correlated, it does NOT (and cannot) predict what the outcome of your measurements will actually be."

QM does not even predict that the measurement will have a definite outcome. The transition from that probability distribution to one definite result is the measurement problem.


OK, so on your view, what could a solution to the measurement problem possibly look like?


I am not an expert and I am not sure to what extent a solution to the measurement problem is possible, or even needed, but I think it would mean that the current QM theory is incorrect or at least incomplete. In the first case (e.g. the "objective collapse" theories) the Schroedinger equation may be a very good approximation and it may be practically impossible to get an experimental/observational verification. In the second case (e.g. the "non-local hidden variables" theories) the predictions of the Schroedinger equation may be exact, so a verification could be impossible even in principle.

Weinberg, one of the greatest theoretical physicists alive, wrote a piece last year explaining why he's not satisfied with any interpretation of QM. He distinguishes two main approaches: "realist" and "instrumentalist". Here is the article and some comments: http://www.nybooks.com/articles/2017/01/19/trouble-with-quan... http://www.nybooks.com/articles/2017/04/06/steven-weinberg-p...

He gives more details in section 3.7 of his "Lectures on Quantum Mechanics" (Interpretations of Quantum Mechanics) which ends as follows:

"My own conclusion is that today there is no interpretation of quantum mechanics that does not have serious flaws. This view is not universally shared. Indeed, many physicists are satisfied with their own interpretation of quantum mechanics. But different physicists are satisfied with different interpretations. In my view, we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is only a good approximation."

The last chapter in Commins' "Quantum Mechanics: An Experimentalist’s Approach" (The Quantum Measurement Problem) gives a nice description of the problem and the proposed solutions which fall into three categories:

1.- there is no problem. [Decoherence doesn't completely avoid the problem; it makes the off-diagonal elements of the density matrix zero but the issue of going from a mixture to a specific outcome remains.]

2.- the interpretation of the rules must be changed but this can be done in ways that are empirically indistinguishable from the standard theory. [De Broglie-Bohm pilot-wave theory postulates "real" particles compatible with QM predictions, but currently only works well in the nonrelativistic setting.]

3.- deterministic unitary evolution is only an approximation. [Adding non-linear and stochastic terms produces "spontaneous localization"; by tuning the parameters we can explain both the microscopic "unitary" and the macroscopic "non-unitary" behaviours.]

"In conclusion, although many interesting suggestions have been made for overcoming the quantum measurement problem, it remains unsolved. We can only hope that some future experimental observation may guide us toward a solution."

[By the way, I sent you an email but I don't know if you got it.]


Weinberg either does not understand the multiverse theory or he intentionally misrepresents it. It is not the case that the universe splits when a measurement is made. That description begs the question because it doesn't specify what counts as a measurement. It's putting lipstick on the Copenhagen pig. A much better (though still not very good) description of the multiverse can be found in David Deutsch's book "The beginning of infinity" chapter 11. Universes do not "come into being" when measurements are made. The entire multiverse always exists. This is the pithy summary:

"Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse."

Indeed, time itself is an emergent phenomenon:

"[T]ime is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks – or of any objects usable as clocks) into the same history."

Here is Weinberg's fundamental problem:

"[T]he vista of all these parallel histories is deeply unsettling, and like many other physicists I would prefer a single history."

Well, I'm sorry Steven, but you can't have it. That's just not how the world is, and wishing it were so sounds as naive and petulant as an undergrad wishing Galilean relativity were true, or Einstein wishing that particles really do have definite positions and velocities at all times. Yes, it would be nice if all these things were true. But they aren't.


> Weinberg either does not understand the multiverse theory or he intentionally misrepresents it. It is not the case that the universe splits when a measurement is made.

I don’t know if your point is that the universe splits more often than that, or never. Note that Weinberg is referring to the usual MWI formulations; I don’t know the precise definition of the “multiverse theory” you mention. And actually he says that “the fission of history would not only occur when someone measures a spin. In the realist approach the history of the world is endlessly splitting; it does so every time a macroscopic body becomes tied in with a choice of quantum states.”

This is Deutsch’s description in a recent paper (https://arxiv.org/abs/1508.02048):

“when an experiment is observed to have a particular result, all the other possible results also occur and are observed simultaneously by other instances of the same observer who exist in physical reality – whose multiplicity the various Everettian theories refer to by terms such as ‘multiverse', ‘many universes', ‘many histories' or even ‘many minds'. “

The “plain English” description in the book you cite lacks any rigour and can be confusing if you know a bit of QM because it postulates that the universe splits already (but in what basis?) in the cases where single-universe QM works fine (the wave function can describe a pure quantum state which is a superposition). These “soft splits” can be undone and allow for interference (corresponding to the unitary evolution of the Schroedinger equation).

But, he says that “interference can happen only in objects that are unentangled with the rest of the world” and “once the object is entangled with the rest of the world [...] the histories are merely split further”. These “hard splits” are the branches in the MWI. And correspond perfectly to Weinberg’s description: “[the world splits] every time a macroscopic body becomes tied in with a choice of quantum states.”
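
The distinction can be illustrated with a two-level toy model (mine, not Deutsch's; I use a Hadamard gate as a stand-in for recombining the branches):

```python
import numpy as np

# Hadamard gate: splits and recombines the two branches.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# "Soft split": unentangled superposition (|0> + |1>)/sqrt(2).
# Recombining interferes the branches back into |0> with certainty.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
probs_soft = np.abs(H @ psi) ** 2
print(np.round(probs_soft, 3))   # [1. 0.] -- full interference

# "Hard split": system entangled with an environment qubit,
# (|0,E0> + |1,E1>)/sqrt(2), ordering: system then environment.
# The same recombination on the system alone no longer interferes.
psi_ent = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs_hard = np.abs(np.kron(H, np.eye(2)) @ psi_ent) ** 2
print(np.round(probs_hard, 3))   # [0.25 0.25 0.25 0.25] -- none
```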

In Deutsch’s multiverse the number of universes doesn’t grow because there is always an uncountably infinite number of them, but I’m not sure this makes the theory much better.

Unfortunately I don’t have time to continue this interesting discussion in the near future. At least you have to concede that in an infinite number of alternative universes you found my arguments convincing. That’s good enough for me.


Thanks for posting that. Can you reconcile these two parts for me though:

"It accounts for the apparent contradiction between quantum theory, which says that entropy is conserved in unitary transformations, and the apparent increase in entropy that arises from the randomness in quantum measurements."

And

"randomness is not an essential cornerstone of quantum measurement but rather an illusion created by it."

So is it random or is it not, in your view?


> So is it random or is it not, in your view?

It depends on your perspective. From the perspective of a classical entity, yes, it's random. From the perspective of the quantum wave function, no, it's not.


> or the video version:

Both links are identical!



The only fully honest answer to this question: it is not understood.

A good chunk of physicists (not all) would agree that the current understanding is unsatisfactory, but it's unclear whether now is the "right time" to research that question. It is unclear whether we have the experimental tools to probe the consequences of the different models, and whether different approaches produce different predictions in some testable scenario.

Of course, there is an active field of research probing this and related questions. Different physicists in there have different ideas, depending on how they interpret quantum mechanics. (See other responses in the thread for a sampling of those). But there is no satisfactory agreed-upon textbook answer.

Personally, I find some aspects of the decoherence perspective appealing.


"Collapse of a wavefunction" is a (mathematical) artifact of the currently used theoretical framework in which the quantum system is being brought into contact with a classical, i.e. non-quantum system (which is what observation, or measurement, is).


Interference is very fragile. It is used to create the most sensitive scientific instruments. It is very easy to disturb self-interference in the double slit experiment.

https://www.youtube.com/watch?v=nsaUX48t0w8


If it helps, in the multiverse interpretation wavefunctions never collapse. Don't think too carefully about observation in QM; it's just an artifact of getting the right answers.


PBS Space Time did great episode on this subject:

https://www.youtube.com/watch?v=izqaWyZsEtY


It wasn't really relevant to my question, but you're right, it's a good clip. Thanks.


Observation in QM is about information, not the actual mechanics of observation.


It's worth mentioning that "quantum theory" has no implications for consciousness, especially given the headline proclaiming the opposite. There are plenty of interpretations that don't involve consciousness at all - and hopefully my use of the word interpretation clues you in to the fact that this isn't really a physics question.

Weird philosophy aside, this does look like a good explanation of the Bell inequality.


No, you don't need consciousness. Since 1979, the concept of decoherence has replaced consciousness in the understanding of QM.


I'm deeply sick of the absurdly drawn-out living death of the Copenhagen interpretation, and its even more absurd "consciousness is made out of magic" descendants. This is bordering on straight-up irresponsible misinformation.


Maybe so, but please don't post unsubstantive comments to Hacker News.

The only information here is that you're annoyed and disapprove. That's not enough to make a post good. If you'd be willing to transmute annoyance into information others can learn from, that'd be great. If you don't want to, that's fine, but then please just don't post anything. When commenters use this forum merely as a venting outlet, quality suffers.


What do you mean? What is the death of the Copenhagen interpretation?


The Copenhagen interpretation is flat out wrong by any reasonable philosophy of science. You can choose the many-worlds interpretation, or pilot-wave theory, and either one would give you a consistent, simpler explanation of what is going on than some mumbo-jumbo hocus pocus about consciousness and observation of cats in boxes, with fewer assumptions to boot, and no paradoxical conclusions that require religion-like mysteries to explain.

Yet for inane reasons the Copenhagen interpretation is still ALL that is taught to the next generation of physicists, who in turn teach it to their students. Only the weird physics students (like me) who go "wtf?" in class and refuse to believe the teacher go out and learn pilot-wave theory (my preference) or the many-worlds interpretation to re-inject some sanity into the world.


> The Copenhagen interpretation is flat out wrong by any reasonable philosophy of science.

AFAIK, this is an exaggeration to the point of incorrectness.

The Copenhagen interpretation may be a poor interpretation which introduces absurd complicated ideas rather than the simpler ideas of other interpretations, but that doesn't make it "flat out wrong".


You ignored the "philosophy of science" part. Philosophy of science is how we decide which of two theories is "correct" when they offer the same experimental predictions. See also my reply to a sibling comment.


Unfortunately, the Copenhagen interpretation is woven into the very fabric of QM as it exists to this day, and it would be a huge deal if we could replace it with some other consistent theoretical framework that would not be modeled after classical physics with its notion of measurable properties.


What do you mean by that? Pilot-wave and many-worlds have EXACTLY the same predictions as the Copenhagen interpretation. It's the same theory, in the sense that it's the same mathematical equations, expressed differently.


The Copenhagen interpretation is refined with the concept of decoherence to avoid those problems.


That's the many-worlds interpretation that introduces decoherence to avoid the problems of Copenhagen.


There exist flavors of decoherence that do not invoke MWI (those flavors may be even more problematic however)


> The Copenhagen interpretation is flat out wrong by any reasonable philosophy of science.

Why? Is there a formal proof of that claim?


Rhetorical question: why are epicycles wrong?

An infinite sequence of epicycles could be used to accurately model any orbital path, in a similar sense to how a Taylor series can represent any (analytic) function as an infinite series of polynomials. It's not wrong in the mathematical sense, but rather the philosophical: a needlessly complex theory that is hard to work with and which provides no advantages or insight over the simpler theory is declared wrong, even if it provides, or could provide, the same predictions.
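
The Taylor-series half of the analogy is easy to demonstrate (my own illustration): each added term improves the fit, so the scheme is never "mathematically wrong", just ever more elaborate, like piling on epicycles.

```python
import math

def sin_taylor(x, n_terms):
    """Approximate sin(x) by its first n_terms Taylor terms."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

x = 1.0
print(abs(sin_taylor(x, 1) - math.sin(x)))   # crude fit, error ~0.16
print(abs(sin_taylor(x, 10) - math.sin(x)))  # excellent, error ~1e-17
```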

So it is with quantum mechanics. The standard Copenhagen interpretation of QM requires notions of observers and mysterious faster-than-light transfer of state which is really hard to reconcile with the modern scientific view of the world. The mystery surrounding it (in the religious sense) has led to disproportionately many cranks who misinterpret the theory into statements about "quantum consciousness" or other new-age nonsensical tie-ins.

However these issues arise to a lesser extent with the many-worlds interpretation, and not at all with pilot wave theory. The former is nothing more than a reinterpretation of the same equations, and the latter is a different formulation that is nevertheless mathematically identical as far as it has been worked out. If we had started with many-worlds, or better yet pilot-wave, then we wouldn't have a century of people growing up with 2nd-hand tales about how the world is governed by a mystical and incomprehensible theory of matter that even physicists don't understand. Which is utter bollocks.


Thank you, but I was looking for a proof formalized in hard and cold logic like in this paper below (so that assumptions are clear and formalized):

https://link.springer.com/article/10.1007/s11229-011-9914-8


That's missing the point. Copenhagen is not mathematically wrong. Alternatives like the many-worlds interpretation and pilot wave theory have the same (MWI) or isomorphic (pilot wave) equations. Copenhagen is epistemologically wrong in that it requires more assumptions than are strictly necessary, specifically a theory of wave function collapse. It is "wrong" by Ockham's razor.


> That's missing the point

I didn't say it is mathematically wrong

> It is "wrong" by Ockham's razor.

You do know that there are dozens of mathematical and formal versions of Ockham's razor.

Show me a formal version in which it is wrong. Otherwise arguing about this is like discussing politics where the loudest voice wins. Using English to debate the philosophical foundations of physics after almost a hundred years of Hilbert, Gödel etc. is what is wrong.


I already told you the added assumption: a theory of wave-function collapse as a physical phenomenon. Only the Copenhagen interpretation has this. I'm not going to spend my time putting that into formal logic notation to satisfy a random person on the internet. Do it yourself.


No one with a modicum of decent formal training is going to get convinced by a random person's informal rant about the Copenhagen interpretation.

> Do it yourself.

This is why the Copenhagen interpretation still lives on and will live on if its detractors are too lazy to formalize things decently.



