Apparent wave function collapse happens when a wave function in a superposition of several eigenstates appears to reduce to a single eigenstate. Apparent collapse is mathematically equivalent to quantum decoherence, where the wave function never really collapses but its states get entangled with the observer.
If somebody were able to formulate an experiment that would show the difference between decoherence and collapse, that would be new physics, and we would be able to rule out some interpretations of quantum mechanics. Until that happens, 'shut up and calculate' seems to be a valid course of action.
As far as I understand, the philosophical difference between apparent and actual wave function collapse is that in apparent collapse the probabilities of the other states get so close to zero that they don't matter, while in actual collapse they are exactly zero.
The assumption that human consciousness has something to do with setting all the other states to zero is a weird one, and I can't completely understand the assumptions behind it. I guess the idea is that we would not experience the world as we experience it now if there were just continuing decoherence.
How is this justified?
The problem with this interpretation is the fact that density matrices work: that certain probability-weighted ensembles of states are completely, provably equivalent to other probability-weighted ensembles of states. If you don't interpret collapse as physically real, then it remains to be explained why this should be the case.
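To see the point about density matrices concretely, here's a minimal sketch (Python/NumPy, with two made-up ensembles): two different-sounding probability-weighted preparations of a qubit yield the exact same density matrix, and therefore the same statistics for every possible measurement.

```python
import numpy as np

# Ensemble A: 50% |0>, 50% |1>
# Ensemble B: 50% |+>, 50% |->, where |+-> = (|0> +- |1>)/sqrt(2)
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def density(ensemble):
    """Density matrix rho = sum_i p_i |psi_i><psi_i|."""
    return sum(p * psi @ psi.conj().T for p, psi in ensemble)

rho_A = density([(0.5, ket0), (0.5, ket1)])
rho_B = density([(0.5, plus), (0.5, minus)])

# Same density matrix => no measurement can tell the ensembles apart.
print(np.allclose(rho_A, rho_B))  # True
```

Both come out as the maximally mixed state I/2, which is the "provably equivalent ensembles" fact the comment is pointing at.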
Explanation is only needed if decoherence is not a good enough explanation.
That's a description of the photon __before__ decoherence and before observation. If you construct the combined wave function after the photon has interacted with the photon sensor, those probabilities have settled one way or another.
(Scare quotes because people use "decoherence" to mean multiple different things)
You might as well postulate there are magical pink unicorns dancing on every particle. By your logic that theory also can't be ruled out.
It won't happen in a closed box, but what about an open box in a closed room? What about a closed box with a live video camera? What about a live video camera whose display no one is watching?
I'm sure this is a basic question but for the life of me, I've never really understood it. Thanks.
There are, broadly, three kinds of interpretations of what's 'really' going on behind the scenes:
1. Collapse - The world exists in an indeterminate state until observation occurs, when the world collapses into a single determinate state. The probabilities of quantum mechanics map onto the different parts of the unobserved indeterminate state.
2. Many Worlds - Quantum phenomena cause the world to branch into multiple worlds. The probabilities of quantum mechanics represent the 'share' of reality that branches in each direction.
3. Hidden variables - The probabilities of quantum mechanics are artifacts of our inability to know all the relevant variables. Measurement necessarily involves causal contact, and causal contact will always disturb some of the relevant variables in unpredictable ways. It would be cool if we could observe what goes on when measurement occurs, but to do that would require measurement! So we're stuck with hidden variables.
The public tends to hear the collapse interpretation most often. Physicists tend to like the many worlds interpretations. Philosophers of science tend to like the hidden variables interpretation, because the other options require an incoherent metaphysics. (I'm a Philosophy PhD student.)
People say that the hidden variables interpretation is ruled out on experimental grounds, but this is demonstrably false. Experimental data shows that hidden variables, if they exist, violate locality:
Locality - Causal interaction is a local phenomenon. No action at a distance.
So long as you're willing to abandon locality, hidden variables can work. Given that the other two interpretations posit equally weird things, abandoning locality won't seem so weird.
For more on this, see Tim Maudlin's excellent paper, "Three Measurement Problems"
As a very loose analogy, the world is "normal" (no collapsing or multiple universes or anything). It's simply that every particle leaves a "wake", just like a boat on a lake. That's it.
Classical interaction is two boats interacting with each other (crashing into each other, or one pushing the other, or pulling the other like a tugboat). And the only way two boats can affect each other is directly.
Quantum interactions are "wakes"/"waves" in the water interacting with boats. One boat can be far enough away, but if it's big enough its wake can move your boat. This, as you'd imagine, happens more to smaller, lighter boats. And as waves push small boats around, they affect where a boat will be when you look to measure them. (Measurements also create a wake.)
Finally, a boat's wake can interfere with another boat's wake. And that can be constructive or destructive interference, just like two waves in the water can cancel each other out. Finally, a boat on the other side of the lake can affect your boat via its wake, meaning interactions aren't strictly "local".
It's that simple (more or less): it explains all the predictions of QM, and yet it's taking forever for Copenhagen to die off, along with all the absurd questions of collapse and the rest...
De Broglie-Bohm is the only interpretation that makes sense; people are just slow to accept that locality isn't always the case.
The pilot wave is the entire uncollapsed wave-function, exactly like in many worlds. There is also a particle and instant, universe-wide collapse, neither of which are detectable even in theory. Nothing except the wave function affects the dynamics at all.
Physics has a history of elegant theories that eventually turned out to be wrong. Phlogiston theory looked great at the time, and things like the one-electron universe strike us as beautiful when you first read about them.
According to Sean Carroll, this is because if you 'buy' the mathematics of the wavefunction, even before collapse you already have to accept "many worlds" at the quantum level. "many worlds" for them simply means superposition/linear combinations. After all, if you already accept "superposition" at the level of quantum states, you don't have to invoke anything new to deal with so-called "collapse", if everything simply stays as superpositions i.e. linear combinations.
Or, in other words, if you assume classical behavior first, and you need to get that out of superpositions at the quantum level, you need wavefunction collapse.
but if you start with superpositions at the quantum level, you already have superpositions, and then classical behavior can simply be derived from locating yourself in one of those superpositions.
Explained this way, "many worlds" doesn't seem so shocking, if you have already resigned yourself to quantum level superpositions.
I think of many worlds as kind of a literal interpretation that quantum superpositions are fundamentally real, as opposed to quantum superpositions merely being a very accurate mathematical model. (Most scientists think QM represents something real instead of some lucky equations.)
All three interpretations of QM 'accept' superposition, construed as a mathematical construct. What's at issue is how this mathematical construct maps onto reality.
If you mean something more by 'accepting superposition', you have to spell out what that is. And doing that just leads you right back to the three different interpretations.
So it doesn't really matter if it's a map or not - it tells you something about the territory.
 The completeness of quantum theory for predicting measurement outcomes, https://arxiv.org/abs/1208.4123
 On the reality of the quantum state, https://arxiv.org/abs/1111.3328v3
I'd phrase this as: the main assumption is that reality is literally encoded by some sort of objective hidden variables theory.
But a mathematical model != a theoretical model. The very same mathematical model will be compatible with uncountably many theoretical models. (By 'theoretical model' I mean something like 'an interpretation of what the math is representing'.) So you can't read off theoretical structure from mathematical structure. And so you can't read off the structure of the world from mathematical structure.
Occam's razor favours a theoretical model that corresponds more closely to the mathematics, rather than one that adds a bunch of epicycles to arrive at a different interpretation. If you follow your logic then you can never reject geocentrism, because it's possible to create a geocentric model that generates the same predictions as a heliocentric one; nevertheless we would generally say that heliocentrism is "more true" and "more physically real" than geocentrism.
I just linked to the first article I found that talked about the subject. And it's interesting that you look for something lazy instead of addressing more serious issues like Newton's model vs Einstein's. Very clearly the (much!) more complex answer is more correct than the simple one. Here's another link discussing the issue that gives 2 examples.
Search for "Occam"
You should give links you're willing to stand by. I didn't go looking for something lazy, I went looking for talk about fundamental physics (where Occam's Razor is appropriate, and what we're talking about) and found only the barely even wrong throwaway line about M-theory.
> Very clearly the (much!) more complex answer is more correct than the simple one.
Newton and Einstein don't make the same predictions. When you include the amount that you'd have to add on to Newton's theory to generate the predictions Einstein gives you about the behaviour of light, Einstein ends up simpler.
> Here's another link discussing the issue and gives 2 examples.
The "new particle" example shows the opposite of what's claimed. Bethe's only reason to dismiss a new particle as an explanation was Occam's razor. The part about Newton/Einstein is just wrong about the history; relativity wasn't developed as an effort to explain the orbit of Mercury, it was developed out of Maxwell and Lorentz's work on electromagnetism. A universe in which the only reason to believe relativity was those deviations in the orbit of Mercury probably would be a universe in which the true theory was Newtonian gravity with small correction terms, not a universe in which relativity was true.
Thank you for pointing out the other fault of Occam's razor: both sides of most arguments tend to assume the razor is on their side. This aspect was addressed in the second article I linked.
In general, it's hard to use Occam's razor in a non-question-begging way.
> ...we're not merely constructing a map, but a model, that is, we try to feel out and understand the territory, what it's made of, and how it works.
'map' is a metaphor there. Your description of what physics is doing still fits within the metaphor. It doesn't change the fact that there is something 'out there', and then there's our description of it, via physics, and our description is not the thing that's out there.
Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?
No, I think that a mathematical expression describes the behavior of something real as close as possible to our knowledge -- it's not just a picture of how it looks.
In other words, in contrast with the map analogy, the equations for electrons etc can be used for a simulation.
>Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?
Why assume one can describe the territory in just a single (perfect) level of representation? For some things, not even the full precision we can muster today is needed (e.g. I don't need Relativity to know where a baseball will land).
To the extent that "is" means anything, yes.
> Suppose we develop a unified theory in physics which supersedes current formulations of quantum theory. Then would the old quantum theory still BE the territory at that point? Or would it be just a less accurate description than the new one we came up with?
I think it's useful to treat "isness" as something analogue rather than binary here. The old theory sort-of-is the territory. The new theory a-bit-more-is the territory. But there's no Platonic ideal of the territory as a separate thing from the mathematical model that reality implements. There is no there there.
That's what they really think of many worlds. And, really, is it any worse than any of the alternatives?
> I think of many worlds as kind of a literal interpretation that quantum superpositions are fundamentally real, as opposed to quantum superpositions merely being a very accurate mathematical model.
If the stance is predicated on that assumption, as you suggest (and I'm inclined to agree, having read other of Carroll's writings), then I have to continue disagreeing. If the rest of the world of physicist luminaries were unanimously aligned with Carroll it would be more difficult (but still tempting...)—but luckily that's not the case.
I seem to have access to only one of these universes. How does nature decide which determinate state my universe takes? It still requires the magical collapse. (Or is it that I am branching off too and losing access to myself in other branches?)
>> So long as you're willing to abandon locality, hidden variables can work.
(Asking, not questioning) Why is this so? I have read a bit or two about Bell's Inequalities, but do not understand why locality needs to be abandoned. With particles having random but entangled spins showing correlations even when separated, how do we know there was no random hidden state associated with these particles when they separated? That is, how do we know that the decision on which particle has which spin did not happen during separation itself (getting stored as a hidden variable)?
You're branching too. It remains to be explained why your subjective probabilities of which branch you find yourself in follow the Born probabilities, but that's the only known way of assigning subjective probabilities that behave as probabilities should (in terms of things like "probability of A or B <= probability of A + probability of B").
> That is, how do we know that the decision on which particle has which spin did not happen during separation itself (getting stored as a hidden variable)?
Conway's "Free Will Theorem" offers a simplified proof that this can't happen: if you measure the particle along three orthogonal axes then you will always get two of one result and one of another, but there's no way to preassign values to all possible axes such that this remains true. So the result you get from measuring in a given direction cannot be fixed if you have a free choice about which set of directions you're measuring in.
Yes, you also split into branches. Of course the whole branches thing is a rough approximation - for example quantum computers work because they aren't branching.
> Why is this so? I have read a bit or two about Bell's Inequalities, but do not understand why locality needs to be abandoned.
I would recommend looking up the explanation from Dr. Chinese. Briefly it's because you measure each half of an entangled pair randomly on 1 of 3 angles, and you see that measurements of equal angles always disagree, yet measurements on different angles agree over 50% of the time. That's impossible to do with pre-determined values.
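You can brute-force that impossibility claim. A sketch (my own toy version of the Dr. Chinese-style argument, assuming the singlet setup with three angles at 0°, 120°, 240° and perfect anti-correlation at equal angles): enumerate every possible pre-assignment of values and check how often different-angle measurements can agree.

```python
import math
from itertools import product

# Equal angles always disagree, so a local hidden-variable model must
# pre-assign values: particle 1 carries (v0, v1, v2), particle 2 the negation.
best = 0.0
for vals in product([+1, -1], repeat=3):
    # outcome1 = v_i, outcome2 = -v_j; they "agree" iff v_i == -v_j
    agree = sum(vals[i] == -vals[j]
                for i in range(3) for j in range(3) if i != j)
    best = max(best, agree / 6)

print(best)  # 2/3: no pre-assignment lets different angles agree more often

# Quantum prediction for the singlet: P(agree) = (1 - cos(theta)) / 2.
p_qm = (1 - math.cos(math.radians(120))) / 2
print(p_qm)  # ~0.75, beating every hidden-variable assignment above
```

The quantum 75% agreement rate exceeds the 2/3 ceiling that any pre-determined assignment can reach, which is the Bell-type contradiction in miniature.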
Or rather, the branch universes diverge and do their computing in parallel, and then finish by matching perfectly, with all universes having the same answer. Any interference from the environment outside of the qubits results in the universes not being perfectly similar at the end and you don’t get an answer back.
It's difficult to explain concisely, but John Stewart Bell wrote a nice albeit lengthy essay on this exact question:
Abandoning causality (which goes along with locality) won't seem so weird?
But I'll just say that most philosophers don't find anything incoherent in the idea of nonlocal causation. (Hume's classic attack on local causation might be worth a look.) See BoiledCabbages' comment for a nice explanation of a model that involves nonlocal causation.
In the famous double-slit experiment, single particles, such as photons, pass one at a time through a screen containing two slits. If either path is monitored, a photon seemingly passes through one slit or the other, and no interference will be seen. Conversely, if neither is checked, a photon will appear to have passed through both slits simultaneously before interfering with itself, acting like a wave.
What does "monitored" mean here? The problem is that everywhere I look, I find a synonym of "monitored". My question is, "How does that break down?"
That's not the difficult part. Any physical interaction that causes a correlation between the particle's state and some other state (effectively making that thing a detector) will cause this. This includes air molecules, which is why we have to isolate the systems very well.
After interacting with the detector, the particle+detector system is still in a superposition of two states (corresponding to the two slits). When the particle hits the screen, the screen gets in on the entanglement. At all times there are (effectively) two paths, including when your body and brain interact with the experimental setup.
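Here's a small sketch of that entanglement picture (a toy two-state model of my own, not the real experiment): build the particle+detector state, trace out the detector, and watch the particle's interference terms survive or vanish depending on whether the detector states distinguish the paths.

```python
import numpy as np

# Particle path states |U>, |L>; joint state after the interaction is
# (|U>|DU> + |L>|DL>)/sqrt(2) for detector states |DU>, |DL>.
U = np.array([1, 0], dtype=complex)
L = np.array([0, 1], dtype=complex)

def reduced_rho(DU, DL):
    joint = (np.kron(U, DU) + np.kron(L, DL)) / np.sqrt(2)
    rho = np.outer(joint, joint.conj())
    # Trace out the detector: reshape to (particle, detector, particle,
    # detector) indices and sum over the detector diagonal.
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Perfect which-path detector (orthogonal states): the off-diagonal
# "coherence" terms of the particle's density matrix vanish -> no fringes.
perfect = reduced_rho(DU=np.array([1, 0]), DL=np.array([0, 1]))
# Useless detector (identical states): coherence survives -> interference.
useless = reduced_rho(DU=np.array([1, 0]), DL=np.array([1, 0]))
print(perfect)
print(useless)
```

The "two paths at all times" picture is all in the joint state; only the reduced description of the particle alone loses the interference.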
So why do you see only one result?
One answer (the Many-Worlds Interpretation) is that you don't see one result. One "copy" of you sees this one and "another" sees the other. Another possibility is that there really is some particular event that "collapses" the wave function, but AFAICT people mostly use this interpretation for practical reasons, bypassing the philosophical conundrum.
Or, if you insist that you see only one result, but you still like MWI, then the question becomes: why did I end up on this branch rather than another one? But any explanation would have to come from outside the system (the multiverse), more or less, so I wouldn't hold my breath waiting for an answer.
If you like MWI, then the answer is you end up on every branch. It's like another invocation of the hateful anthropic principle - in every branch you are asking about the result you saw in that branch. You are asking why you ended up in this branch because this is the branch in which you exist to ask that specific question.
The real question of MWI for me is: where the heck are all these squillions of universe copies created every second, where does the energy come from (and go to) to make squillions of universes' worth of matter and to create universe-sized places to put them, and how does you interacting with a photon on Earth cause a copy of the Andromeda galaxy anyway?
How exactly, in moment-to-moment physical detail, do these splits happen?
And as for branching - how are observers assigned among universes?
The MWIers will shrug and tell you observer experiences are distributed at random.
Well - fine. So why can’t the collapse experience be inherently random too, without demanding squillions of Occam-busting unobservable universes?
It seems like the least plausible interpretation. Not only does it multiply entities for no good reason, and assume they can never be observed, but it doesn’t even solve the original problem.
“Yes, but the math...” isn’t a defence because there are plenty of situations in physics where technically correct solutions are discarded as unphysical. Why make an exception here?
You experience a superposition of universes. Why do you think you aren't? What would you expect to be different?
> How exactly, in moment-to-moment physical detail, do these splits happen?
The wavefunction is what's physically real. When you can express the wavefunction as a linear combination of two other wavefunctions, then it makes sense to model this as a split, and think of it as the universe dividing in some process like cell division (a wavefunction that's almost a linear combination of two other wavefunctions, but has a small interaction term, is like a universe that has almost split into two, but the two parts are still slightly attached). Sometimes a split happens at a discrete point corresponding to a discrete physical event - e.g. if a proton and an antiproton in known spin states that sum to 0 collide to generate two gamma rays, then that's a distinct, discontinuous event (and you'll see that in the wavefunction) and after it the wavefunction is expressible as a linear combination of two wavefunctions. So we interpret this as "when the particles collided, the universe branched", and like most models that's a simplification of reality, but it's a good one that generally gives correct intuitions. But if you ever want the exact physical detail, just look at the wavefunction.
> Well - fine. So why can’t the collapse experience be inherently random too
It can, but the collapse is a much bigger assumption than subjective probability. It violates unitarity, it's unclear when it happens, it introduces a notion of observer and a direction to time and all of that - and for what? It doesn't let you get rid of the superpositions (you still have to assume that the superpositions exist before the "collapse", so you still need all the machinery that you would need to implement many-worlds). Occam's razor says we can and should do without it.
> without demanding squillions of Occam-busting unobservable universes?
By that logic the assumption that the universe is billions of light years across should require a huge pile of evidence, much more than we have. Occam doesn't say we should be parsimonious about the size of the universe we assume, it says we should be parsimonious about how many physical rules we assume.
Because the brain is warm and messy?
And Occam does not work that way.
Well, Occam is not a physical law, so that's neither here, nor there.
It's the same "mass" of universe, just split across squillions of pieces. Like a tree trunk dividing into two branches, or a river confluence in reverse. (In fact it makes no difference if there were always two universe branches, it's just that they happened to contain exactly the same physical events up until the point where they didn't). And actually it's continuous rather than discrete: most of the time rather than dividing into distinct branches it's more like it's spreading out across a landscape. The branches are an approximation to let us compute more easily rather than something fundamental (e.g. we take this big continuous landscape of "universes" and draw a line between the region where a given photon was up and the region where that photon was down, and treat this as two distinct universes, because what we're interested in is whether the photon was up or down, and for our purposes any two points in the universe-landscape in which the photon was up are equivalent to each other).
> how does you interacting with a photon on Earth cause a copy of the Andromeda galaxy anyway?
It doesn't; when you interact with the photon which is, from your perspective, in two forks, you entangle your own fate with it, and from the perspective of an observer in Andromeda, both you and the photon are now in two forks (the one where the photon was up and you measured it as up, and the one where the photon was down and you measured it as down). As and when the observer in Andromeda interacts with you, then they can also be seen as divided into two forks. But if the observer in Andromeda is behaving exactly the same way in either fork, then it makes more sense to see them as being in a single unified universe. (Again, the very concept of "forks" is a simplification over continuous reality; the point isn't that there's any concrete sense in which the observer in Andromeda is in one rather than two realities whereas you are in two rather than one, it's just that when we want to talk about you we can draw a line in the universe landscape (between universes in which you measured up and universes in which you measured down) and treat the universe landscape as a fork between two universes, while when we want to talk about the observer in Andromeda there is no value in using that particular model (though there's nothing wrong with doing so, and if you compute predictions about the observer in Andromeda while treating them as being in such a superposition you will obtain correct results)).
BTW, I added a paragraph to my answer above.
The observer can be anything. You are you, so by the time information comes to you, you've become entangled.
I find this interpretation to be mostly wordplay. After all how can something be "real" if you can never observe it or detect it? It's sort of saying the theoretical construct "those other worlds" are real, but in what sense?
Bohmian mechanics makes way more sense to me.
What about other minds? Do you ever observe/detect someone else's experiences, thoughts, feelings? Or just their behavior, from which you infer a mind?
- Marking the photon itself (e.g. by rotating its polarization with a waveplate).
- Placing a crystal that does spontaneous parametric down-conversion after one of the slits. This splits the photon into two photons. Aim the secondary photon at a detector. Detector clicks -> went through that slit. No click -> went through other slit. But this will also mark the photon itself with a change in frequency, so really should do it to both slits.
- Having a magical device that stores a bit and toggles it anytime a photon passes through a piece of glass. Place the magical toggle-glass over one of the slits.
Anything that can distinguish the case where a photon goes through the left slit from the case where a photon went through right slit will work fine.
It doesn't. This is a reasonable approximation to the truth in many common situations, but it is not the truth.
The truth is that measurement and entanglement are the same physical phenomenon. See:
or the video version:
The video link is the same as the paper; if you find it, please post it. Thanks again.
When they present the hypothetical Schrödinger's Cat thought experiment, it's a thought experiment assuming somehow the box prevents all interaction between the cat inside and the outside world. In the real world however things like sound waves, light, x-rays, etc, will inevitably penetrate such a box. And even if the cat moves, it will cause some amount of detectable vibrations on the outside of the box. It would be impossible to theoretically prevent any energy from the cat inside the box to emanate outside as that would violate the law of preservation of information (even a black hole—the closest thing to an absolutely sealed box—releases information about its contents and expels energy).
There is no explanation for why the wave function collapses due to interaction other than "god rolling dice" and picking a collapsed outcome once such an outcome becomes relevant (through interaction/observation). Other interpretations of wave function collapse don't even suppose the collapse happens. For example in the many worlds interpretation, instead of having a single observer (the universe and its participants), there are multiple universes and multiple observers. In this interpretation there is no collapse of the wave function. Instead of many possibilities and one getting picked, there are many observers each independently observing one of the possible outcomes. Here the dice rolling still happens, but happens for every observer constantly (when deciding what outcome is observed next).
Strictly speaking, this is incorrect. The interaction between a photon and an electron, for instance, or an electron in an atom with the atom's nucleus are not "observations" and do not lead to the collapse of the wavefunction.
Observation (a.k.a. measurement) in QM implies interaction with "classical", i.e. non-quantum mechanical object.
The interesting thing about the controlled-Z is that it is actually a symmetric operation. You could think of it as "if A is ON then apply a Z to B", but it is exactly equivalent to say it is performing an "if B is ON then apply a Z to A" effect. The truly symmetric description is "apply a -1 factor to the weight of cases where A and B are both ON".
Because the controlled-Z is symmetric, and every measurement ultimately involves a controlled-Z, you can't move details about A into B without also kicking back details about B into A. When B is a big well-approximated-by-classical-physics system, the back-effect plays out in a way that we call collapse.
Said another way, B (you and/or instruments) can't get information about A (the quantum system) without collapsing it.
You can look at a better explanation here
In that case, our knowledge allows for more possibilities before making a measurement (observation), and collapses into something more definite after the measurement.
That's probably not the best description, but I think the basic idea about modeling knowledge makes a lot of sense. Eddington was primarily a physicist, but also a philosopher. I read this account of his in 'The Philosophy of Physical Science.'
If I'm not mistaken, it's pretty much in line with the 'Pilot Wave' interpretation which is getting more attention these days.
Edit: in other words, when you say:
> The mistake in this line of thinking is in the tacit assumption that a quantum system possesses a certain property prior to or independent from a measurement. It does not.
You are only asserting that's the case, not defending it. The rest of your post operates on the opposite, undefended assumption that a quantum system does not possess a certain property prior to or independent from a measurement.
But the whole point of the alternative interpretation I pointed out is a way of reconciling the observations with an essentially classical world (at least in the sense that there is a single objective reality which exists independent of observation)—do you see a way in which it fails to do that? My understanding is that considering the probabilistic aspect of the theory to be modeling our knowledge of the system instead, successfully accounts for the fundamentally 'non-classical' observations.
It just seems hasty to throw out such a seemingly well-supported hypothesis (single objective reality independent of viewer), if there are two effective interpretations, one of which conflicts with it, the other of which does not. From my reading, the primary reasons for the mainstream physics community having adopted the stance which competes with the 'objective reality' hypothesis (i.e. Many Worlds or Copenhagen interpretations), are historical accident and political, while the intrinsic merits of the alternatives are near equivalent.
If we're going to play that game, I'll choose spooky action at a distance any day, and say that space is just a strong correlation MOST of the time, but doesn't have to be ALL the time. I wonder how much classical baggage YOU bring by thinking Bell's inequality disproves something because nonlocal interactions are not possible.
What's easier to imagine - correlations between particles far away or many universes which are not actually real in any sense you can define?
Many universes, of course. At least that doesn't mess with the workings and causality of this universe.
How is this any different than the multiverse theory as a non-explanation to the fine tuning? You simply push the problem elsewhere.
Just because it predicts outcomes (probabilistically!) very accurately doesn't mean we have the full picture.
(How does gravity fit into this picture? It's completely left out of QM!)
Your comment is wrong. In the third of the three interpretations of QM (de Broglie-Bohm) this is exactly the case. The world is at all times in a well-defined state. We just don't know it until we measure, so we use probabilities to describe it. All three QM interpretations are mathematically indistinguishable.
But any property we can measure is not well-defined, in the sense that before we have actually measured it, it doesn't exist.
Note "our" knowledge there. If I am making a measurement, and you are disconnected from me, how and when does it become our knowledge? How is it that one human measuring results in a collapse, with no other human knowing about it? Does something more happen when this first human tells the others about it?
That's counter to a century of experiments and theories. If that was the case, QM would be a trivial classical theory.
> What matters is if you extract information
> It does not require a conscious observer to be present.
But does it require a “you” trying to figure out and extracting information?
That is absolutely 100% not what happens. The experiments play out the same way whether or not a human's face happens to be nearby. All the data is collected automatically by photographic plates or electronic counters or other instruments, and "observation" always refers to some instrument being present or not.
P.S. Everyone - thanks! I get it now. And it's fascinating!
The truth is that measurement and entanglement are the same physical phenomenon. See:
But you provide an example of measurement causing the collapse of the wave function yourself!
"Now let us add a detector at the slits to determine which way the photon went. To describe this situation mathematically we have to add a description of the state of the detector:
(ΨU |DU> + ΨL |DL>)/√2
where |DU> is the state of the detector when it has detected a photon at the upper slit and |DL> is the state of the detector when it has detected a photon at the lower slit. Now the probability density is:
[|ΨU|^2 + |ΨL|^2 + (ΨU ΨL* <DU|DL> + ΨL ΨU* <DL|DU>)]/2"
That describes the quantum system formed by the photon and the detector. And when you reduce the density matrix to its diagonal, setting the interference terms to zero in a non-unitary way, you are collapsing it to a "classical" mixture of two states corresponding to the possible outcomes of the measurement (as described by von Neumann 85 years ago).
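To make that concrete, here is a toy numerical sketch (my own, with made-up plane-wave slit amplitudes, not the presentation's code): the interference term at the screen is weighted by the detector overlap <DU|DL>, so the fringes have full contrast when the detector learns nothing (overlap 1) and vanish when it records perfect which-way information (overlap 0).

```python
import numpy as np

x = np.linspace(-5, 5, 1001)   # position on the screen (arbitrary units)
k = 2.0                        # toy phase gradient between the two slit paths

# Idealized slit amplitudes at the screen (plane-wave phases only)
psi_U = np.exp(1j * k * x)
psi_L = np.exp(-1j * k * x)

def screen_density(overlap):
    """Probability density with the cross term weighted by <DU|DL> = overlap."""
    cross = 2 * np.real(psi_U * np.conj(psi_L) * overlap)
    return (np.abs(psi_U)**2 + np.abs(psi_L)**2 + cross) / 2

fringes = screen_density(1.0)  # no which-way info: density 1 + cos(2kx)
flat = screen_density(0.0)     # perfect which-way info: constant density 1
print(np.ptp(fringes), np.ptp(flat))  # full fringe contrast vs. none
```

Note that no collapse is invoked anywhere in this computation; orthogonality of the detector states alone kills the cross term.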
Yes, that's true, but that is not the same thing as "measurement causing the collapse of the wave function". Collapse is not a physical phenomenon, it is a mathematical approximation. When you measure a particle, nothing happens to that particle as a result of the measurement. It does not suddenly "decide" to go through one slit or the other. Anything that "happened" to the particle happened not as a result of the measurement but as a result of the initial entanglement with the ancilla.
Von Neumann also believed that consciousness was required to collapse the wave function. This is a non-falsifiable and hence non-scientific theory (which is the problem with all collapse theories). Von Neumann was a really bright guy, and he very nearly got the right answer, just as Lorentz very nearly got to relativity before Einstein. But just as Lorentz refused to let go of the idea of Galilean invariance, Von Neumann refused to let go of the idea that classical reality is a primary phenomenon. It isn't. Classical physics is an emergent, not a primary phenomenon. There are no particles, only quantized fields. (To be fair, Von Neumann did not live long enough to see Bell's theorem or the Aspect experiment, so he didn't quite have all the information he needed. But he was bright enough that he probably could have come up with Bell's theorem on his own if he'd decided to go that direction.)
From the quantum state of the particle/detector which is a superposition of both possible outcomes, what happens according to you as a result of the measurement? Is there a definite result?
What do you mean when you say that "if the detector is working properly then these amplitudes are zero and the interference term vanishes"? Is going from non-zero amplitude to zero amplitude a physical phenomenon? Why does it happen?
OK, the content is essentially the same.
> Or maybe I'm missing your point completely.
Yep, I'm pretty sure that's the case. You should probably re-read section 3.
Section 3 presents an argument for why it cannot be the case that a measurement causes a physical change in the system being measured (because it would result in FTL communication). If you don't see how that's relevant then you are beyond my ability to help.
> supposed to be more fundamental
You are reading way too much meaning into that introductory sentence. The two-slit experiment is often presented as fundamental because it is easy to describe, and because historically it was done very early in the development of quantum mechanics, but it is not fundamental. Indeed it is this false assumption that 2-slit is fundamental that has led to the horrible pedagogical mess that QM suffers from to this day. It is simply false that no one understands QM, or even that QM is particularly hard to understand. It's just that the usual pedagogy of QM is based on a false assumption.
In 4.1 you look at a couple of similar examples that are quite simple, a particle/detector system in the first half ("measurement") and a particle/particle system in the second half ("entanglement").
The "measurement" part starts like von Neumann's measurement scheme, which is a two-step process. In the first step, one assumes that the "target" and the "probe" are composed into a single quantum system (there is entanglement like in the particle/particle system considered later, I agree with that). This system is in principle in a superposition of states; the density matrix does contain interference terms.
In the second step, one applies a non-unitary operator to actually take the measurement at the classical/macroscopic level, and the interference terms disappear. Instead of a pure quantum state we end up with a diagonal density matrix representing a mixed state (the system is either in one state or the other; we don't know which one, but we know the corresponding probabilities).
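In matrix form the two steps look like this (a minimal numpy sketch of my own, not anyone's actual code): the pre-measurement state is pure and its density matrix carries interference terms in the off-diagonal corners; von Neumann's projection zeroes them, dropping the purity tr(ρ²) from 1 to 1/2.

```python
import numpy as np

# Pre-measurement entangled state (|u,+> + |d,->)/sqrt(2),
# basis order |u,+>, |u,->, |d,+>, |d,->
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())      # pure state, interference terms present
print(rho.real)                      # corner entries are the |u,+><d,-| terms

# Step two: non-unitary projection keeping only the diagonal
rho_mixed = np.diag(np.diag(rho))    # classical 50/50 mixture

print(np.trace(rho @ rho).real)              # 1.0 (pure)
print(np.trace(rho_mixed @ rho_mixed).real)  # 0.5 (mixed)
```

The projection is manifestly not unitary: no unitary can change tr(ρ²), which is exactly the conceptual problem being discussed.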
I still don't understand how the second step works in your approach and why the interference terms vanish.
I understand even less how you conclude that "the entanglement destroys the interference in exactly the same way (according to the mathematics) that measurement does." The entangled pair looks identical to the pre-measurement target/probe pair, but interference is still present at that point (and vanishes because of the measurement, whether or not we call this the collapse of the wave function).
By the way, in the video there is a slide saying that the resulting wave function is |ΨU|^2 + |ΨL|^2, but that's not a wave function.
But since you are obviously well versed in the math, I suggest you read the Cerf and Adami paper that my presentation is based on: https://arxiv.org/abs/quant-ph/9605002 Also, Zurek's work on decoherence. Those will probably answer your questions better than I can. I'm just a popularizer, not a physicist.
Reading your paper I thought something happened, somewhere in the transition from page 6 to page 7, because I understood that you were getting a well-defined value out of the measurement. The paper you linked now makes clear that in this model nothing happens at all, ever. Anything less than the wave function for the whole universe is just an illusion, there is no classical world.
It's not an accident that your description looked like the standard measurement scheme from von Neumann, as they are just reproducing it. They seem to think that when von Neumann described the measurement process as an interaction of two quantum systems he didn't notice that this interaction was beyond the classical concept of correlations, and this is why he needed a second stage to perform the measurement and collapse the state vector.
In the pre-measurement step, the 'target' (system to be measured, with states |u> and |d>) and the 'probe' (quantum detector, with states |+> and |->) are combined. This system will be a superposition of the states |u,+> and |d,->. The density matrix includes the diagonal terms |u,+><u,+| and |d,-><d,-| and also the off-diagonal terms |u,+><d,-| and |d,-><u,+|.
Von Neumann introduces a projection step which keeps only the diagonal terms. The resulting density matrix is no longer a pure state but a mixed state (for a closed system, a pure state corresponds to a density matrix with a single nonzero eigenvalue, so a diagonal matrix with several nonzero entries has to represent a classical mixture of the corresponding pure states and cannot be a superposition). After the measurement, following von Neumann, we have a statistical ensemble of the possible outcomes, and choosing one of them doesn't pose any conceptual problem.
Cerf and Adami avoid the second step by taking the entangled system, ignoring the state of the target and looking only at the probe. Mathematically, one obtains a reduced density matrix by tracing out one subsystem and the result is a diagonal density matrix which contains only |+><+| and |-><-| without any interference terms. However, this is an improper mixed state and cannot be interpreted as before (because it's not the description of a full system but the result of looking at a subsystem).
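The tracing-out step is easy to verify numerically. A sketch of my own (not Cerf and Adami's code): tracing the target out of the entangled pure state leaves a reduced density matrix for the probe that is already diagonal, with no projection applied anywhere.

```python
import numpy as np

# Entangled target/probe state (|u,+> + |d,->)/sqrt(2),
# written as a matrix psi[target, probe]
psi = np.array([[1, 0],
                [0, 1]], dtype=complex) / np.sqrt(2)

# Full density matrix rho[i,p,j,q] = psi[i,p] * conj(psi[j,q])
rho = np.einsum('ip,jq->ipjq', psi, psi.conj())

# Reduced density matrix of the probe: trace out the target index
rho_probe = np.einsum('ipiq->pq', rho)
print(rho_probe.real)  # [[0.5, 0], [0, 0.5]] -- diagonal, no interference terms

# But it is an improper mixture: purity 1/2, even though the full state is pure
print(np.trace(rho_probe @ rho_probe).real)
```

The math produces the same diagonal matrix as von Neumann's projection, which is exactly why the interpretive question (proper vs. improper mixture) remains.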
Their reduced density matrix does not represent a defined (but unknown) state because the subsystem remains entangled with the rest of the system (target+probe). They say that the full system continues to evolve in a unitary way without any kind of collapse, but then how is any measurement done at all? Their description of the detector is an (improper) mixture of |+> and |->, at what point do we get one or the other as a result?
Note that Cerf and Adami consider their model completely unrelated to Zurek's. Decoherence and einselection are interesting, provide a physical mechanism to explain how the interference terms become (close to) zero in the "correct" basis in a (practically) irreversible way, and have some experimental support. But it doesn't completely solve the problem of measurement: how does the system settle on one particular outcome?
But to answer your question:
> at what point do we get one or the other as a result?
That is like asking "at what point does a blastocyst become a fully fledged human being?" There is no sharp dividing line between the quantum and the classical. There is no "point" at which the one transitions sharply to the other. There is a range beyond which the classical approximation is good enough for all practical purposes, but the physical world never "actually becomes" classical, just as gravity never "actually becomes" Newtonian.
> Cerf and Adami consider their model completely unrelated to Zurek's
I'm not sure that "completely unrelated" is a fair characterization. I think they would regard it as an "improvement". (But they are both still alive. We could ask them.)
BTW, you might want to read this:
and its precursor: http://blog.rongarret.info/2014/09/are-parallel-universes-re...
It may or may not answer any of your questions, but I would be very interested in your feedback either way. (And keep in mind that this was written for a lay audience.)
I find it interesting that London and Bauer proposed a similar model already in 1939 (ref. in the "Quantum mechanics of measurement" pre-print):
[9. Statistics of a System Composed of Two Subsystems]
"The state of a closed system, perhaps the entire universe, is completely determined for all time if it is known at a given instant. According to Schroedinger's equation, a pure case represented by a psi function remains always a pure case. One does not immediately see any occasion for the introduction of probabilities, and our statistics definitions might appear in the theory as a foreign structure.
"We will see that that is not the case. It is true that the state of a closed system, once given pure, always remains pure. But let us study what happens when one puts into contact two systems, both originally in pure states, and afterwards separates them. [...]
"While the combined system I + II, which we suppose isolated from the rest of the world, is and remains in a pure state, we see that during the interaction systems I and II individually transform themselves from pure cases into mixtures.
"This is a rather strange result. In classical mechanics we are not astonished by the fact that a maximal knowledge of a composite system implies a maximal knowledge of all its parts. We see that this equivalence, which might have been considered trivial, does not take place in quantum mechanics. [...]
"The fact that the description we obtain for each of the two individual systems does not have the character of a pure case warns us that we are renouncing part of the knowledge contained in Psi(x,y) when we calculate probabilities for each of the two individuals separately. [...] This loss of knowledge expresses itself by the appearance of probabilities, now understood in the ordinary sense of the word, as expression of the fact that our knowledge about the combined system is not maximal."
[10. Reversible and Irreversible Evolution]
"I. Reversible or "causal" transformations. These take place when the system is isolated. [...]
"II. Irreversible transformations, which one might also call "acausal." These take place only when the system in question (I) makes physical contact with another system (II). The total system, comprising the two systems (I + II), again in this case undergoes a reversible transformation so long as the combined system I + II is isolated. But if we fix our attention on system I, this system will undergo an irreversible transformation. If it was in a pure state before the contact, it will ordinarily be transformed into a mixture. [...]
"We shall see specifically that measurement processes bring about an irreversible transformation of the state of the measured object [...] The transition from P to P' clearly cannot be represented by a unitary transformation. It is associated with an increase of the entropy from 0 to -k Sum_n |psi_n|^2 ln |psi_n|^2, which cannot come about by a unitary transformation."
[11. Measurement and Observation. The Act of Objectification]
"According to the preceding section, [the wave function after the measurement] represents a state of the combined system that has for each separate system, object and apparatus, the character of a mixture. [...] But of course quantum mechanics does not allow us to predict which value will actually be found in the measurement. The interaction with the apparatus does not put the object into a new pure state. Alone, it does not confer on the object a new wave function. [...]
"So far we have only coupled one apparatus with one object. But a coupling, even with a measuring device, is not yet a measurement. A measurement is achieved only when the position of the pointer has been observed. It is precisely this increase of knowledge, acquired by observation, that gives the observer the right to choose among the different components of the mixture predicted by the theory, to reject those which are not observed, and to attribute henceforth to the object a new wave function, that of the pure case which he has found."
They go on to discuss how "the observer establishes his own framework of objectivity and acquires a new piece of information about the object in question" while there is no change for us if we look at the "object + apparatus + observer" system from outside. The combined system will be in a pure state and we will have correlated mixtures for each subsystem.
London and Bauer say that the role played by the consciousness of the observer is essential for the transition from the mixture to the pure case. Cerf and Adami claim to do without the intervention of consciousness, but it's not clear how they explain the transition from the mixture to something else. They say things like the following, which doesn't look different from London and Bauer to me:
"The observer notices that the cat is either dead or alive, and thus the observer’s own state becomes classically correlated with that of the cat, although, in reality, the entire system (including the atom, the γ, the cat, and the observer) is in a pure entangled state."
From a very superficial look at the blog posts you linked to, I think I agree with some things (e.g. that most of these ideas have been around for quite some time), and I disagree with others (e.g. that there is no measurement problem), but I guess most of the discussion is metaphysical so it cannot be "wrong".
I think dodging the measurement problem is not solving it. The core of the problem is here:
"But, while QM predicts that you will be classically correlated, it does NOT (and cannot) predict what the outcome of your measurements will actually be."
QM does not predict that the measurement will have an outcome. The transition from that probability distribution to one definite result is the measurement problem.
Weinberg, one of the greatest living theoretical physicists, last year wrote a piece explaining why he's not satisfied with any interpretation of QM. He distinguishes two main approaches: "realist" and "instrumentalist". Here is the article and some comments:
He gives more details in section 3.7 of his "Lectures on Quantum Mechanics" (Interpretations of Quantum Mechanics) which ends as follows:
"My own conclusion is that today there is no interpretation of quantum mechanics that does not have serious flaws. This view is not universally shared. Indeed, many physicists are satisfied with their own interpretation of quantum mechanics. But different physicists are satisfied with different interpretations. In my view, we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is only a good approximation."
The last chapter in Commins' "Quantum Mechanics: An Experimentalist’s Approach" (The Quantum Measurement Problem) gives a nice description of the problem and the proposed solutions which fall into three categories:
1.- there is no problem. [Decoherence doesn't completely avoid the problem; it makes the off-diagonal elements of the density matrix zero but the issue of going from a mixture to a specific outcome remains.]
2.- the interpretation of the rules must be changed but this can be done in ways that are empirically indistinguishable from the standard theory. [De Broglie-Bohm pilot-wave theory postulates "real" particles compatible with QM predictions, but currently only works well in the nonrelativistic setting.]
3.- deterministic unitary evolution is only an approximation. [Adding non-linear and stochastic terms produces "spontaneous localization"; tuning the parameters, we can explain both the microscopic "unitary" and the macroscopic "non-unitary" behaviours.]
"In conclusion, although many interesting suggestions have been made for overcoming the quantum measurement problem, it remains unsolved. We can only hope that some future experimental observation may guide us toward a solution."
[By the way, I sent you an email but I don't know if you got it.]
"Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse."
Indeed, time itself is an emergent phenomenon:
"[T]ime is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks – or of any objects usable as clocks) into the same history."
Here is Weinberg's fundamental problem:
"[T]he vista of all these parallel histories is deeply unsettling, and like many other physicists I would prefer a single history."
Well, I'm sorry Steven, but you can't have it. That's just not how the world is, and wishing it were so sounds as naive and petulant as an undergrad wishing Galilean relativity were true, or Einstein wishing that particles really do have definite positions and velocities at all times. Yes, it would be nice if all these things were true. But they aren't.
I don’t know if your point is that the universe splits more often than that, or never. Note that Weinberg is referring to the usual MWI formulations; I don’t know the precise definition of the “multiverse theory” you mention. And actually he says that “the fission of history would not only occur when someone measures a spin. In the realist approach the history of the world is endlessly splitting; it does so every time a macroscopic body becomes tied in with a choice of quantum states.”
This is Deutsch’s description in a recent paper (https://arxiv.org/abs/1508.02048):
“when an experiment is observed to have a particular result, all the other possible results also occur and are observed simultaneously by other instances of the same observer who exist in physical reality – whose multiplicity the various Everettian theories refer to by terms such as ‘multiverse', ‘many universes', ‘many histories' or even ‘many minds'. “
The “plain English” description in the book you cite lacks any rigour and can be confusing if you know a bit of QM because it postulates that the universe splits already (but in what basis?) in the cases where single-universe QM works fine (the wave function can describe a pure quantum state which is a superposition). These “soft splits” can be undone and allow for interference (corresponding to the unitary evolution of the Schroedinger equation).
But, he says that “interference can happen only in objects that are unentangled with the rest of the world” and “once the object is entangled with the rest of the world [...] the histories are merely split further”. These “hard splits” are the branches in the MWI. And correspond perfectly to Weinberg’s description: “[the world splits] every time a macroscopic body becomes tied in with a choice of quantum states.”
In Deutsch’s multiverse the number of universes doesn’t grow because there is always an uncountably infinite number of them, but I’m not sure this makes the theory much better.
Unfortunately I don’t have time to continue this interesting discussion in the near future. At least you have to concede that in an infinite number of alternative universes you found my arguments convincing. That’s good enough for me.
"It accounts for the apparent contradiction between quantum theory, which says that entropy is conserved in unitary transformations, and the apparent increase in entropy that arises from the randomness in [...]"
"randomness is not an essential cornerstone of quantum measurement but rather an illusion created by it."
So is it random or is it not, in your view?
It depends on your perspective. From the perspective of a classical entity, yes, it's random. From the perspective of the quantum wave function, no, it's not.
Both links are identical!
A good chunk of physicists (not all) would agree that the current understanding is unsatisfactory, but it's unclear whether now is the "right time" to research that question. It is unclear whether we have the experimental tools to probe the consequences of the different models, and whether different approaches produce different predictions in some testable scenario.
Of course, there is an active field of research probing this and related questions. Different physicists in there have different ideas, depending on how they interpret quantum mechanics. (See other responses in the thread for a sampling of those). But there is no satisfactory agreed-upon textbook answer.
Personally, I find some aspects of the decoherence perspective appealing.
Weird philosophy aside, this does look like a good explanation of the Bell inequality.
The only information here is that you're annoyed and disapprove. That's not enough to make a post good. If you'd be willing to transmute annoyance into information others can learn from, that'd be great. If you don't want to, that's fine, but then please just don't post anything. When commenters use this forum merely as a venting outlet, quality suffers.
Yet for inane reasons the Copenhagen interpretation is still ALL that is taught to the next generation of physicists, who in turn teach it to their students. Only the weird physics students (like me) who go "wtf?" in class and refuse to believe the teacher go out and learn pilot-wave theory (my preference) or the many-worlds interpretation to re-inject some sanity into the world.
AFAIK, this is an exaggeration to the point of incorrectness.
The Copenhagen interpretation may be a poor interpretation which introduces absurd complicated ideas rather than the simpler ideas of other interpretations, but that doesn't make it "flat out wrong".
Why? Is there a formal proof of that claim?
An infinite sequence of epicycles could be used to accurately model any orbital path, in a similar sense to how a Taylor series can represent a well-behaved function as an infinite series of polynomials. It's not wrong in the mathematical sense, but rather the philosophical: a needlessly complex theory that is hard to work with and which provides no advantages or insight over the simpler theory is declared wrong, even if it provides, or could provide, the same predictions.
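The analogy can be made concrete. A quick sketch (mine, purely illustrative): a truncated Taylor series approximates sin to any accuracy you like by piling on terms, much the way piling on epicycles refines an orbit, without that making the series a good theory of what sin "is".

```python
import math

def sin_taylor(x, terms):
    """Partial sum of the Taylor series for sin(x) around 0."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# More epicycles -> better orbit; more terms -> better sine
for terms in (1, 3, 6):
    print(terms, abs(sin_taylor(1.0, terms) - math.sin(1.0)))
```

Each extra term shrinks the error by orders of magnitude, yet nobody would call the partial sums a deeper account of the sine function than the function itself.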
So it is with quantum mechanics. The standard Copenhagen interpretation of QM requires notions of observers and mysterious faster-than-light transfer of state which is really hard to reconcile with the modern scientific view of the world. The mystery surrounding it (in the religious sense) has led to disproportionately many cranks who misinterpret the theory into statements about "quantum consciousness" or other new-age nonsensical tie-ins.
However these issues arise to a lesser extent with the many-worlds interpretation, and not at all with pilot wave theory. The former is nothing more than a reinterpretation of the same equations, and the latter is a different formulation that is nevertheless mathematically identical as far as it has been worked out. If we had started with many-worlds, or better yet pilot-wave, then we wouldn't have a century of people growing up with 2nd-hand tales about how the world is governed by a mystical and incomprehensible theory of matter that even physicists don't understand. Which is utter bollocks.
I didn't say it is mathematically wrong
> It is "wrong" by Ockham's razor.
You do know that there are dozens of mathematical and formal versions of Ockham's razor.
Show me a formal version in which it is wrong. Otherwise arguing about this is like discussing politics, where the loudest voice wins. Using English to debate the philosophical foundations of physics after almost a hundred years of Hilbert, Gödel etc. is what is wrong.
> Do it yourself.
This is why the Copenhagen interpretation still lives on and will live on if its detractors are too lazy to formalize things decently.