Note that, contrary to popular understanding, it isn't even established that quantum mechanics is indeterminate, i.e. not subject to strict cause and effect.
It's indeterminate according to the mainstream Copenhagen interpretation, but de Broglie–Bohm theory [1] ("pilot wave theory") is an interpretation that is entirely deterministic, and its assumptions result in exactly the same final equations as the Copenhagen ones. In it, every particle always has a definite position and momentum, and there's no notion of wave collapse or philosophical trickiness about what constitutes an "observer". For more on the historical reasons why Copenhagen became mainstream and de Broglie–Bohm didn't, see [2].
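For the curious, the whole theory fits in two equations (this is the standard presentation, also given in [1]): the wavefunction obeys the usual Schrödinger equation, and the actual particle positions Q_k are carried along by it via the guidance equation:

    i\hbar\,\partial_t \psi = -\sum_k \frac{\hbar^2}{2 m_k} \nabla_k^2 \psi + V\psi
    \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1, \ldots, Q_N, t)

If the initial positions are distributed as |\psi|^2, they stay so distributed ("quantum equilibrium"), which is why the theory's statistical predictions match the standard ones.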
I think you are abusing language here, probably not intentionally, and it distracts from the actual issue. You seem to understand the difference, but for those who don't, here's a crucial distinction that laypeople often overlook:
The Schrödinger (Dirac) equation itself is deterministic. Given a wavefunction at time t = t0, you can calculate its evolution to time t = t1. What is indeterminate is the outcome of observing an observable. The distribution of observations is entirely determined; only the result of a single observation is indeterminate. In the limit of infinite measurements, you would recover the probability distribution, which is itself determinate.

The various "interpretations" concern themselves with what happens at the time of measurement. For example, the Copenhagen interpretation says that the wavefunction, which was evolving deterministically, mutates into a new wavefunction which itself evolves deterministically. This act of going from one to the other is considered indeterminate and is given the name "collapse". However, the dynamics of each wavefunction is entirely determinate.
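To make the distinction concrete, here's a minimal numerical sketch (a hypothetical two-level system with a toy Hamiltonian; the names and values are illustrative, not from any particular experiment):

    # Deterministic wavefunction evolution vs. indeterminate single outcomes.
    import numpy as np
    from scipy.linalg import expm

    hbar = 1.0
    H = np.array([[0.0, 1.0], [1.0, 0.0]])      # toy Hamiltonian
    psi0 = np.array([1.0, 0.0], dtype=complex)  # wavefunction at t = t0

    # Schrödinger evolution is deterministic: psi(t1) is fixed by psi(t0).
    U = expm(-1j * H * 0.7 / hbar)
    psi1 = U @ psi0

    # The *distribution* of outcomes is also fixed (Born rule)...
    probs = np.abs(psi1) ** 2
    probs = probs / probs.sum()                 # guard against float rounding
    print(probs)                                # identical on every run

    # ...but each single observation is random; only infinite repetition
    # recovers the determinate distribution.
    samples = np.random.choice([0, 1], size=100_000, p=probs)
    print(np.bincount(samples) / len(samples))  # converges to probs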
I'm a big fan of epistemological interpretations of QM. Everything becomes far less mysterious when it is a theory of knowledge, IMO. Wavefunction collapse? No wonder it collapsed: it represents your knowledge and you just observed the system.
That (to my complete layman's eye) seems reasonable, but is it really much more than shuffling labels around? It seems the 'deeper' mystery remains: what is going on at the fundamental level of reality? (Or, in other words, why do our observations have these properties?) Whether or not that question is answerable -- or even meaningful -- I think we'll always be driven to ask it.
I'm not super knowledgeable, but I love this series, Looking Glass Universe, on the subject [0,1]. Specifically, she goes into some common but wrong beliefs about Bohmian mechanics, including the belief that it has NO quantum weirdness. There is some - it's called contextuality. I don't know more than that.
It's just many-worlds style wave-function realism with some extra epicycles on top. Treating the wavefunction as physically real is perfectly reasonable. Treating the wavefunction as physically real and then assuming there are also pseudoclassical particles on top adds nothing except appealing to the confused - "oh no, the particles aren't in a superposition, the particles are just ordinary classical particles... all of whose properties are determined by the pilot wave... which is in a superposition. Totally different."
Not true: a lot of mathematical problems behave better in the Bohmian formulation (though other semiclassical approaches also exist).
As for the epicycles comparison, it's actually apt: epicycles could have given us the complex plane and Fourier analysis for free, had they been given the chance (and Copernicus actually used more of them than Ptolemy did). Incidentally, the same is true of the landscapes and parameter spaces that Many-Worlds necessitates - just as much of an additional layer, but with additional problems that are, in contrast, intractable by current math.
> Treating the wavefunction as physically real is perfectly reasonable.
What's your justification for that? I remember Arthur Eddington pointing out that it could be interpreted as modeling our limited knowledge of a quantum system just as well as its 'physical reality'. But there's obviously justification for the first one, while the second one appears to be a large unjustified leap.
Occam's razor. The wavefunction is the simplest construct that explains our experimental results (it appears in every interpretation to some degree or another, certainly in pilot wave theory where it is the pilot wave), so it's simplest to assume it just is what's really happening.
> I remember Arthur Eddington pointing out that it could be interpreted as modeling our limited knowledge of a quantum system just as well as its 'physical reality'.
Hidden variable approaches are incompatible with our experimental results unless you introduce some seriously undesirable properties (e.g. nonlocality).
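For anyone who wants to see the incompatibility in numbers: for the spin singlet, the standard quantum prediction for the correlation between measurements along angles a and b is E(a,b) = -cos(a-b), and the CHSH combination of four such correlations reaches 2*sqrt(2), while any local hidden variable model is bounded by 2. A quick sketch, using the standard angle choices that maximize the violation:

    # CHSH value for the singlet state vs. the local-hidden-variable bound.
    import numpy as np

    def E(a, b):
        # Quantum correlation for the spin singlet, measurement angles a, b.
        return -np.cos(a - b)

    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2, the bound for local realism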
I didn't say that wavefunction realism is the only reasonable approach, just that it's a reasonable approach. Pilot wave theory isn't - it introduces an extra layer of complexity justified by nothing more than human intuition (not even innate intuition, but the experience of classical mechanics).
> The wavefunction is the simplest construct that explains our experimental results (it appears in every interpretation), so it's simplest to assume it just is what's really happening.
You are addressing a straw man. I never said anything about an alternative to wavefunctions—I brought up how they should be interpreted. Additionally, your claim, "so it's simplest to assume it just is what's really happening" has no more justification to it than the first time you said it.
> Hidden variable approaches are incompatible with our experimental results ...
He wasn't talking about a hidden variable theory, just about how to interpret the wavefunction, holding all theoretical elements constant: does it model our knowledge of physical states, or the physical states themselves.
The first one is simpler because we already know we are observing the physical states through an intermediary and that our knowledge is limited; the second one introduces a new capability to the universe.
> I never said anything about an alternative to wavefunctions
Unless you're denying that physical reality exists at all, to suggest that the wavefunction isn't physical reality is implicitly to suggest that something else is.
> He wasn't talking about a hidden variable theory, just about how to interpret the wavefunction, holding all theoretical elements constant: does it model our knowledge of physical states, or the physical states themselves.
If you're assuming there are "physical states themselves" that the wavefunction is merely our knowledge of, that's practically the definition of a hidden variable theory.
> The first one is simpler because we already know we are observing the physical states through an intermediary and that our knowledge is limited; the second one introduces a new capability to the universe.
Nonsense. Observation is an ordinary physical process that happens inside the universe; anything that "observing" can do must already be something the universe is capable of.
I don't think the wavefunction is a physical object. It's just a mathematical abstraction to describe reality; there's no real justification to treat it as something physical?
I think in Pilot Wave Theory it is considered a physical wave of some sort.
> According to pilot wave theory, the point particle and the matter wave are both real and distinct physical entities (unlike standard quantum mechanics, where particles and waves are considered to be the same entities, connected by wave–particle duality). The pilot wave guides the motion of the point particles as described by the guidance equation.
> I don't think the wavefunction is a physical object. It's just a mathematical abstraction to describe reality; there's no real justification to treat it as something physical?
What do you consider justification to treat something as physical? Do you consider e.g. electrons "just a mathematical abstraction to describe reality", or do you consider them "something physical"? If so, why?
De Broglie-Bohm is probably not the correct theory, since (if I am not mistaken) it's not reconcilable with relativity, and has a host of other problems which led its original proponents to abandon it.
Copenhagen is also probably not correct, though it is still frustratingly mainstream. Copenhagen has the well known problem of lacking a physical definition of "observer", but it also has conflicts with delayed choice quantum eraser experiments (which suggest that the wavefunction never collapses), and contradictions that arise when you try to locate the proposed wave collapse in spacetime.
The Everett many-worlds interpretation is gaining favor, and has none of the problems of the other two pictures. In the Everett view, scientists and experiments and instruments and universes are ensembles of particles like any other, and enter superpositions upon interacting with other systems in superposition. In a sense, every possible outcome of a quantum measurement actually happens in due proportion to the probabilities described by quantum mechanics, but upon becoming entangled with the experiment result, the different outcomes can no longer influence or measure each other. It is the most ontologically minimal theory, since it proposes no other events or laws beyond the already-known laws of quantum mechanics.
The stock many worlds hypothesis has a couple of really big problems. For one, the information content of the multiverse expands exponentially towards infinity. Second, it basically makes the amplitude of the wavefunction meaningless, since everything that can happen, does. If you make the assumption that the measure of the multiverse is fixed and quantized, things become a bit less conceptually problematic, but that breaks the math. Regardless, it still has the fewest problems among current paradigms.
> For one, the information content of the multiverse expands exponentially towards infinity.
I don't agree. I would say that the biggest difference-- really, the fundamental difference-- between Everett and Copenhagen is that Everett says that the state evolution of the wave function is always unitary, while Copenhagen says that it's only unitary sometimes but not others (and don't ask about how to calculate which way it will act).
Under unitary state evolution the volume of phase space is preserved and no information is globally created or destroyed. So the "branching" picture of Everett is not precisely correct, though it is handy language for building intuition.
This should not be surprising because of complementary observables-- confining the state of one property will cause some other property to become proportionally unconfined, per Heisenberg.
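A toy illustration of the "no information is globally created or destroyed" point: a unitary never changes the spectrum of the global density matrix, hence never changes its von Neumann entropy. Sketch with a random state and a random unitary (purely illustrative):

    # Unitary evolution preserves the global state's spectrum (and entropy).
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8

    # Random pure state and random unitary (QR of a complex Gaussian matrix).
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

    rho_before = np.outer(psi, psi.conj())
    rho_after = np.outer(U @ psi, (U @ psi).conj())

    # Same eigenvalues before and after, so no global information change:
    print(np.linalg.eigvalsh(rho_before).round(10))
    print(np.linalg.eigvalsh(rho_after).round(10))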
In the Everettian interpretation, experiments do not have results (!) and it is not clear what probability even means, much less how the Born rule and observed statistics arise. That's because it is not a theory about our world, but about an imagined world where the only thing that exists is the psi function.
> every possible outcome of a quantum measurement actually happens in due proportion to the probabilities described by quantum mechanics
This cannot work, because those probabilities are defined by a basis in Hilbert space and that basis is defined by details of the experiment, which does not actually happen in the Everettian view.
I am not sure what you mean by "experiments do not have results." A measurement in Everett is the environment becoming entangled with the measured system, and a "result" is a particular component of the combined wavefunction in some basis.
> what you mean by "experiments do not have results."
Let the experiment measure spin projection s_y of an atom previously prepared in spin state |s_x+>. It is known empirically that this experiment has two possible mutually exclusive results, +1/2 and -1/2, with probability of each being 1/2. Standard quantum theory accepts this and uses mathematics to predict those probabilities from data on the mutual arrangement of preparator and analyser of the spin.
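For reference, here is the standard-theory calculation in question, done numerically in the s_z basis (a two-line Born-rule computation; notation as above):

    # Born-rule probabilities for measuring s_y on a particle prepared in |s_x+>.
    import numpy as np

    s = 1 / np.sqrt(2)
    x_plus  = s * np.array([1, 1],   dtype=complex)  # |s_x+> in the s_z basis
    y_plus  = s * np.array([1, 1j],  dtype=complex)  # |s_y+>
    y_minus = s * np.array([1, -1j], dtype=complex)  # |s_y->

    # P(+1/2) and P(-1/2) both come out to exactly 1/2:
    print(abs(np.vdot(y_plus,  x_plus)) ** 2)
    print(abs(np.vdot(y_minus, x_plus)) ** 2)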
Such predictions are difficult to make in Everettian theory, because that theory describes both the particle with spin and the particles forming the measuring apparatus in the same way. When that is done, the physical process of measurement can be mathematically described by the Schrödinger equation, which may seem like a win, but it has a big downside: all it gives is a ray in the big Hilbert space of particle plus apparatus at later times; it is not clear what this calculation result is good for. It is not the prediction of a real experiment's result; it is not even clear how to use it to calculate probabilities of those results. Certainly not by Born's rule - because what is the eigenfunction to be used in the Born formula? Such an eigenfunction is, in standard theory, determined by the classical settings of the apparatus, such as the orientation of the SG magnet. But there is no such thing in Everettian theory; there is just the psi function.
If I have two bases |O_up> and |O_down>, representing the observer in the state of having measured an up spin and a down spin respectively, what is wrong with taking <O_up|psi> and <O_down|psi> to find the fraction of Hilbert space occupied by either possibility?
It does not work as written. How is |O_up> defined in terms of the basis of the global Hilbert space? There is an infinity of directions in space which we could choose as the defining axis of the ket |O_up>. You could calculate the inner product for any of them. Does that mean that the inner product is typically zero, and that to get a non-zero probability one has to talk not about the probability of possible results of measurement, but about probability on the space of possible measurement setups? If so, why bother with the concept of quantum measurement at all? Why not just say that any configuration of the world from some measurable set has probability int |psi|^2 dq? That would be the other, non-projection Born rule (or interpretation of psi), which doesn't have the preferred-basis problem and does not need a basis to be chosen by the classical apparatus.
If you take the absolute value of the probability amplitude and square it, you get something that works very much like a probability. But there is nothing in many-worlds theory that says this quantity has any special significance.
I still either don't see why you say that, or don't follow what you are trying to say.
The Born rule is a fundamental feature of all quantum theories, and I don't see why Everett is an exception.
If you want an intuitive picture of what the squared amplitude means, it seems that it would suffice to think of it as the fraction of phase space which evolves into the configuration associated with that amplitude.
In this instance, the popular understanding corresponds to the latest research better than realist theories like pilot-wave theory do. Proponents of realist interpretations of QM have relied for decades on identifying possible classical channels of relativistic communication, or on theorizing the existence of hidden variables.
The double slit experiment has been used in various forms to, over time, eliminate these channels. I'm not a physicist, but the language used in quite a few papers leads me to believe that these channels have been closed, and the realists are out of remaining options for maintaining realism.
An experimental test of non-local realism, 2007

Wherein the authors show, using previously untested correlations between two entangled photons, that maintaining realist interpretations of quantum mechanics (such as pilot-wave theory) as a fundamental concept requires the introduction of locality-defying 'action at a distance'.
Quantum erasure with causally disconnected choice, 2012
Wherein the authors, using an interferometric quantum eraser experiment, eliminate past and future communication channels by enforcing Einstein locality via a mechanism that I don't understand yet. They conclude that, "No naive realistic picture is compatible with our results because whether a quantum could be seen as showing particle- or wave-like behavior would depend on a causally disconnected choice. It is therefore suggestive to abandon such pictures altogether."
Experimental non-classicality of an indivisible quantum system, 2011
I understand even less of this one; it looks like it has to do with performing the kinds of simultaneous measurements of various properties that one would expect to be able to perform if realist interpretations were true. They conclude, "Not only is a single qutrit the simplest system in which such a contradiction is possible, but, even more importantly, the contradiction cannot result from entanglement, because such a system is indivisible, and it does not even allow the concept of entanglement between subsystems."
UPDATE:
Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km, 2015
"For more than 80 years, the counterintuitive predictions of quantum theory have stimulated debate about the nature of reality. ... In the past decades, numerous ingenious Bell inequality tests have been reported. However, because of experimental limitations, all experiments to date required additional assumptions to obtain a contradiction with local realism, resulting in loopholes. Here we report on a Bell experiment that is free of any such additional assumption and thus directly tests the principles underlying Bell's inequality. ... This result rules out large classes of local realist theories, and paves the way for implementing device-independent quantum-secure communication and randomness certification."
I'm not aware of any significant challenges to these studies.
> Wherein the authors show, using previously untested correlations between two entangled photons, that maintaining realist interpretations of quantum mechanics (such as pilot-wave theory) as a fundamental concept requires the introduction of locality-defying 'action at a distance'.
Not sure what the big revelation is supposed to be here. Everyone knows that Bohmian mechanics is non-local, that's exactly the point!
> The double slit experiment has been used in various forms to, over time, eliminate these channels. I'm not a physicist, but the language used in quite a few papers leads me to believe that these channels have been closed, and the realists are out of remaining options for maintaining realism.
Why do you think this disproves pilot wave theory?
> Physicists also compulsorily reject Bohm’s construction because it explicitly builds nonlocality into its framework—even though violations of Bell’s inequality have conclusively shown that the stage of our universe is nonlocal. This is perplexing. Nonlocality is unavoidable in any theory that recovers the predictions of quantum theory. Therefore, any criticism of a theory that displays Nature’s nonlocal feature in an obvious way is both unfounded and counterproductive. Despite this, Bohm’s inherent explication of nonlocality continues to be obnoxiously mistaken as a strike against it instead of for it.
I might not be using the same definitions as you are or maybe I'm getting something mixed up, so I'd be glad if you could elaborate on what exactly you mean by "determinism", "realism" and especially "locality".
(In my book, the Copenhagen interpretation is a non-realistic(∆), non-deterministic and local theory, which would contradict your statement.)
(∆) Assuming, of course, that the wave function is not an object of reality, as I think is standard.
Many would argue that Copenhagen is not local, but it's very hard to even define locality if you are genuinely non-realist about everything.
Regardless, if it were possible to be "local + non-deterministic", many of us would be fine with that. But it's not - Bell rules out "locality + realism", regardless of whether the realistic theory is deterministic or non-deterministic.
Who? I believe the point of view that the Copenhagen interpretation is local is the standard one.[1]
> but it's very hard to even define locality
How so? There are various definitions of locality—in terms of no faster-than-light transmission of information, in terms of commutators of field operators at spacelike distances vanishing as well as in terms of C* algebras—and the Copenhagen interpretation fulfills them all.
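For instance, the field-operator version (microcausality) is the single condition

    [\hat{\phi}(x), \hat{\phi}(y)] = 0 \quad \text{for } x - y \text{ spacelike,}

i.e. measurements at spacelike separation cannot influence each other's statistics, and standard QFT satisfies it by construction.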
> if you are genuinely non-realist about everything
I'm being non-realist only about things whose existence we can't prove, e.g. the wave function.
> Regardless, if it were possible to be "local + non-deterministic", many of us would be fine with that. But it's not - Bell rules out "locality + realism"
I don't see how this disproves anything of what I said. While I think you're right[2] about the fact that the violation of Bell's inequality rules out local realism—irrespective of determinism—, I already said that, in my book, Copenhagen is not a realistic theory because the wave function is not an object of reality.
[2]: "However, Fine's theorem shows that, this deterministic assignment of properties is not required to prove Bell's theorem. This is because the set of statistical distributions for measurements on two parties, once locality has been assumed, are independent of whether or not determinism is also assumed." (https://en.wikipedia.org/wiki/Principle_of_locality#Local_re...)
Where did I invoke mysticism? And where did I end the quest for answers?
I said "in my book" for a reason. You might of course argue that the wave function is real, which would make the Copenhagen interpretation a realistic, albeit non-local theory (as the collapse of the wave function would happen instantaneously everywhere). I'd be very happy to discuss this but please bear in mind that this was not what the discussion was about.
Because pilot wave theory assumes realism and determinism[1], and the authors of the first paper I listed conclude that their result "suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned."
Their conclusions could be wrong, of course, but if they are not, according to the authors' interpretation physicists should abandon realist theories like pilot-wave theory.
It would help if these weren't all from the same single source: Zeilinger's Vienna group and friends.
On the other hand, the Bohmian formalism works seamlessly mathematically, which cannot be said of standard quantum mechanics. Even this year's Fields medalist, Figalli, has a paper on how well things behave in Bohmian land.
Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km, 2015
https://arxiv.org/abs/1508.05949v1
"...This result rules out large classes of local realist theories, and paves the way for implementing device-independent quantum-secure communication and randomness certification. "
This isn't relevant because Bohm's theory isn't local. There are much stronger results like PBR ruling out a large class of non-local realist theories, but that also doesn't include pilot wave theory.
No known experiment can rule it out, because its predictions equal those of Copenhagen.
This is something different. Zeilinger paints himself as disproving realism alone, and he has a well known philosophical stance/bias here. Testing local realism, however, is about theories that are both local and realist, and pilot waves aren't local to begin with.
Relativistic extensions aren't pleasant in standard QM either.
Bohmian formalism would require something perhaps no less aesthetically repulsive, but surely more alien: a preferred foliation of spacetime to avoid a preferred reference frame. It turns out it wouldn't have to be an artificial choice, as it can be derived from the wave function itself: https://arxiv.org/abs/1307.1714 (that would also mean this preference exists within QM irrespective of any interpretation).
Sadly, this seems to be an issue that is mired in a lot of turf war within the physics community. As someone who is mostly interested in its history from my armchair, I think it's important to put this out there:
The assertion that Bell's inequality shut down Bohmian mechanics is absolutely false. This claim has been floating around since the 60s. The biggest disagreement came from ... J.S. Bell himself. He was a proponent of Bohmian mechanics and didn't agree that he had dismantled it.
==== The criticism ====
> Recently, however, physicists more commonly cite the Kochen-Specker Theorem and, more frequently, Bell’s inequality in support of the contention that a deterministic completion of quantum theory is impossible. We still find, a quarter of a century after the rediscovery of Bohmian mechanics in 1952, statements such as these:
> The proof he [von Neumann] published …, though it was made much more convincing later on by Kochen and Specker (1967), still uses assumptions which, in my opinion, can quite reasonably be questioned. … In my opinion, the most convincing argument against the theory of hidden variables was presented by J.S. Bell (1964). (Wigner [1976] 1983: 291)
==== Bell's Take ====
> There was, however, one physicist who wrote on this subject with even greater clarity and insight than Wigner himself: the very J. S. Bell whom Wigner praises for demonstrating the impossibility of a deterministic completion of quantum theory such as Bohmian mechanics. Here’s how Bell himself reacted to Bohm’s discovery:
> But in 1952 I saw the impossible done. It was in papers by David Bohm. Bohm showed explicitly how parameters could indeed be introduced, into nonrelativistic wave mechanics, with the help of which the indeterministic description could be transformed into a deterministic one. More importantly, in my opinion, the subjectivity of the orthodox version, the necessary reference to the “observer”, could be eliminated. …
> But why then had Born not told me of this “pilot wave”? If only to point out what was wrong with it? Why did von Neumann not consider it? More extraordinarily, why did people go on producing “impossibility” proofs, after 1952, and as recently as 1978? … Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show us that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice? (Bell 1982, reprinted in 1987c: 160)
> Wigner to the contrary notwithstanding, Bell did not establish the impossibility of a deterministic reformulation of quantum theory, nor did he ever claim to have done so. On the contrary, until his untimely death in 1990, Bell was the prime proponent, and for much of this period almost the sole proponent, of the very theory, Bohmian mechanics, that he supposedly demolished.
I wonder if anyone here would be so kind as to explain Schrödinger's argument. I'm not grasping it. Is he making a fundamental point about the degrees of freedom an electron can have, delimiting ideas about how a brain could process thought with the help of self-willed electrons? Or is it confined to whether an electron itself can have thought? What point is he making in the debate about whether humans are automata? I'll read it again, but it's subtle, and I might not understand quantum mechanics sufficiently to understand it.
He's saying that just because the movements of particles are not predictable doesn't mean you all of a sudden get to claim "free will"-- the ability to somehow impose an arbitrary desire in the chain of causality between your sense experiences and your actions. In order for you to have "free will" in this classical sense, you'd have to control the "slit" the electrons "swerve" through. But you don't. They just act randomly. They're not the things that impose your will. Your will is still an illusion.
I think he would not like it. The theorem is underwhelming and uses misleading, attention-seeking terminology. A more appropriate way to state it is, I think, this: if the experiment's settings violate determinism, its results have to violate determinism as well. To the authors' credit, they say as much in the introduction themselves - "I saw you put the fish in".
If electrons are the lowest level of granularity when it comes to determining state in computers, then maybe it is the same in the mind?
I imagine a silicon computer with the trillions of grooves that the mind has in biological form would find itself to have a consciousness just the same. Remove the silicon from the computer or the biomass from the brain, leaving just the electrons behind, and you would have a network that, I would argue, gives rise to the seat of consciousness.
From what I understand, "seeing" is an active process, since it needs light. If we want to observe small things, we need more precise light, which means more energy. The problem is not that small particles somehow become aware when we observe them, but that the observation itself adds energy to the system and thus alters it. Is this correct?
"Observing" means entangling with photons which have previously been entangled with the observed object. Thus, viewing an object means changing its state in such a way that you're temporarily also part of its state, forming an entropy gradient to gain energy/information from it.
[1] https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory
[2] http://qr.ae/TUNXvC