The blog article bears no resemblance to the actual math, but the issue is the same as always: if you assume locality, you can have hidden variables. Pusey et al assume separability (i.e., no entanglement). Hardy does so as well. It's put in weakly and carefully, but it's still assuming locality and showing that you can have hidden variables.
Everyone gets so stuck on the wave function. Look, assume that your physical system's state is described by a vector in a linear space, that you can project onto subspaces to describe subsets of the state, and that the space has a metric and is complete (every Cauchy sequence in the space converges to something in the space). If you then impose a smooth dynamics on that space, there will be an equivalent differential equation, because there is only one separable infinite-dimensional Hilbert space up to isomorphism, and L2, the space of square-integrable functions, happens to be one concrete realization of it.
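To make the "smooth dynamics implies an equivalent differential equation" point concrete, here's a finite-dimensional toy I put together (the dimension and the Hermitian generator H are arbitrary choices of mine, not anything from the article): a smooth unitary flow exp(-iHt) acting on a state vector is the same thing as the equation i dpsi/dt = H psi.

    # Finite-dimensional toy: a smooth unitary flow U(t) = exp(-iHt) on the state
    # space is equivalent to the differential equation i dpsi/dt = H psi.
    import numpy as np
    from scipy.linalg import expm

    dim = 4
    rng = np.random.default_rng(0)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (A + A.conj().T) / 2                      # any Hermitian generator will do
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0

    t, dt = 0.5, 1e-5
    psi_t = expm(-1j * H * t) @ psi0              # the "smooth dynamics" picture
    # central-difference check that i dpsi/dt = H psi at time t
    dpsi = (expm(-1j * H * (t + dt)) @ psi0 - expm(-1j * H * (t - dt)) @ psi0) / (2 * dt)
    print(np.allclose(1j * dpsi, H @ psi_t, atol=1e-6))   # True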
The real question is why the theory is formulated in a linear, complete metric space. Someone posted a lecture by Scott Aaronson a few days back that had some interesting pointers on that.
Seriously, if the wave function bothers you, just don't use it.
>if you assume locality, you can have hidden variables.
Are you sure you don't mean if you assume non-locality? PBR tightens things up a bit beyond Bell, with the requirement that the wave function itself be real.
Also, correct me if I'm wrong, but I believe the assumption is not that there is no entanglement at all, but rather that independently prepared systems are not entangled (preparation independence).
This is a very unimpressive article. I am surprised to see Colin link to it.
In particular, while the paper the article references (in a short passage tucked into the history lesson) is important, it does not actually change things much.
It rules out those who think the wave function is a distribution over an underlying reality, but it does not affect those who believe there is no hidden reality at all, the so-called Quantum Bayesians (imo the clearest, though not necessarily the best, framework). Nor does it affect the Many Worlders, who think the wave function is real and there are no hidden variables. Non-local hidden variable theories with real wave functions are also still acceptable. So things are nowhere near as clear-cut or settled as the article would have you think.
And even those who believe the wave function is an epistemic tool for describing an underlying reality can still abandon Bell's framework via exotic escape routes like "retrocausality". See Matt Leifer's blog for more on that: http://mattleifer.info/2011/11/20/can-the-quantum-state-be-i...
I don't get it. Either quantum waves correctly describe particle physics, or they don't. If the waves behave correctly in all observable ways, how can we judge whether they are "actually" what's going on, rather than some mathematically different – but observably identical – process?
Is it that quantum waves were thought to overfit the observations which led to QM, and now these folks have made a prediction, using waves, of something heretofore unobserved, and have in fact observed the predicted behavior? In that case I would say that the refinement of QM involving quantum waves has been further shown to be consistent, and in fact necessary, to describe quantum theory, not that quantum waves have been shown to be "real".
It's not well explained, but I think it's about whether or not physical theories with "hidden variables" are possible.
The idea is that maybe the uncertainty of quantum mechanics is a result of us simply not understanding things right, and there's actually some deeper "reality" (a better theory that we don't have yet) without the uncertainty. http://en.wikipedia.org/wiki/Hidden_variable_theory
I thought this was already disproved (or it may be that hidden variable theories can only work if physics is non-local, which is something people don't like), but it's been years since I understood / tried to understand this, and it's horribly subtle, so I guess there's more detail than I ever knew...
Local hidden variable theories are ruled out by violations of Bell's inequality. https://en.wikipedia.org/wiki/Bell_test_experiments#A_typica... Basically, quantum systems are better (at certain statistical correlations) than any classical system could be. The only classical-ish way to reproduce those correlations is to transmit information faster than light.
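If it helps, here's a quick numerical sketch of the CHSH form of Bell's inequality (the angles are the standard textbook choice; the script itself is just my illustration): any local hidden variable model keeps |S| <= 2, while the singlet-state correlation E(a, b) = -cos(a - b) reaches 2*sqrt(2).

    # CHSH check: local hidden variable models obey |S| <= 2; the quantum
    # singlet-state correlation pushes |S| up to 2*sqrt(2).
    import numpy as np

    def E(a, b):
        return -np.cos(a - b)      # correlation for spin measurements at angles a, b on a singlet

    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S), 2 * np.sqrt(2))  # ~2.828 vs. the classical bound of 2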
This is exciting to me because it seems to make the Everett "Many Worlds" view inevitable: if A) the wave function exists (which this article seems to indicate), and B) it never collapses spontaneously (and 50 years of research into this would seem to say it does not), then one necessarily gets "many worlds", i.e., a large number of co-existing universes inhabiting an N-dimensional Hilbert space; or, to put it differently, there are worlds in which Schrodinger's cat lives, and ones where it does not.
All very cool, but I still am not planning to sign up for the quantum suicide experiment to test for quantum immortality just yet. ;)
Only tangentially related, but I think it is very unfortunate that Everett's interpretation is called "Many Worlds". Talking about "Many Worlds" paints a picture of parallel universes that one can travel between. That makes no sense, and it is not what Everett says.
I was introduced to quantum mechanics from a TCS point of view, i.e. via quantum computing. After some time of getting used to the mathematics, it seemed quite natural to me to suppose that the universe "really is" a vector in a Hilbert space, without any spontaneous collapses, and so on. This actually caused me to reject the notion that there are "Many Worlds". After all, it's all just components of a single vector describing our universe. It's a single world that just happens to work according to rules that are unintuitive relative to our everyday experience (and the real mystery lies in how consciousness works, but that's a mystery even without QM).
So I was really surprised when I found out that what is meant by "Many Worlds" is almost exactly how I had interpreted things as well, and to this day it feels to me as if the name, coined as an attempt to popularize this interpretation, dilutes a proper understanding of it.
Under the Many Worlds interpretation, the observations we would otherwise explain in terms of waveform collapse are due to decoherence through entanglement. The argument for Many Worlds is that since our observations can be explained[1] in terms of other well-understood properties of quantum systems, properties we even have equations defining, there's no need for this "waveform collapse" hypothesis, which isn't even mathematically defined: we don't have any equations that tell us when a collapse occurs.
The idea, if you're not familiar with it, is that when a photon hits a partial mirror, part of the waveform goes one way and part goes another way. When scientists conduct an observation of the photon, they become entangled with it, and similarly there are two sections to the scientist+photon waveform. But because the mass of the waveform is now really humongous, you aren't going to be able to observe quantum effects the way you do with a single photon.
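Here's a two-qubit cartoon of that paragraph (my own sketch; modelling the scientist as a single "ready / saw it" qubit is obviously a toy, but the entanglement algebra is the same): after the observation, the photon's reduced state has lost its off-diagonal terms, so no interference is visible to the observer.

    # Photon path + scientist as two qubits; decoherence of the photon via entanglement.
    import numpy as np

    zero = np.array([1, 0], dtype=complex)
    one = np.array([0, 1], dtype=complex)

    # Photon after the 50/50 mirror: equal superposition of the two paths.
    photon = (zero + one) / np.sqrt(2)
    print(np.round(np.outer(photon, photon.conj()), 3))    # off-diagonals present: interference possible

    # Observation entangles scientist with photon: |0>|ready> -> |0>|saw 0>, |1>|ready> -> |1>|saw 1>.
    joint = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
    rho_joint = np.outer(joint, joint.conj())

    # The photon on its own (partial trace over the scientist) now looks like a classical mixture.
    rho_photon = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(np.round(rho_photon, 3))                          # off-diagonals are gone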
[1] Except that alone doesn't tell us where the Born statistics come from.
The fact that the likelihood of you seeing your instruments report a particle in a given place is proportional to the square of the waveform's magnitude at that place.
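In symbols, that's just the standard Born rule:

    P(x)\,dx = |\psi(x)|^{2}\,dx, \qquad \int |\psi(x)|^{2}\,dx = 1.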
What I mean is that it does not collapse "all by itself"; rather, it needs environmental interference in order to "collapse" from the observer's point of view. In the many-worlds view, though, it never really collapses; it only appears to, from the observer's point of view, due to environmental interference.
I got turned off of this stuff in college. The whole probability wave business has always smelled like smoke and mirrors to me. Schrodinger's cat has always smelled to me as well ... especially a few days after the experiment, even if I didn't open the box...
I like this Occam's razor approach. No more magic. There's a field there. We don't yet understand it, but it's there, affecting things, and it's our job to actually figure out what it is, not wave our hands about probabilities. This feels real to me. The other interpretation did not.
The hardest task of quantum mechanics is learning that the intuitions about the universe baked into you by being human are wrong. You never actually escape them, but you do learn to suppress the naive physics hardwired into your brain and slowly and awkwardly simulate a better understanding using another part of your brain.