Clearing up Mysteries - The Original Goal http://bayes.wustl.edu/etj/articles/cmystery.pdf
" ...we must keep in mind that Einstein's thinking is always on the ontological level; the purpose of the EPR argument was to show that the QM state vector cannot be a representation of the "real physical situation" of a system. Bohr had never claimed that it was, although his strange way of expressing himself often led others to think that he was claiming this.
From his reply to EPR, we find that Bohr's position was like this:
"You may decide, of your own free will, which experiment to do. If you do experiment E1 you will get result R1. If you do E2 you will get R2. Since it is fundamentally impossible to do both on the same system, and the present theory correctly predicts the results of either, how can you say that the theory is incomplete? What more can one ask of a theory?"
While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form: "What is really happening when - - - ?", forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built."
"Biologists have a mechanistic picture of the world because, being trained to believe in causes, they continue to search for them and find them. Quantum physicists have only probability laws because for two generations we have been indoctrinated not to believe in causes - and so we have stopped looking for them. Indeed, any attempt to search for the causes of microphenomena is met with scorn and a charge of professional incompetence and "obsolete mechanistic materialism." Therefore, to explain the indeterminacy in current quantum theory we need not suppose there is any indeterminacy in Nature; the mental attitude of quantum physicists is already sufficient to guarantee it."
However, one reason that physicists don't spend too much time thinking about what quantum theory really means (besides laziness and bad science) is that we know quantum theory is not the ultimate theory of reality. Why try to force ontology on a theory that cannot even answer all empirical questions without ad hoc additions (the Standard Model of particle physics) or a forced marriage with another theory (general relativity)? We are all waiting for a better theory of physics. Of course, rewriting quantum theory in different forms, like this attempt, might help in getting there.
Note that Einstein developed his theories of relativity from very scant evidence, just by contemplating the aesthetics of what physics should be. I think there's a big lesson to be learned here: some heuristics are generalizable.
The reason most physicists don't think about the philosophical underpinnings has more to do with the way quantum mechanics is taught. Many of the founders were interested in the philosophical questions, but after WWII you can see a sharp shift toward hardcore calculation, and the philosophy more or less disappeared. Here is a great talk that discusses this very point. That's unfortunate for quantum mechanics, because understanding the philosophical underpinnings is critical to understanding what quantum mechanics does and doesn't say. But, thanks to things like quantum computing and better experiments, interpretation research is becoming more mainstream: there are experiments being planned (at least as of when I was heavily involved in the area, around 2010) that would be able to test some of the claims made by different interpretations.
>We are all waiting for a better theory of physics. Of course rewriting the quantum theory in different forms, like this attempt, might help in getting there.
To me this is a contradiction. If we are all waiting for a better theory, then why are we rewriting the current one? A critical part of every interpretation I've encountered is making different ontological claims about quantum mechanics. Take the Bohmian one, which posits a guiding wave. Sure, we can make some of the mathematics nicer, but we can't go very far without explaining what this 'guiding wave' is, which is itself rather ad hoc. And how do we reconcile it with Bell's theorem on hidden variables? To me, we can't really begin rewriting quantum mechanics without inevitably running into ontological statements.
But I still agree with your premise. I believe that by pushing the limits of quantum mechanics both scientifically and philosophically, we will get a better grasp of what to look for, and where to look, for a deeper theory. Case in point: wavefunction collapse. Some might consider it a huge problem for quantum mechanics, but if you take a different ontological view, wavefunction collapse isn't that interesting or important. So maybe collapse isn't as important as it seems, and trying to develop a theory specifically to eliminate it is the wrong idea.
On the rewriting of theory, I should have been clearer. I meant that when you rewrite the theory you should think about ontology. Differences in ontology between different rewrites, as well as interpretations of those rewrites, will point to how quantum theory can be generalized.
I agree completely that quantum mechanics is taught horribly. Strangest of all are the introductory quantum/modern physics courses that are meant to give students an intuition for quantum theory yet use the Copenhagen interpretation, the one that says the least about ontology and emphasizes calculation and empiricism.
I still stand behind my central claim. Let me be clearer. The mathematical model of classical physics says something about physical reality. The mathematical model of quantum mechanics says something very different about reality. Even if there are disagreements, we all agree that the ontology of classical physics is very much incorrect. Extrapolating from this I claim that the next theory of physics will have an ontology that will look nothing like that of quantum theory. As I have said previously, the ontology of quantum theory can be a means to an end (the next theory), but not the end itself.
I'm not a physicist, but I've always wondered about Caroline Thompson's work, like:
"Chaotic Ball" model, local realism and the Bell test loopholes
Small correction. Except for the first two lines of the parent post, the rest is a quote from Jaynes' paper linked above. It may be hard to tell because of the limited formatting options on HN.
Science looks at results and provides useful models. Whether those models are representations of the 'real', or whether the entire universe is actually a bunch of magic pixies who just happen to arrange the universe to look as if it follows a particular theory, or whether the whole thing is a simulation embedded in the real universe which is entirely different are questions not actually accessible to science.
They may, but it doesn't have to be that way. It may just be that reality at that scale is not made of trajectories, balls, and simple causality. As evolved monkeys with 5 senses we are used to intuitive concepts like continuity, predictions, trajectories, objects.
Now imagine a world 1,000,000 times smaller. Will it behave, look, and feel the same way? Maybe. Let's shrink it 10,000 times again. Now we are at the size of an atom. Does the landscape look familiar? It may be so unfamiliar that trying to shoehorn our intuitive monkey world-view onto it is hopeless.
No objects, no concept of position, no matter as "substance occupying a space", no limitations regarding being in different places at the same time. It may well be that things are so different that it's best described as an abstract mathematical world ... because no intuitive structures and behaviours exist there. (Heisenberg had some interesting correspondence with Bohr on the matter.)
It would be like trying to search for Mario inside your Nintendo. Mario and his physics live only as abstract equations. Taking apart the computer will not show us what Mario's flesh is made of. The analogy can't be taken too far, of course, because there is no known "chip" besides the mathematical structure of the universe.
There is plenty of research on quantum reality (e.g., from one of my colleagues: http://arxiv.org/abs/0709.1149), but it may turn out that there isn't an easy "the particle does X when it enters a Stern-Gerlach apparatus". The concept of a particle is probably completely inadequate.
We've been modeling ever since, hoping that enough pieces of the puzzle come together for a cogent picture to emerge.
> But very often the conclusion that is drawn from this observation is that our _knowledge_ about the path followed by the electron causes the interference pattern to disappear.
Although we couch the idea in terms of "knowledge", the usual meaning of that word is not what anyone thinks is actually the trigger. What exactly is meant by "knowledge" is not well-agreed upon, but no one thinks it is something particular to humans (or sentient beings, I suppose) that causes the wavefunction to collapse. It has more to do with whether the information exists - if the particle interacts with something in such a way that its position suddenly becomes deducible, then it collapses. Like I said, the specifics of this are a known hole in the Copenhagen theory - but this "knowledge" tangent is a clear strawman.
As a student of that group as well, I can't say that I am all that pleased by that kind of argument.
The bottom line is that collapse is untenable. There is no good place to put it and yet it is needed by the standard theory.
"if the particle interacts with something in such a way that its position suddenly becomes deducible, then it collapses" would be a theory in the range of possibilities. It would be much more convincing to argue against something like that.
Eliezer Yudkowsky has a brilliant treatise on the Many Worlds interpretation here: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ that really should be required reading for anyone who wants to talk intelligently on the subject.
Edit: seriously, don't even bother reading the article. It (like most science journalism) is garbage. Take the time to work through Eliezer's sequence.
I see a lot of links in the article you gave, but I don't understand what we're supposed to discover in Yudkowsky's writings after trying to follow most of them. There's a lot of free text, not much physics. The Standard Model is a lot of smart formulas supported by decades of expensive, elaborate measurements (and vice versa); his texts look more like the writings of a philosophy student who knows a little of the math than like a physicist's material. I'd also really welcome the opinions of professional physicists.
Edit: Wikipedia entry about him seems to fit my impression: http://en.wikipedia.org/wiki/Eliezer_Yudkowsky "Yudkowsky (...) is an American blogger, writer, and advocate for Friendly artificial intelligence (...) Largely self-educated."
Yudkowsky certainly hasn't invented the many worlds interpretation, which was originally formulated by the physicist Hugh Everett in 1957. Even though originally scorned, in the more recent times it has gained popularity among physicists. The series of blog posts by Yudkowsky are (in my opinion, at least) a persuasive argument in its favor against the competing interpretations, and are very much recommended reading for anyone who would like to better understand the issue.
Thanks, that's exactly what I wanted to know.
I'm completely satisfied with the "shut up and calculate" approach. For me, until somebody shows that he or she can calculate (that is, predict) more than what physicists achieve, the physicists are the ones closest to "the truth," not the "interpreters."
The issue here is subtle: the most popular interpretation, Copenhagen, isn't a complete theory because it doesn't tell you algorithmically when collapse occurs. For any particular algorithmic way of handling collapse, there's a corresponding experiment that could (at least in theory) differentiate between Copenhagen and Many Worlds. But Copenhagen is inordinately slippery in that collapse is defined to occur ex post facto, in whatever way is needed to make the experimental results match the theoretical predictions.
It's perhaps not so surprising that this shortcoming was overlooked in the beginning, because Copenhagen was hypothesized before we really had a clear handle on the study of algorithms. But the fact that Copenhagen is still as popular as it is means that Yudkowsky needs to spend a lot of time on philosophy of science, because that's what's holding back most people from seeing the problems with Copenhagen, and why at first glance it looks like philosophy.
Textbooks ought to be rewritten to teach decoherence instead of outdated stuff like wave-particle duality, wavefunction collapse, and the like. That is the history of the development of QM, not QM as it is known today, to my limited knowledge.
If you get decoherence, much of the "mysteriousness" and "spookiness" that's talked about in such magazines just disappears and you find them all, every one of them, shallow.
The major thing missing from the article, for me, is that she didn't mention that the co-author of the most recent paper on the topic is much better known as a security researcher than as somebody who does anything related to quantum physics:
He's the author of the (to me) very useful book "Security Engineering":
If I understand you, we can expect that somebody may be the first to discover something new thanks to the way he's used to thinking about the subject. Still, before that happens, what do we have?
Edit: Only real hard science. The discoverer we expect of course must decide from which side to attack the matter to reach the new discoveries.
The reason I don't know how to answer your question is that I interpret it to mean: what tangible result do we have before we have our first tangible result?
Of course, if that's not what you mean, then please clarify. You may want to read his book, though, as this concept is central to some of it. If you're saying, "what is the effect of such paradigms before they change," then it's best to read his book. One point he makes is that everyone operates under a paradigm, whether conscious of it or not. That is, we must think about our scientific work in some way, and whatever way we think about it will influence what scientific work we do. It is generally the case, though, that many people have some form of agreement on that "some way." When we do, that sets informal bounds on what is "acceptable science."
He does use quantum mechanics as an example, but that's loaded because we're still hashing it out. Another example he uses is phlogiston chemistry (http://en.wikipedia.org/wiki/Phlogiston_theory), which has thoroughly been supplanted. Those who first encountered oxygen were unable to recognize what it was because the paradigm in which they operated didn't contain the concept.
The analogy I draw from it is not direct, but merely: scientists unavoidably think of their work in certain ways that are themselves not tested. This thinking influences their work.
(You've made some changes since I responded, so let me respond to your last question: No, they cannot. Just as choosing what articles goes into the newspaper makes the news inherently subjective, scientists themselves choose what problems to work on, and experiments to carry out. You want them to just "do the science," but what science? The mental framework that helps them decide what science to do is what Kuhn calls a paradigm, and my claim is that the interpretations of quantum mechanics are such a paradigm. That, then, means that the interpretations are important, even if we can't yet directly test them.)
I agree! As in my other comments here, as long as there isn't any scientific result, I won't regard the output of the proponents of such "interpretations" any better than you do.
Many Worlds is much tidier than all the alternatives.
You're not really going to find anything ground-breaking, but it may help your intuitions about QM even if you're a physicist. As a chapter from the Aaronson book (http://www.scottaaronson.com/democritus/lec9.html) I linked to in a cousin comment says in the first two paragraphs:
There are two ways to teach quantum mechanics. The first way -- which for most physicists today is still the only way -- follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then you learn about the "blackbody paradox" and various strange experimental results, and the great crisis these things posed for physics. Next you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you're lucky, after years of study you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
Today, in the quantum information age, the fact that all the physicists had to learn quantum this way seems increasingly humorous. For example, I've had experts in quantum field theory -- people who've spent years calculating path integrals of mind-boggling complexity -- ask me to explain the Bell inequality to them. That's like Andrew Wiles asking me to explain the Pythagorean Theorem.
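The amplitude point can be made concrete in a few lines (a toy sketch of my own, not from the lecture): two paths contribute complex amplitudes, and the detection probability is the squared magnitude of their sum, so contributions can cancel rather than merely add.

```python
import cmath

def detection_probability(amp_a, amp_b):
    """Probability of detection when two paths contribute complex amplitudes.

    Unlike classical probabilities, amplitudes can interfere destructively.
    """
    return abs(amp_a + amp_b) ** 2

# Two equal-magnitude paths, in phase: constructive interference.
in_phase = detection_probability(1 / 2, 1 / 2)       # 1.0

# Same magnitudes, opposite phase: the paths cancel completely.
out_of_phase = detection_probability(1 / 2, -1 / 2)  # 0.0

# A 90-degree relative phase recovers the "classical" sum 0.25 + 0.25.
quarter_turn = detection_probability(1 / 2, (1 / 2) * cmath.exp(1j * cmath.pi / 2))
```

With classical probabilities the two 0.25 contributions could only ever add; with amplitudes, the relative phase decides whether you see 1, 0.5, or 0.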
(And on EY himself, just read the million or so words of the Sequences and you'll see he is actually really smart across multiple domains. ;))
As far as I remember Feynman's lectures, didn't he introduce the amplitudes very early?
Similarly physicists have done amazing work over the decades, and now it's time to draw conclusions from all that evidence. But it turns out that physicists are not domain experts in drawing conclusions from evidence. That kind of skill is a separate domain in its own right.
The Physics arXiv Blog writes about it here:
For the curious, the link to the paper can be found at the end of the article.
I remember being downvoted by a circle of HN folks who weren't comfortable with information transfer using quantum teleportation. Despite efforts to refute it, here's a team that succeeded in doing exactly this => http://phys.org/news/2014-05-team-accurately-teleported-quan...
Edit: Another link I remembered that's strongly in the vein of QM for programmers is (suitably titled) here: http://oyhus.no/QuantumMechanicsForProgrammers.html
> We're able to set the spin (rotational direction) of these particles in a predetermined state, verify this spin and subsequently read out the data.
This is terribly unclear. I don't have access to the article - maybe it's better explained there?
The problem with using quantum entanglement for information transfer is that you can't cause a particular spin at either location - they're just both reading random data that correlates. Nothing about this allows any actual transfer of information. Why are they not explaining how they got around this? That's the interesting bit, as far as I can tell.
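The "correlated randomness carries no message" point can be sketched in a few lines (a toy classical stand-in of my own, not the actual experiment): each side alone sees pure noise, and the correlation only shows up when the two records are compared afterward, which requires a classical channel.

```python
import random

def entangled_pair(rng):
    """Toy stand-in for a maximally correlated measurement outcome:
    both sides read the same random bit."""
    bit = rng.randint(0, 1)
    return bit, bit  # Alice's reading, Bob's reading

rng = random.Random(0)
alice, bob = zip(*(entangled_pair(rng) for _ in range(10_000)))

# Each side alone sees ~50/50 noise; no information arrives at either end.
print(sum(alice) / len(alice))  # close to 0.5

# Only by bringing the two records together does the correlation appear.
print(all(a == b for a, b in zip(alice, bob)))  # True
```

Neither party can choose their bits, so neither can encode a message; that is exactly why correlation alone is not communication.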
Paper in full: http://arxiv.org/abs/1404.4369
To the physicists among the us, please share your opinion on the paper with the rest of us.
It's not easy to digest the paper. The finding is not only going to change finance; the whole data economy will speed up. Now 'quants' can be located anywhere and everyone can enjoy the same advantage of datacenter closeness, evening out the prestigious role of the select few. Well, not really. It's going to take quite some time until the middle class can access this technology, unless someone finds a way to mass-produce it. That would be stellar.
That's it. I hate nobody with a rational mind of whatever kind, but naysayers really irk me.
He didn't like it, as if there were any doubt.
This seems to be the key point - there is absolutely no FTL information transfer going on, which is pretty much what I expected. As I said before, it would've been much bigger news if there was. The press article was clearly written by someone who didn't understand what was going on (or was intentionally deceitful, but that seems unlikely).
Imagine Schrödinger's cat, only with a physicist in the box instead! Also inside the box is a photon emitter programmed to send a photon after 30 seconds. Our dear physicist has been instructed, if he is still alive after 20 seconds (adjusted to give a 50% probability), to move the emitter just a tiny bit to the left.
Now according to the quantum laws, the photon from the original position should create an interference pattern with the photon from the slightly moved position... provided we succeeded in creating a superposition of dead/alive physicist.
Of course, every single experiment would just give one detection, so we would have to do it many times, to really verify that it worked.
The experiment could of course be scaled up, so that we instead proved that we had superpositions of planets being blown up, or something. At some point, when the system in superposition is large enough - or significant enough - I don't see how you could refrain from calling that many worlds.
Anyway, I think my proposal had a key advantage. The interference pattern could in theory be measured. We could get tables and graphs and sigma values.
Basically, you get a mass density on physical space by integrating over the wave function. Then you have correlations of changes in the mass density. Reality becomes a bit like fuzzy tv reception with overlapping channels.
There is no splitting of the universe or any other nonsense.
I still don't like the theory, but at least it is a well-defined theory.
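If I'm reading the "mass density" idea correctly (this formula is my gloss on the usual matter-density reading, not something stated in the parent comment), you weight |ψ|² by each particle's mass and integrate out the configuration coordinates:

```latex
m(x, t) = \sum_{k=1}^{N} m_k \int \delta^3\big(x - q_k\big)\,
          \big|\psi(q_1, \ldots, q_N, t)\big|^2 \, \mathrm{d}q_1 \cdots \mathrm{d}q_N
```

The correlations between changes in m(x, t) are then what play the role of the "overlapping channels" in the fuzzy-TV picture.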
Here's what I would have added: Yudkowsky explains quantum mechanics in terms of the decoherence interpretation and shows how it follows logically from a simple rule - one that nonetheless contradicted widely held philosophical assumptions, which kept its simple truth from being appreciated.
Eliezer is a smart guy with great insights into how to think about questions. His "position of authority" is that he writes in a way that's clear and conveys meaningful novel insights. If you think that only Ph.D.s have insights worth reading you're missing out. There's nothing "new" or "publishable" in the sequence, but Eliezer collates a lot of good ideas and presents them well.
For what it's worth, I personally feel like I've learned a lot from reading him. I was just trying to share it in the hopes that someone else on HN may find him as insightful as I do, and the quantum physics sequence is one of his better works. It was just a friendly pointer. If you're not interested, disregard.
>I would never say instead of.
When you recommend something, you're implicitly advocating that it be read in preference to other works on the matter.
It makes no sense to say "I recommend this, but it's no better than anything else."
The recommender is in no position to know whether your next-best alternative was reading another work on the subject or reloading hacker news or taking a nap.
People frequently bring up single-particle self-interference as an explanation for Hitachi too. Well, in the case of the photon, the self-interference pattern looks the same as the positional interference because the wavelengths are the same. In the case of the electron, and especially the neutron, the self-interference pattern would be much denser than the observed one, because the de Broglie frequency is much higher than the frequency of the positional quantization.
The wave-like quantization of position is the pilot wave. The really important thing here is to understand that the pilot wave isn't a real wave/object. It is just a description of the possible positions of the particle at specific times, just as the trajectory of a bullet isn't a real thing but a description of the possible positions of the bullet at specific times. It is just that in QM the trajectories are wave-like quantized and probabilistically spread - that is where the structure of our Universe, non-smooth at small scales, shows its "ugly" face :)
Again, taking the quantized wave-like trajectory description of a particle for a real thing, the particle itself, is the main misinterpretation that has been going on for years, especially in the Copenhagen interpretation.
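For a sense of scale behind the "much denser" claim, the de Broglie wavelength λ = h/p shrinks as momentum grows, so at comparable speeds a neutron's wavelength is thousands of times shorter than an electron's. A quick non-relativistic check (the 1 km/s speed is an illustrative choice of mine, not from the experiments discussed):

```python
# Planck's constant and particle masses (SI units, CODATA values rounded).
H = 6.626e-34           # J*s
M_ELECTRON = 9.109e-31  # kg
M_NEUTRON = 1.675e-27   # kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Non-relativistic de Broglie wavelength: lambda = h / (m * v)."""
    return H / (mass_kg * speed_m_s)

# At the same speed, the neutron's wavelength is ~1800x shorter than the
# electron's, simply because its momentum p = m*v is that much larger.
v = 1.0e3  # 1 km/s
lam_e = de_broglie_wavelength(M_ELECTRON, v)
lam_n = de_broglie_wavelength(M_NEUTRON, v)
print(lam_n / lam_e)  # equals m_e / m_n, roughly 1/1839
```

The ratio of the two wavelengths at equal speed is just the inverse mass ratio, which is why neutron interferometry deals with such fine fringes.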
Is it possible that pilot waves are ripples in space-time? We should not make strong assertions, especially when a theory is young. You make it sound like nothing more than a regular QM wave function.
Don't get me wrong - I'm not denying de Broglie; I'm just saying that the wave-like nature of a particle isn't necessary for (and thus isn't proven by) the double-slit experiment.
The single-particle double-slit experiments are never really "single" particle - they always show a statistical aggregate of many single particles, and thus they are explainable by positional quantization alone (of small specks, as described above).
In contrast, Bell's theorem can be formulated without even speaking about hidden variable theories: the theorem states that some predictions of QM, well confirmed by several experiments, cannot be explained by any local theory. And BM is nonlocal, just as QM is. In fact, BM inspired Bell to investigate non-locality, finally leading him to discover his famous inequalities. Bell was one of the most prominent proponents of BM and wrote many articles explaining it in great detail.
Also here's a wikipedia article that talks about Bohmian Mechanics and entanglement.
IIRC, there is also an interesting 'alternative' relativity, with non-local effects and a universal frame of reference, formulated by a physicist named Frank Tangherlini (I'll be interviewing him this month). It also has weird properties like anisotropy of the vacuum speed of light!
It might be interesting if Bohmian and Tangherlini mechanics provided a better mathematical rapprochement of quantum mechanics with relativity than Copenhagen/Lorentz/Einstein.
Correct. The same is true of Bell's theorem; it shows that no local hidden variable theory can reproduce the predictions of standard quantum mechanics. It's true that the "local" part often gets left out in pop science treatments of Bell's theorem; but Bell himself was quite clear about it, and about the fact that Bohm's pilot wave theory is nonlocal. The article completely fails to mention this, which IMO is a huge omission.
Comment on Y. Couder and E. Fort: "Single-Particle Diffraction and Interference at a Macroscopic Scale", Phys. Rev. Lett. (2006)
I don't run in experimental-physicist circles, granted, but I've definitely encountered countless cases of a clear, obvious, correct solution being brought up and summarily ignored for what proves to be a poor solution. The probabilistic theories have always made for good Science Fiction, but that should hardly matter.
There is an initial training session where a light tells them whether their answer was correct, with one little caveat: only A gets real feedback. B's light just duplicates A's, so the feedback that B receives is essentially random (I think they get different pictures as well).
Since the task is not very difficult (on purpose), the As learn the task in 80% of the cases. The Bs have a much more difficult task, they are required to try and find order in a random world. They form very complex theories to account for this.
However, that's not quite the experiment yet. The real experiment is that the As and Bs are then put together to discuss their results. What happens then is stunning: instead of rejecting the Bs' theories as unnecessarily complex, the As are usually so impressed with the subtle complexity and detailed brilliance of the Bs' theories that they change their minds and accept them!
When asked who will improve in the next round, all the Bs and most of the As pick the Bs. And they are right, because the As will have accepted at least some of the Bs ideas and thus perform more poorly.
Reference: http://omg.pytalhost.net/dls/ebk_wwidw.pdf (German)
Probably not, but one can hope!
This magical bracelet is powered by the wave field shaping the contours of the superfluid of space time! Carrying you through life on positive pilot waves!
No they do not.
> Different "interpretations" cannot produce different predictions in experiments.
Yes they do.
"And just as measuring the trajectories of particles seems to “collapse” their simultaneous realities, disturbing the pilot wave in the bouncing-droplet experiment destroys the interference pattern."
If quantum behavior is really classical like I think they're claiming, wouldn't that mean quantum computers wouldn't provide any benefit?
That said, I think the answer to your question is that the "test particle" in the pilot-wave model is always reaching its destination at the speed of light. If however you model the pilot wave with Newtonian physics and place a literal test particle in it, well, even if the particle is moving very fast, its meandering route will all but guarantee a much slower (likely asymptotically slower) traversal than the pilot-wave test particle.
But IANAP and welcome corrections.
So quantum computing is rather orthogonal to this.
To be charitable to Copenhagen, we would say that we do see the results of collapse. The main problem with collapse is that it is simply not well specified by the theory.
So no, there is no experiment that can tell the difference. It is possible that pilot wave theory has predictions that the standard formalism is silent on. However, something that is inspired by pilot wave theory can be easily co-opted by standard approaches, e.g., Bell's theorem.
Of course, once one has pilot wave theory in hand, whatever needs to be deduced can be deduced, and then handed over to the standard theory.