When these papers started coming out we debated the notion of 'faster than light' communication. The counter-argument was that you had to move the particles apart in the first place, and that was constrained by the speed of light. Then we pondered the question of "when" the state was resolved. Two thought experiments were proposed at this meetup.
In the first, two highly accurate clocks, separated by enough distance that speed-of-light issues were unambiguously resolved by the clocks, would be triggered by the resolution of the quantum state of an entangled particle. Each clock would capture the exact time when the state was known (disentangled), and comparing the times on the two clocks would determine whether or not it was truly 'instant'.
The second experiment was to set up a 'set' of entangled particles at a distance and resolve only 'some' of them, the set representing 'bits' and the resolved subset representing the 'information.'
The question of course is whether or not you can move information this way. Seems like we are getting closer to answering those questions in a definitive way.
I always thought it would be fun to write a short story about someone doing this and discovering that it was the galactic equivalent of 'shortwave' radio and have the alien equivalent of the FCC come out to the planet to arrest the scientists for transmitting without a license :-)
"Fine Structure" -- bits of a soft-SF story you can read online http://qntm.org/structure http://everything2.com/?node=fine+structure
It's of varying quality, I think people usually link to "Oul's Egg" first because it's one of the best sections.
Information cannot be transferred faster than light using entanglement. This is already proved by the no-communication theorem. Essentially, because the outcome of measurements is random/probabilistic, ambiguity is introduced on the receiver's end. So since we do not allow [local] hidden variables, there is no way for person B to know whether or not they need to apply a further transformation to get to the original state without person A transferring classical bits.
I am not sure what the news in this article is because I found nothing surprising in it.
The universe is a strange place.
Or the channel is already filled completely with spam.
It's clearer with an example. Let's suppose that you and I each have one particle from an entangled pair:
* If you don't do anything with your particle, each measurement I can make has some set of possible results with some probabilities.
* If you make any measurement on your particle, each measurement I can make has exactly the same possible results with the same probabilities.
* If you annihilate your particle :( , each measurement I can make has exactly the same possible results with the same probabilities.
* Whatever you do with your particle, each measurement I can make has exactly the same possible results with the same probabilities.
In that sense my particle is "unaffected". There is no measurement I can make that tells me what you have done with your particle. So you can't use your particle to transmit information to me.
The strange effect is that if we later meet and compare notes, we will see that the results you got with your particle and the results I got with mine are related. Some measurements are always equal, some have the opposite value, and some are related in more complex ways.
Each measurement (yours and mine) is perfectly normal on its own. The strange effect appears only when the results are compared.
You are saying that, if there's an entangled pair, it cannot be used to transmit information that couldn't otherwise be transmitted via "conventional" means. However, if you performed the measurements and then compared notes, you would find that somehow the measurements look "synced"?
How does it work this way? That is, how can those measurements be related, yet you can't use that relation to transmit information?
In a simple example, both people measure, say, the spin along the x axis of particles that travel in the z direction. Each has a 50% probability of obtaining "up" and a 50% probability of obtaining "down". But in every case they get opposite results, so each one has a random number generator that is synchronized with the other's. Yet each one alone has only a random number generator.
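That picture can be sketched in a few lines of Python (my own toy model, not anything from the thread): each side alone sees fair coin flips, but the two streams always disagree.

```python
import random

def entangled_pair():
    # One shared random bit per pair; each side sees the opposite value.
    # (This is really a hidden-variable simulation, which works fine
    # as long as both sides measure along the same fixed axis.)
    a = random.choice([0, 1])  # 0 = "down", 1 = "up"
    return a, 1 - a

alice, bob = zip(*(entangled_pair() for _ in range(10_000)))

# Each stream alone looks like a fair random number generator...
print(abs(sum(alice) / len(alice) - 0.5) < 0.05)  # True
# ...but side by side the results are always opposite.
print(all(a != b for a, b in zip(alice, bob)))    # True
```

Note that this is a purely classical simulation, which is exactly the point: a single fixed measurement axis can be reproduced by a shared random seed; it takes multiple measurement directions to see anything beyond that.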
Let's suppose that someone tries to use this to create an intergalactic first-person shooter for two players :). Using the "synchronized random number generator" it is possible to make the bots move in exactly the same way on both players' computers, but it is only a "synchronized random number generator", so it's impossible for one player to know what the other player has done (unless they wait until the information arrives in a conventional way, at a speed <= c, but that would be a very big laaaag).
But this is only part of the story, because this process can be simulated with a central "random number generator" that sends the signals to both players.
The strange property is that if both players "magically" decide to begin measuring the spin in the y direction, they will have the same 50% chances and always get opposite results. Another possibility that avoids magic is that one player continues measuring spin in the x direction while the other begins measuring in the y direction. Now each one has a "random number generator" that is independent of the other's, so neither player has any clue about the other's results. It's not useful for an IFPS, but it is useful as a physics experiment.
(And one of the problems with quantum mechanics is that you can't measure the spin in both directions; you must choose one. It's a little more complicated, but I don't want to get into the technical details.)
But this is only part of the story, because this process can be simulated with a central "random number generator" that generates two random numbers and then sends the signals to both players, one for the x direction and one for the y direction. This is a simplification of the "hidden variable theory", which says that the particles "know" in advance what to do if they are measured in the x direction or the y direction, despite the fact that you can't measure both.
In the experiments, the idea is that the players decide which direction to measure while the particles are in flight, so the particles don't have enough time to communicate (at <= light speed) and agree on what the result will be. They have to be in agreement from the start (hidden variable theory), OR they have to communicate faster than light, OR something even stranger is going on.
Really, to make the measurements it's possible to choose not only the x or y axis but any direction in that plane. So for every direction each player has a "random number generator", but they are not independent. If one measures in the x direction and the other at 45º, the probability that the results agree is some number between 50% and 100%. The 50% is for orthogonal directions, which have independent results; the 100% is for the same direction, which always gives the same/opposite result; and for the other angles there is a formula to calculate the value. To simulate this you need a lot of hidden variables, or at least a few plus a formula to calculate the result for each direction, or some other variation of this idea. There are many possible proposals, some simpler and some more complicated, so the idea is to put all of them in the "hidden variable theory" bag and forget the details for a moment.
The problem is that Bell proved that for any "hidden (local) variable theory" a certain inequality holds. This inequality ignores the details of the specific theory, so it's not possible to invent a more complex "hidden (local) variable theory" that breaks it. When the same quantities are calculated using quantum mechanics, the result for some angles is a value allowed by the Bell inequality, but for other angles the value is forbidden by it. So quantum mechanics and any "hidden (local) variable theory" make different predictions for some measurements.
And it is possible to do this experiment and measure a value for every angle. The result is that for the problematic angles the measurements agree with the quantum mechanics predictions and violate the inequality predicted by any "hidden (local) variable theory". So we must eliminate all "hidden (local) variable theories". (For the non-problematic angles the results agree with the quantum mechanics predictions again, and satisfy the inequality, as expected, because they are not problematic.)
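As a hedged numeric sketch of that gap (my own example; the comment doesn't give specific numbers): quantum mechanics predicts a correlation of -cos(a - b) for spin measurements along directions a and b on a singlet pair, and the standard CHSH combination of four angle settings then reaches 2√2, above the bound of 2 that every local hidden-variable theory must satisfy.

```python
import math

def E(a, b):
    # Quantum-mechanical prediction for the correlation of spin
    # measurements along directions a and b on a singlet pair.
    return -math.cos(a - b)

# The standard CHSH angle settings (radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any local hidden-variable theory keeps |S| <= 2.
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(round(S, 3))  # 2.828, i.e. 2*sqrt(2) > 2
```

The point is that S is just a combination of four separately measurable correlations, so an experiment can check which side of the bound nature lands on, without caring about the details of any particular hidden-variable theory.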
More details: http://en.wikipedia.org/wiki/Bells_theorem
QM is so different from how we are used to dealing with things that the only really useful way to understand it is the math it's based on. But decoherence is the heart of a lot of the spooky voodoo that really trips people up, so it's a much better place to start than the 'cool' stuff, versus trying to extrapolate from the odd things that go on.
PS: Also, avoid thinking at the large and small scales at the same time. Sensors really are just more fields and particles, like everything else. Also, decoherence is not an all-or-nothing event; it's more a sliding scale between wave and particle.
Just consider: if I take a black marble and a white marble in hand, shake them up, put them into two boxes so I don't know which marble is in which box, and put the two boxes on two sides of the world, then when I open one box I instantly learn the state of the marble in the other. The information is "transmitted" to me "faster than the speed of light", but only because the actual switch event happened in the past. Now, as far as I can tell, QM entanglement is pretty much the same thing as these boxes. The only weird thing is the standard interpretation (not the standard theory, since it is not necessary for any meaningful measurement): that in QM, the equivalent of our marble switch happens when the final measurement happens.
There's nothing in any of this research that involves any concrete evidence whatsoever that faster-than-light information transmittal happens (feel free to counter with references), but as far as I've read, all the "transmittal" is "transmittal" in the sense above.
To see the effects of QM, build a device to flip a quantum-mechanical coin. If it comes up heads, put the marbles into the two boxes. If it comes up tails, put the marbles into the two boxes in a slightly (but unmeasurably) different way. I.e., whichever procedure you chose, P(white in right) = 0.5.
However, when you actually mix in the coin flip, the white marble always ends up in the right box.
This cannot happen in ordinary probability theory, since:
P(white in right) = P(white in right|heads)P(heads) + P(white in right|tails)P(tails) = 0.25 + 0.25 = 0.5, not 1
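To make the arithmetic explicit (a trivial check of my own, assuming the 0.5 branch probabilities above):

```python
# Law of total probability: with P(white in right) = 0.5 in each branch,
# no classical mixing of the two procedures can push the total above 0.5.
p_heads = p_tails = 0.5
p_white_given_heads = 0.5
p_white_given_tails = 0.5

p_white = (p_white_given_heads * p_heads
           + p_white_given_tails * p_tails)
print(p_white)  # 0.5 -- never the certainty that interference produces
```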
However, while there are many amazing effects there, I don't see how your particular example of QM weirdness, or the others, relates to the information-transmittal-at-a-distance part. I.e., the interference part isn't directly related to entanglement at a distance, except that it provides a box that you can't look into.
To be clear, what really happened under MWI is that the wavefunction "branched" into two around the time you created the two particles, and the two branches separated from each other in configuration space just as the two marbles in your analogy separated in physical space. When you did the measurement, you merely realized which branch you are in.
Where the analogy fails is that the probabilities in QM are inconsistent with there actually having been some hidden choice of marbles made beforehand.
Perhaps another approach will be clearer:
Imagine magical filter goggles, of two types: red and blue. People wearing red goggles can only see other people wearing red goggles, people wearing blue goggles can only see other people wearing blue goggles.
You put a red pair and a blue pair of goggles into each box, then have four people pick a pair of goggles each.
The result is that each person only sees one other person, near the second box, wearing the same colour goggles.
This is more or less how many-worlds works, except that instead of both red and blue goggles in the box you have a single pair of goggles that is both red and blue, but not at the same time (decoherence); and thus instead of two people wearing different kinds of goggles, you have a single person wearing goggles that are both blue and red.
Based on what I've learned of these experiments, it seems to me that the least mind-bending interpretation is that the entanglement event results in two particles that have some complementary property k (i.e. one particle has k and the other has ~k) and which one of the two, k or ~k, is had by one of the particles is unknowable without collapsing the quantum state. However, both particles have the property from their inception, and so no faster-than-light or non-local interpretation should be needed. When the particles are separated to great distance and one is observed to collapse to k, the other must be observed to collapse to ~k because it was always a ~k particle.
Clearly, there's some aspect of the experiments that I don't understand that invalidates this simple interpretation, but I do not know what it is.
The summary: If you assume simple locality (your k and ~k), you get different results than what quantum entanglement predicts.
If you want to get rid of the spooky action at a distance, then you need much, much more complicated hidden variables.
Incidentally, I'll mention that your argument is essentially what Einstein et al were arguing for in their famous EPR paper. The fact that it's even possible to rule out local hidden variable theories is highly surprising, and is what makes Bell's discovery (30 years after EPR framed the problem) so monumental.
"Chaotic Ball" model, local realism and the Bell test loopholes
...I've always wondered if much of the Einstein-was-wrong business is really a category error. I personally liked:
Clearing up Mysteries - The Original Goal
" ...we must keep in mind that Einstein's thinking is always on the ontological level; the purpose of the EPR argument was to show that the QM state vector cannot be a representation of the "real physical situation" of a system. Bohr had never claimed that it was, although his strange way of expressing himself often led others to think that he was claiming this.
From his reply to EPR, we find that Bohr's position was like this:
"You may decide, of your own free will, which experiment to do. If you do experiment E1 you will get result R1. If you do E2 you will get R2. Since it is fundamentally impossible to do both on the same system, and the present theory correctly predicts the results of either, how can you say that the theory is incomplete? What more can one ask of a theory?"
While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form: "What is really happening when - - - ?", forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built."
As far as the philosophy goes, I'd say most people's frustration from Bohr's positivist position is actually due to the real flaw with Copenhagen, which is the ambiguity bundled up into the word "measurement". If you eliminate this ambiguity (which I think is being done incrementally through the study of decoherence), you'll find that a positivist position is actually a lot more attractive. At least, that was how my feelings evolved on the issue.
They're probably still cursing our Hubble telescope, not to mention computers with their decreasing semiconductor scales.
This comment probably just triggered their simulation-awareness detector. :-)
Actually, I think there's a silly but fascinating SciFi short story waiting to be written about assembling a group of people to try to privilege-escalate out of the simulation without triggering the simulation-awareness detectors. "I need you to think about this, but not all at the same time, or in the same place. Oh, and try to think about it only in vague terms."
This is what I like about Hacker News; when I ask a deeply technical question in a field removed from my own, there are plenty of people who can point me in an interesting direction.
Bell's Theorem was the aspect of observational measurement that I was missing, and it's absolutely fascinating. Even moreso in that it was envisioned mathematically before we had the technology to prod at its claims. Thank you everyone for showing it to me.
Not that interpreting all of this to mean that physics is non-local "spooky-action-at-a-distance" is the only viable route, mind.
Consider this: why, exactly, do you believe that when you measure the particle you somehow force it to enter one particular state and therefore the entangled one that's sitting X miles away suddenly enters the opposite one? Are we, humans, sitting outside of quantum mechanics and looking down upon it - and then what we observe is the one true way the world is?
Why would you not, instead, when you measure the particle, entangle the measuring device, and yourself, with the state of the particle? You are, after all, only another part of quantum mechanics the same as anything else.
I'd say it's a less fancy (and more specific) way of saying "hidden variable." I read the parent as saying, "I don't understand why there can't be a hidden variable" which is quite a different thing than saying "this can work without a hidden variable: [thing involving hidden variable]".
For instance, I could phrase things like this:
- Let particles A and B get into an entangled state
- Measure the state of particle A, see that it has property k.
- Predict that B has property ~k even though it's not being measured yet.
- Measure B. If B has property ~k, then we've proven that B had an internal state before we performed the measurement.
From all the "popular" description of "quantum entanglement", I don't see anything in it that's significantly different from things like, say, the preservation of momentum in Newtonian physics.
Of course, I must be missing something, but I don't know what that thing is yet.
This part is the flaw in the logic. That doesn't prove that B has an internal state. All it proves is that, in some sense, B's state is related to A (assuming you repeat the experiment until you reach statistical significance).
Entanglement is nothing more than saying one particle is connected, in some sense, to another. Like a see-saw: if one side goes up, the other must go down. It's only "confusing" in the sense that the particles are physically separated, and the link between them is not mediated by a force as we understand it. Instead, they're linked by their very nature. It's almost as if you have just one particle, that's somehow been split into two and spatially separated. That's the conceptually difficult part.
If an object A is moving (with a known momentum) and hits a stationary object B, we can measure the momentum of either A or B and instantly know the momentum of the other object without having to measure it. The two objects are somehow "connected"; their state (momentum) is "entangled".
Why is quantum entanglement different?
This is incorrect, both particles have a superposition of k and ~k (to use your terminology). It is not the case that when the wavefunction collapses, we just suddenly find out the "internal state" of the particle. This "internal state" simply does not exist until the wavefunction collapse has occurred.
The "no faster than light" doctrine refers specifically to the transfer of information. And no information, in the strictest sense, has been transferred. Think of it like this: try to imagine how you could communicate with someone using quantum entanglement. If you can communicate something, then information has been transferred. But you can't. All you know is that if your particle is k, then theirs is ~k.
Now, you could make up a rule book that says "if my particle is k, then I'll do x, and you'll do y", but that rule book will have to have been shared between both parties beforehand, at less than the speed of light.
Edit: should just mention I'm not an expert here, and articles like this are always interesting. Finding loopholes in our conventional understanding. But it's important to know just /what/ our conventional understanding is first to realise why things like this are important.
What would the implications be if particles did have an internal state?
You see, the problem is not whether you can transfer information in human readable form (though if you could that would certainly be a huge problem with relativity!), but whether any effect that propagates faster than the speed of light exists.
You'll have a hard time explaining that in the frame of wavefunction collapse, I think.
Keep in mind we're working with quantized values. Spin is always completely up or down, no matter what angle you measure it from. Measuring it at that angle locks it in, but tilting another detector later in the chain means you go back to a non-linear probability calculated by quantum mechanics (a cosine of the angle, in this case) of whether it is up or down at the new tilt, which then locks it in at that new tilt. There isn't any pre-determined value the electrons can be set to beforehand that matches the observed numbers that deflect up or down in the detectors at all measurement angles.
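The tilt rule being gestured at can be sketched as follows (my own toy model; for spin-1/2 the standard probability of repeating the previous result at a detector tilted by θ is cos²(θ/2), which is presumably the "cosine of the angle" meant here):

```python
import math
import random

def remeasure_at_tilt(theta):
    # Spin was just locked to "up" along one axis; a second detector is
    # tilted by theta. For spin-1/2, P(up at new tilt) = cos^2(theta/2).
    return random.random() < math.cos(theta / 2) ** 2

theta = math.pi / 3  # 60-degree tilt: cos^2(30 deg) = 0.75
trials = [remeasure_at_tilt(theta) for _ in range(100_000)]
frac_up = sum(trials) / len(trials)
print(abs(frac_up - 0.75) < 0.01)  # True, matches the QM prediction
```

A fixed pre-assigned up/down value per electron can match this fraction at one tilt, but not simultaneously at every choice of angle, which is the point of the comment.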
I'm not a physicist and I don't really understand it fully, but I believe this is what you're talking about.
Recently on a call-in podcast I listen to, a caller claimed that the following article shows that a) realism has been disproven and b) that this proves we are all in a computer simulation:
>Now physicists from Austria claim to have performed an experiment that rules out a broad class of hidden-variables theories that focus on realism -- giving the uneasy consequence that reality does not exist when we are not observing it
Sure. Anyway, I'm certain this guy is equivocating, but I don't even know where to read up on the relevant topics. And I read articles like these and they keep using the word realism without defining it.
- Realism, Locality, or Separability
Realism - Einstein's idea of 'elements of reality' - the idea that things actually are material and unchanging: their properties are constant unless acted upon (i.e. a hydrogen atom at time1 and point1 has the same properties at time2 and point2). You could imagine a universe that had, as a property of the universe, atoms changing mass every other day - just for the hell of it.
Locality - The idea that a system can only be affected by things within its lightcone - no effect can travel faster than the speed of light. And two objects must be 'local' before they can interact.
Separability - any system separated by non-trivial space/time can be considered independent of each other. The idea that 1/2mv^2 does not include every other part of the universe in trying to figure out what m or v are. Not quite the same as locality - as it would still require local interactions, but that net of interactions could be so thick and pervasive as to prevent any meaningful isolation of any system.
As is stated above, these concepts fall out of the EPR experiment.
You pointed me in the right direction though and I'm convinced that the term is being abused by the caller:
>Realism in the sense used by physicists does not equate to realism in metaphysics. The latter is the claim that the world is in some sense mind-independent
If papers are being written about how "local realism" should be banned from scientific discussion, I'm fairly certain it isn't appropriate for laymen to bandy about the term and then make wild ass philosophical assertions based on it.
I'm still not entirely sure why so many people consider collapse to be simpler. Is it too daunting to think of the world as an amplitude distribution that propagates in a way that we're not intuitively used to? Too hard to think of ourselves as part of quantum mechanics, because that thing I see there on the measuring device must be the reality, damn it, and what the hell do you mean I've just entangled myself and the device along with the system?
Or maybe just tradition and accepting the "scripture" coming from the established authority. How could we best test that?
Other interpretations of quantum mechanics start with the assumption that the world is fundamentally like our naive experience leads us to think of it as. Therefore when we observe something, we couldn't really be thrown into a superposition of states, each of which observed a different thing and which cannot meaningfully interact due to quantum mechanical principles. If we can't be thrown into such a superposition, then there must be some sort of underlying reality, or quantum collapse, or other weirdness that is not explained by QM.
All of the weirdness of interpretations other than many worlds can be understood as the conflict between how they would like to understand the world (there is a reality that happened), and how quantum mechanics described things.
All of the weirdness of the many worlds interpretation can be understood as, "We can't perceive the process of quantum superposition, so it seems really, really weird to us."
In Many Worlds you'd find that for each measurement the wavefunction split into two branches, one corresponding to 1 and one to 0, in each location. From the sites of both measurements the decoherence spreads outwards as particle interactions happen, and when they finally touch the two 1 segments and the two 0 segments merge, and there is one area split along 1/0 in the wave function instead of two.
That's all horribly simplified, but should get the idea across.
Maybe my confusion is that I'm not sure at which point the universe splits. Suppose I entangle two particles, then separate them by a large distance. My universe hasn't split yet, because before I make the measurement I haven't become entangled with the state of the particles. But when I do, surely the spreading entanglement of myself with the possibilities is centered at both entangled particles (i.e. it is nonlocal)?
The answer to that is that the universe never really splits. It remains one big very complicated universe, with very very very messy superpositions of state in it. But different components of these superpositions lose "coherence", which gives the appearance to us of wave collapse, even though there actually is no wave collapse.
Lookup "decoherence" for more info on this.
It isn't that the universe has split, it's that one section of the universe's wavefunction has split, or more exactly become a bimodal distribution. At both locations there are bunches of the wavefunction corresponding to both possible observations - but both locations are entangled since the two particles that touched this off were. That means that when you observe the two experiments you don't "split" into four, but only into two since they are already entangled and you only get entangled with this complex once. The information of "What's entangled with what" spread at strictly sub-light speeds, and the information of "Did it pass through the polarizing filter or not" also traveled at sub-light speeds.
That's why I find the description of it as 'local' somewhat weird, but I accept that I'm probably using the term incorrectly.
Two particles are entangled and separated by a vast distance.
At some point, I conduct a measurement on the first particle, causing me to split into two different versions of myself that have measured two different states. Now, the effect of that measurement spreads outwards from the site of the measurement, so I may affect other things based on the measurement, etc.
My problem is that if a measurement is done on the other particle, long before any message from me could possibly have got there, then that measurement and the effects it has on its local environment and the effects they have are all also dependent on my measurement.
I imagine this as a wave of universe splitting spreading out from the site of my measurement and the entangled particle despite the separation.
Now, perhaps this is a nontechnical use of the word nonlocal, but I'd describe that as a nonlocal phenomenon.
Imagine that your friend Joe writes the same word on two pieces of paper, puts them in envelopes and hands one envelope to you and mails the other envelope to Paris. At some point in time, you open up your envelope and read the word "popcorn". At some other point in time, Pierre open up his envelope and reads the same word.
The word that Pierre reads is not "dependent on your measurement". Nothing nonlocal occurred.
The Bohm Interpretation is the experimentally indistinguishable dual of MWI that proposes hidden variables rather than the absence of wave collapse.
Despite the hidden variables in the Bohm Interpretation, there is nothing wrong with it, modulo Occam's razor.
As I understand it, hidden variables are fine if you're happy to throw away locality, which is exactly what Bohm interpretation does and exactly what we're talking about here.
Since your analogy is purportedly showing me how MWI is local, showing me that something that is significantly different (i.e. is a hidden variable explanation) is also local doesn't help a lot.
I think that's your trouble there. Those are totally independent of your measurement; the waveform splits at the measurement of the other particle in exactly the same way, regardless of what happened to you locally.
Two exceptions to this are (1) that the Many Worlds Interpretation is experimentally indistinguishable from the Bohm Interpretation, unless you want to commit suicide many times with a quantum revolver; and (2) the Copenhagen Interpretation doesn't define what a "measurement" is, so scientifically determining whether it's correct or not is impossible.
None of this implies that any of this is "like religion".
Remember, waveform collapse or "some new physics which makes QM break at a certain scale" was actually the assumption of the people who started quantum physics, and the default assumption of popular writing including the article we're discussing. It wasn't until much later that Everett proposed that it might be an unnecessary hypothesis like Maxwell's aether.
If your assertion is that we might never know which is true, the Many Worlds Interpretation or the Bohm Interpretation, then you might be right. But, if it comes down to these two, most scientists are going to go with the Many Worlds Interpretation by Occam's razor.
There are many other theories that we reject by Occam's razor that we can't disprove, and yet we don't typically consider these rejections to be a matter of "religion".
I might, except MWI clearly has more True Believers.
> If your assertion is that we might never know which is true
My assertion is that since all interpretations lead to the same exact predictions for every imaginable experiment, it doesn't make sense to say that in any meaningful way one interpretation is true and the others aren't. You might as well be arguing whether the electromagnetic field is weaved by tiny angels out of their hair or not.
The Copenhagen interpretation is the closest to "no interpretation" in that it's the most direct way of translating the math into actual predictions.
Quoting the guy who said you shouldn't multiply entities without necessity, to justify the introduction of infinitely many parallel worlds, is an interesting rhetorical maneuver.
This is not the case. Other than MWI and Bohm, the major interpretations (not including Copenhagen) make different predictions that are, in theory, distinguishable. It's just that it is completely infeasible to perform the experiments at this time. Maybe in 100 years or in 1,000 years, we'll have the technology to perform these experiments.
When the time comes that we can perform these experiments, one of the possible outcomes is that we narrow it down to MWI/Bohm. Once that happens, however, we will have no way to scientifically determine which of the two is the correct interpretation, except to the degree that we trust our intuitions about Ockham's razor. But that is certainly not going to let us know the answer for sure.
> The Copenhagen interpretation is the closest to "no interpretation" in that it's the most direct way of translating the math into actual predictions.
If we ever want to actually do experiments to determine under precisely which situation the probability wave collapses, then we have to do better than the Copenhagen Interpretation, since it doesn't define what a "measurement" is. Without such a definition, there's no way to test whether it is correct or not.
> Quoting the guy who said you shouldn't multiply entities without necessity, to justify the introduction of infinitely many parallel worlds, is an interesting rhetorical maneuver.
It's not a "rhetorical maneuver". I'll just refer you to the Stanford Encyclopedia of Philosophy for more info:
It seems that the majority of the opponents of the MWI reject it because, for them, introducing a very large number of worlds that we do not see is an extreme violation of Ockham's principle: "Entities are not to be multiplied beyond necessity". However, in judging physical theories one could reasonably argue that one should not multiply physical laws beyond necessity either (such a version of Ockham's Razor has been applied in the past), and in this respect the MWI is the most economical theory. Indeed, it has all the laws of the standard quantum theory, but without the collapse postulate, the most problematic of physical laws. The MWI is also more economic than Bohmian mechanics which has in addition the ontology of the particle trajectories and the laws which give their evolution. Tipler 1986 (p. 208) has presented an effective analogy with the criticism of Copernican theory on the grounds of Ockham's razor.
So, what exactly are those experiments that allegedly could distinguish between interpretations?
"Other than Android and Windows Mobile, all major cell phone operating systems (not including iOS)"
The long and short of it is, for purely philosophical reasons you don't like the notion of the state vector collapse. You freely admit that there is no way to experimentally distinguish between your favorite interpretation and the Copenhagen interpretation. You just declare that it's not even a contender, using arguments which have nothing to do even in principle with the outcome of any experiments.
Saying that something is not precisely defined sounds to me totally like grasping at straws. Nothing's ever precisely defined in science, you could criticize any theory including Newton's mechanics by saying that it doesn't define precisely what a measurement is. Which never stopped anyone from measuring things and comparing the values they measured with what the theory predicted.
You quote someone who made an analogy with Copernicus. The Copernican theory simplified the calculations right away, whereas with Quantum Mechanics, the calculations stay exactly the same no matter what story you feel like telling yourself so that you can take the outcome of these calculations and compare them with the real world.
Let me make this clear that I'm not against MWI. I care about MWI exactly as much as about the Copenhagen interpretation (which is not very much.) I am however opposed to pretending that one of the two exactly equivalent ways of saying something is "more true" than another.
Staying within Quantum Mechanics, there are two ways of writing the equations of motion: the Heisenberg picture and the Schroedinger picture. In the former the state vector is constant but the operators are functions of time; in the latter the operators are constant and the state vector evolves with time. The two formulations are equivalent; sometimes it is convenient to use one or the other for a specific calculation, and often you use a mix of both (the so-called interaction picture). Nobody argues that, say, the Heisenberg picture is "really true" as opposed to the Schroedinger picture. If someone did, that would be inane, even if they invoked Copernicus and Occam (even though the analogy with Copernicus would maybe be better there, since the calculations actually differ depending on which picture you choose).
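The equivalence is easy to check numerically. A minimal sketch with a hypothetical random 2-level Hamiltonian (my example, hbar = 1): the two pictures give the same expectation value by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H = (M + M.conj().T) / 2                      # a random Hermitian Hamiltonian
A = np.array([[0, 1], [1, 0]])                # observable (sigma_x)
psi0 = np.array([1, 0], dtype=complex)
t = 1.0

evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T   # U = exp(-iHt)

# Schroedinger picture: the state evolves, the operator stays put
psi_t = U @ psi0
exp_s = (psi_t.conj() @ A @ psi_t).real

# Heisenberg picture: the operator evolves, the state stays put
A_t = U.conj().T @ A @ U
exp_h = (psi0.conj() @ A_t @ psi0).real

assert np.isclose(exp_s, exp_h)   # identical predictions in both pictures
```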
That's not the long and the short of it. I didn't say anything of the sort.
You ramble on, apparently oblivious to the fact that different interpretations of QM can and do make different predictions about what causes wave-function collapse. E.g., GRW makes different predictions from Penrose's interpretation. Why you are oblivious to this fact, I cannot fathom, as I've mentioned it several times now.
So, as you should be able to see, this is in no way comparable to different forms for equations that make identical predictions.
As to the measurement problem
A. Neither, because the Copenhagen Interpretation doesn't define "measurement", and so we have no idea whether or not measurement by cats counts as "measurement".
You don't like cats, substitute in nanoscale molecular robots instead.
Looks like you are trying to confuse the issue: to dispute the claim that QM interpretations are not experimentally testable, you bring into the discussion a bunch of things which are not interpretations but actual scientific theories, and call them "interpretations." You might as well argue that the ancient Greek religion was testable (you could climb Mount Olympus and see whether Zeus was there or not), therefore religions are testable, therefore the existence of God is a scientific fact.
Regardless of the wonderful qualities of the GRW theory and the Penrose interpretation, the fact stands that there is no experimental way, even in principle, to distinguish between MWI and Copenhagen. If you teach cats or mice or West Highland Terriers to perform quantum mechanical experiments, they will still not be able to differentiate experimentally between MWI and Copenhagen. And you will not be able to determine whether measurements performed by cats cause wave function collapse, because you will have to observe the cat to ask it what it observed.
I have no desire to get into a debate on terminology. I've just been using the terminology that was taught to me in an entire semester-long class I took at MIT on QM and its various "interpretations". GRW was called an "interpretation", as was Penrose's. The term "interpretation" is also the term used on the Wikipedia page for "Penrose's Interpretation".
> the fact stands that there is no experimental way, even in principle, to distinguish between MWI and Copenhagen.
That is not a fact; your assertion is false:
I searched for that paper but couldn't find it, nor any description of the experiment it proposes. I wrote an email to professor Deutsch asking him to send me a pdf copy. I will get back to you if/when he responds.
I still kind of suspect that the paper will make some assumptions that will effectively mean "if <magic> then we could test the MWI in the following way: ...".
In any case, I'm sure you can come up with a definition of "measurement" that might make the Copenhagen Interpretation experimentally indistinguishable from MWI, but why? MWI is and always will be a simpler theory, and thus preferred by Occam's razor.
The problem with the Copenhagen Interpretation is that it is NOT a scientific theory. It is not a scientific theory because it is not falsifiable. It also builds non-fundamental things like "measurement" right into the fundamental laws of physics, which is absurd. Copenhagen is not falsifiable, because if I were to attempt to falsify it by demonstrating that a "measurement" did not cause the wave function to collapse, you could always assert that I had used the wrong definition of "measurement".
GRW, on the other hand, can be seen as a sub-interpretation of the Copenhagen Interpretation because it rigorously defines the term "measurement". I.e., entanglement of the particles in question with a "large enough" collection of additional particles. "Large enough" here needs some experimental tuning, but some day we may be able to perform these experiments and attempt to falsify GRW. Because we can falsify GRW, it IS a scientific theory.
Most other collapse interpretations that I have heard of can likewise be seen as sub-interpretations of Copenhagen, in that they define "measurement".
One of those sub-interpretations is the Wigner Interpretation, or the "consciousness causes collapse" interpretation. Counter to your previous assertion, we could in theory experimentally determine whether this is true via trained rats: (1) Train a rat to perform measurements, (2) kill the rat before it has a chance to tell you the results, (3) check to see whether wave function has collapsed. We can do this because there are experiments that will tell you if two particles are entangled or not.
Animal consciousness not good enough for "measurement"; it needs to be human? Okay, Nazis could in theory perform this experiment, as could future evil alien overlords.
Now let's go back to MWI: I think that many people have a misconception about MWI, and perhaps this is due to its name. (If we had stuck to the name "Everett Interpretation" perhaps that would have been better.) MWI doesn't really imply multiple worlds. It implies one very complicated superimposed world. It is also the simplest theory, as it does not add the complication of wave function collapse. Furthermore, it is completely consistent with every bit of data that has ever been collected.
Another misconception is that MWI asserts that the "other worlds" that fall out of it are "real". This is not the case. MWI is agnostic on this issue. For instance, Stephen Hawking is in favor of MWI, but he doesn't like the name, because he thinks that asserting that the "other worlds" are "real", rather than just mathematical artifacts of the theory, is not something that we can scientifically know.
Executive summary: MWI is the simplest theory, and is consistent with all data. By Occam's razor, we are required to give this theory preference until we have evidence that contradicts it.
The counter argument to the above is that the ontological cost of all these many worlds (or maybe even the complicated superpositions of state) is too great, and that this somehow violates Occam's razor.
Well, first of all it doesn't, since Occam's razor these days is almost always taken to prefer the SIMPLEST THEORY, regardless of additional philosophical worries like, "It's just creepy to think that there might be so many other worlds."
Furthermore, this objection is based on a misinterpretation of MWI. MWI is completely agnostic about the ontological status of these "other worlds". It's just a mathematical formulation for making scientific predictions. There are many cases in the history of science where "creepy" things fall out of the math, if we were to grant them the status of being ontologically "real", and yet we don't reject the theories because of this. E.g., virtual particles and advanced waves. Sometimes scientists at some point decide that mathematical artifacts of theories are "real". E.g., virtual particles. And at other times, they remain just mathematical artifacts. E.g. (maybe), advanced waves.
Are the other worlds in MWI "real"? You tell me! Science cannot answer that question. This does not imply that MWI isn't the best theory.
Experiment 1: Measure the current time and call it t1. Note that you are conscious at the time you are observing the value t1. Wait. Check your watch again. It is showing a different time t2 now. You are still conscious and your consciousness is in a different state than at t1. Therefore you've detected experimentally a superposition of distinct states of human consciousness, since there exists a formulation of Quantum Mechanics in which time is a regular operator like any other observable and you've observed two different values of it.
Experiment 2: Consider a computer which is so well isolated that interference can be observed between its computational states. The computer is programmed to perform an algorithm which takes one bit of input, and produces one bit of output. The algorithm is very computationally expensive and takes a long time T to complete. We communicate with the computer via two observables, I (for input) and O (for output).
Prepare the initial state as a superposition of both input values 1/sqrt(2) * (|I=0> + |I=1>). After the time T, the computer will be in the state 1/sqrt(2) (|I=0,O=f(0)> + |I=1,O=f(1)>), where f(n) is the output the algorithm produces for the input n. So by measuring I and O at this point we will either learn the value f(0) or f(1), but not both. But say we are really interested in f(0) XOR f(1). Classically, it's impossible to calculate it without computing both f(0) and f(1), so it has to take the time 2T. But with our computer, which is in a quantum superposition of states, and with the help of some clever algebra, we can construct another observable R. When we measure R, one of two things happens with equal probability: either we get the correct value of f(0) XOR f(1), or we lose any hope of learning it from our system. We know which one happened, i.e., with probability 0.5 we will have the correct value for f(0) XOR f(1) and we will know for sure it is correct.
Since we obtained f(0) XOR f(1) in half the time, clearly there existed two parallel worlds in which two versions of the computer calculated f(0) and f(1).
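For the curious: this is essentially Deutsch's original 1985 algorithm (the later Deutsch-Jozsa refinement succeeds every time). A minimal numpy sketch of the bookkeeping, assuming the "clever algebra" amounts to a Hadamard on each qubit followed by a computational-basis measurement (my assumption; Deutsch phrases it as a single observable R):

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
HH = np.kron(H, H)

def deutsch_probs(f0, f1):
    """Measurement probabilities after applying H to both qubits of the
    post-computation state (|0,f(0)> + |1,f(1)>)/sqrt(2).
    Basis |x,y> is indexed as 2*x + y (input qubit first)."""
    state = np.zeros(4)
    state[0 + f0] = 1 / np.sqrt(2)   # |0, f(0)>
    state[2 + f1] = 1 / np.sqrt(2)   # |1, f(1)>
    return (HH @ state) ** 2

for f0, f1 in product((0, 1), repeat=2):
    p = deutsch_probs(f0, f1)
    xor = f0 ^ f1
    # With probability 1/2 the output qubit reads y=1, and then the input
    # qubit equals f(0) XOR f(1) with certainty; y=0 means "no luck".
    assert np.isclose(p[2 * xor + 1], 0.5)
    assert np.isclose(p[0], 0.5)
```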
(It is noted that some members of the audience objected that Experiment 2 is conceptually no different from the two-slit interference experiment. The author allows that this may be so in a sense, since indeed, the two-slit experiment alone should be enough to make it obvious that Everett's interpretation is right; however, Experiment 2 makes it even more obvious.)
Experiment 3 (simplified version):
Consider a system consisting of a spin 1/2 particle and a quantum computer running a simulation of human consciousness. Prepare the spin in the state |→> = 1/sqrt(2) (|↑>+|↓>), i.e., measuring the spin along the x axis will always show it's pointing to the right, which means that measuring the spin along the z axis may give up or down with equal probabilities. Have the conscious being in the computer measure the spin along the z axis and communicate to the outside world the fact that he/she observed one of the values 'up' or 'down' (without saying which one). Then undo all the transitions the combined system underwent, i.e., revert it to the original state (which is in principle possible for a system consisting of a quantum computer and a microscopic system). Then measure the spin of the particle along the x axis. If it shows 'right' every time (we need to repeat the whole procedure many times), then the Everett interpretation must be true, since otherwise the fact that a conscious being observed 'up' or 'down' would have caused the particle to collapse to a state in which 'left' and 'right' are equally likely.
(The original formulation of Experiment 3 was more complicated, with three spins not one and some more clever algebra, the purpose of which if I understand correctly is to prove that it's possible to communicate to the outside world the fact that a measurement along z axis was taken, without losing the ability to revert the system to the original state.)
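For what it's worth, the simplified Experiment 3 can be simulated with density matrices. A toy numpy sketch, with a single "memory" qubit standing in for the simulated observer and a CNOT standing in for the observer's measure-and-record step (my simplifications, not Deutsch's actual setup):

```python
import numpy as np

up, down = np.array([1, 0]), np.array([0, 1])
sx = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])   # spin controls memory

right = (up + down) / np.sqrt(2)                # |right> spin state
psi0 = np.kron(right, up)                       # spin (x) memory(|0>)
rho0 = np.outer(psi0, psi0)
record = CNOT                                   # observer records z-value
Sx = np.kron(sx, np.eye(2))                     # spin-along-x observable

# Unitary (Everett) story: record, undo, then measure spin along x
rho = record @ rho0 @ record.T
rho = record.T @ rho @ record                   # CNOT is its own inverse
x_exp_unitary = np.trace(rho @ Sx).real         # always 'right'

# Collapse story: the observation projects the spin onto z first
P_up = np.kron(np.outer(up, up), np.eye(2))
P_dn = np.kron(np.outer(down, down), np.eye(2))
rho = record @ rho0 @ record.T
rho = P_up @ rho @ P_up + P_dn @ rho @ P_dn     # collapse on z
rho = record.T @ rho @ record                   # attempt to undo
x_exp_collapse = np.trace(rho @ Sx).real        # 'left'/'right' 50/50

assert np.isclose(x_exp_unitary, 1.0)
assert np.isclose(x_exp_collapse, 0.0)
```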
So, there. I'll let you form your own judgement.
Counter to your previous assertion, we could in theory experimentally determine whether this is true via trained rats: (1) Train a rat to perform measurements, (2) kill the rat before it has a chance to tell you the results, (3) check to see whether wave function has collapsed.
It doesn't work that way. Even if the rat doesn't cause the wave function to collapse, it interacts with the system and causes a transition from the original state to a superposition of states, one for each value the rat might have gotten from the measurement, with coefficients such that the probabilities for each value come out right. Each of these states individually looks exactly as if the rat had collapsed the wave function. And killing the rat afterwards does not undo it. So you will observe that the wave function has collapsed. Same if you use a mechanical detector in place of a rat.
Yes, I would like copy of the paper. Please send it to doug at alum dot mit dot edu. Thanks!
> And killing the rat afterwards does not undo it.
Yes, sorry; it's been a long long time since I've thought about this sort of stuff in any detail. This is what I should have written:
Train a rat to perform measurements on an observable with two possible outcomes and have it press lever A for outcome A and lever B for outcome B. Put the rat into a sealed box to perform the measurement. A dial on the outside of box will read either A or B once the rat has performed the experiment and recorded the result.
You can now come up with a complex observable on the whole system, i.e., the original observable being measured by the rat, plus the rat, and the box, that will give two different results on different occasions if the rat did not collapse the wave, but will always give you the same result if the rat did collapse the wave.
The problem with this complex observable is that for it to work, you must consider every molecule in the rat, every molecule of air it interacts with, etc., etc., etc. Miss a single molecule and the results are randomized.
Are we ever going to be up to this task? Not any time soon! But it could be child's play for the aforementioned evil alien overlords.
One complication, I can imagine, is that for this to work, you'd need to have a perfect model of the rat's biology and cognition in order to come up with the right observable. In the face of not yet being sure how collapse works, this might be very difficult. But then again, I'm sure that evil alien overlords are up to the task.
David Albert talks about doing these sorts of experiments on p. 88 of Quantum Mechanics and Experience, and it was this that I was thinking of. Only Albert's examples don't have a trained rat, but rather other, simpler measuring equipment for which we are trying to determine whether or not it causes collapse.
As for Deutsch's three experiments, it looks like Experiment 3 has two huge advantages over my trained rat system: (1) Since it's all contained inside a quantum computer, it seems a lot more feasible without the help of aliens. (2) If I understand correctly, the reversal stage means that you end up with a very simple observable, rather than the unfeasibly complex observable that you would need for my trained rat system.
As for Experiment 1 and Experiment 2, I don't understand #1 at the moment. And #2 seems to me so obvious as to go without saying. But it doesn't seem to prove anything that we didn't already know. Of course an uncollapsed wave can compute more than a collapsed wave!
I'll quote a bit:
But if quantum mechanics isn't physics in the usual sense -- if it's not about matter, or energy, or waves, or particles -- then what is it about? From my perspective, it's about information and probabilities and observables, and how they relate to each other.
Ray Laflamme: That's very much a computer-science point of view.
Scott: Yes, it is.
Do any experiments past, current or future purport to test predictions made from MWI?
While MWI may make sense in a universal reading of QM, I'm not sure we can simply extrapolate this onto the macro level, despite its pragmatic successes over the last century. IMHO QM itself still feels like a temporary hack (shim) when compared alongside ER.
Until we have a TOE, it may be too presumptuous to be making positive assertive claims on the correlation between the MWI model and reality.
Of course! Nobody questions that. The two are equivalent. That means, neither is more true than the other. You just happen to be in the minority which feels that thinking about multiple worlds makes their head ache less than thinking about the state vector reduction.
What if the history of physics had transpired differently—what if Hugh Everett and John Wheeler had stood in the place of Bohr and Heisenberg, and vice versa?
My guess is, we wouldn't have had to wait until 1957 for someone to say: "Fuck that noise, let's just forget about the multiple worlds and pretend the wave function collapses when I make a measurement." They would be like, "All right buddy, I'm sure you're totally right that there are really multiple worlds and all that, I'll just act like it's merely a wave function collapse for a moment, just to get on with my work of actually doing something as a physicist." My guess is that would happen within the year.
PS. IvoDankolov: I find it difficult or impossible to think intuitively about quantum phenomena.
Ever heard of chaotic systems? Any dynamic system that shows extreme sensitivity to initial conditions is chaotic. For instance smoke curling from a cigarette, planetary orbits over time, weather, water dripping from a tap - all of these show extreme sensitivity to initial conditions.
Here is the fun thing. In quantum mechanics everything evolves linearly. Therefore extreme sensitivity to initial conditions is entirely impossible. We only think that we observe that. Yet the world is full of cases where we can demonstrate such sensitivity!
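The contrast is easy to see numerically. A sketch (my toy example): a tiny gap between two nearby quantum states is frozen by unitary (linear) evolution, while the same-sized gap explodes under a classical chaotic map (the logistic map at r = 4):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
H = (M + M.conj().T) / 2                                 # random Hamiltonian
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * 0.1)) @ V.conj().T  # one unitary step

psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
psi /= np.linalg.norm(psi)
phi = psi + 1e-3 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
phi /= np.linalg.norm(phi)

d0 = np.linalg.norm(psi - phi)
for _ in range(1000):                  # linear, unitary: the gap is frozen
    psi, phi = U @ psi, U @ phi
d_final = np.linalg.norm(psi - phi)
assert np.isclose(d_final, d0)

x, y, gap = 0.4, 0.4 + 1e-8, 0.0      # classical logistic map, r = 4
for _ in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    gap = max(gap, abs(x - y))
assert gap > 0.01                      # a 1e-8 difference blew up
```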
See http://www.iqc.ca/publications/tutorials/chaos.pdf for some of the attempts to reconcile observed classical facts with what we think are true quantum truths.
I read the first section of that paper ("Why quantum chaos?"), and its proposition makes no sense to me. None other than Poincare PROVED (proved!) ages ago that the motion and position of just three little points attracted to each other according to Newton's formulas are extremely sensitive to infinitesimal changes in their initial motions and positions (i.e., the system is chaotic in the classical sense). And as you point out, the world is full of cases that demonstrate such hyper-sensitivity. Yet, according to this paper, such a system is impossible in nature. I don't get it.
Instinctively, I have to believe there must be a more fundamental underlying explanation for this and other apparent contradictions... it's just that at the moment no one knows what this explanation might be.
I'm not sure if that is true. You can have dense sets created by linear operators if you're in an infinite-dimensional setting (which you _are_ in QM). That's what hypercyclic operators are.
In any case, although the wave function evolves linearly, the _square_ of the wavefunction obviously does not (by the product rule). Since all observations are going to be based on the square of the wavefunction, or the product of it with its gradient, all observable quantities will typically evolve nonlinearly.
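A quick numerical illustration of that cross term (my example wave packets, unnormalized):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
psi1 = np.exp(-(x - 1) ** 2 + 3j * x)   # two overlapping wave packets
psi2 = np.exp(-(x + 1) ** 2 - 3j * x)

cross = np.abs(psi1 + psi2) ** 2 - (np.abs(psi1) ** 2 + np.abs(psi2) ** 2)
# cross = 2*Re(conj(psi1)*psi2): the interference term. If |Psi|^2 were
# additive over the parts, this would vanish identically.
assert np.max(np.abs(cross)) > 0.1
```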
What do you make of this problem with distant entangled particles?
The double slit experiment and interference in general?
The Heisenberg uncertainty principle? (Or as I'd like to call it, Heisenberg's horribly mislabeled-in-order-to-confuse-students principle)
A shot in the dark - many of the problems with coming to terms with quantum mechanics arise from trying to impose on it that it should somehow behave like classical mechanics, or that somehow we humans stand above it and look down upon it (heaven forbid that we're part of a quantum system).
In most of the other areas of knowledge we can make progress because they do resemble reality in ways that we are familiar with. For example, rotation in classical mechanics or special relativity can be somewhat confusing to a beginner, but if we think a little deeper than we are used to, then we can see that the results make sense and match what we experience. And this experience is consistent at different scales. Now you move to QM, and the first thing that hits you is that reality "breaks" after some point in the size scale; beyond that, everything is different, with uncertainties, probabilities and a bunch of other odd properties. Learning QM is an exercise in mind-stretching, even for the most capable of us. For me the problem is not that it is different, but why it is different. Why do we have such a gap between the large scale and the small scale? That is what baffles me.
(Man, I love arxiv.org. It's so nice.)
So let's clear up some misconceptions: First, this is not a problem specific to Copenhagen or any other interpretation of QM. Second, it is not just the EPR paradox. Third, it is not the first time someone has proposed a system utilizing quantum entanglement for superluminal communication; that title goes to Karl Popper:
Popper's experiment was a very clever way to get around the no-communication theorem. It was eventually performed, but no FTL communication could be observed [some suggest we need to keep looking]. I presume that this new paper is another clever way to avoid the NCT, but I don't have time to read it at the moment.
It's kind of weird and has some asides about consciousness and what not but other than that, it gives you a really good mindset for thinking about the subject.
When you entangle two systems, they have to actually be in the same place, or at least within slower-than-light communication distance. Thus, they each store "secret" information in some deterministic way which is impossible to measure directly, and which we can probe only using repeated experiments and probability distributions. The idea is that while quantum systems are deterministic just like mechanical ones, they hide their determinism in black boxes whose mechanisms are impossible to discover. The "uncollapsed waveform" has in fact been collapsed ahead of time; you just don't know what the answer is until you measure.
Back to our entangled particles, when you separate them the determined decision they made when they were together is revealed. They do not need to communicate, as they already have their story straight.
It seems this entire area of research is moot - if you let the prisoners talk to each other just after arrest, of course their stories will be consistent later, even if one is in Alcatraz and the other in Siberia. You just have to assume that particles so small we can only "see" them in the way they interact with other particles have some internal structure/state that we also can't see, and thus, like the prisoners, can remember the story.
(of course, declaring an entire area of research irrelevant is usually a sign that I've missed the point wildly - if anyone would like to correct my mistake I'd be interested to hear!)
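Happy to oblige: the "prisoners got their story straight" picture is exactly a local hidden variable model, and Bell's theorem (in its CHSH form) shows it cannot reproduce the quantum correlations. A numpy sketch comparing the best possible pre-agreed story against the singlet state (standard CHSH angles):

```python
import numpy as np
from itertools import product

# "Prisoners": pre-agreed (local hidden variable) answers a0,a1,b0,b1 = +/-1
best_classical = max(abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
                     for a0, a1, b0, b1 in product((-1, 1), repeat=4))

def spin_obs(theta):
    # spin measurement along angle theta in the x-z plane (outcomes +/-1)
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)      # singlet state
def E(a, b):                                    # correlation <A(a) (x) B(b)>
    return psi @ np.kron(spin_obs(a), spin_obs(b)) @ psi

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

assert best_classical == 2                      # CHSH bound for any "story"
assert np.isclose(abs(S), 2 * np.sqrt(2))       # quantum value ~2.83 > 2
```

The pre-arranged-answers model tops out at 2, while the entangled pair reaches 2*sqrt(2); that gap is what the experiments measure, and it rules out the prisoners.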
"Repeated experiments have verified that this works even when the measurements are performed more quickly than light could travel between the sites of measurement"
What interests me at this stage is not 'why' entanglement works, but rather how it can be used. The article indicates that repeated experiments have verified it appears to be a real process, where actions on one particle can influence another particle (seemingly) no matter how far away, so why can't we use it for communication? To use a simple example, if I encode data as movement in the 'sender' particle, and the movement can be monitored in the 'receiver' particle, then you can have a system for sending messages.
Seems to me the need to control the internal state of the particles to communicate may be unnecessary. Perhaps to explore further, can anyone here list the types of measurable phenomena that have been seen to be mirrored in entangled particles (the ones you know of, anyway)? Would particle spin be one example? Thanks!
Unless... they're using it as a placeholder for as-yet undiscovered ways that entangled particles can interact, and it's known that it's a placeholder.
Though, I should add, what makes this paper interesting is that they try and find loophole around Bell's theorem. This would be a big deal if true.
We need an update to Time Bandits.
In this case they are doing experiments where they expect the hidden variables to do a very specific job (transmit information) and are trying to test whether they do what they expect them to do or not.
That is science.
I wonder if they'll directly mention quantum entanglement?
From a recent paper :
"Bohmian mechanics reproduces all statistical predictions of quantum mechanics, which ensures that entanglement cannot be used for superluminal signaling. However, individual Bohmian particles can experience superluminal inﬂuences."
This paper referenced in the Ars Technica article shows that finite superluminal velocities (c < v < inf) can be exploited to achieve superluminal signalling.
Very interesting result. I assume BM is still consistent with this. The paper does mention BM:
"Bohmian mechanics and the collapse theory of Ghirardi, Rimini, and Weber [...] reproduce all tested quantum predictions, [however] they violate the principle of continuity mentioned above (otherwise they would not be compatible with no-signalling as our results imply)."
The principle of continuity described in the paper:
"In both cases, we expect the chain of events to satisfy a principle of continuity: that is, the idea that the physical carriers of causal influences propagate continuously through space."
"Clearly, one may ask whether infinite speed is a necessary ingredient to account for the correlations observed in Nature or whether a finite speed v, recovering a principle of continuity, is sufficient."
Actions can happen instantaneously under BM ("infinite speed"), so BM is still consistent with QM and doesn't allow FTL communication.
It's like saying "Gravity violates conservation of energy!!!" No it doesn't; quit playing fast and loose with well-understood principles.
You may be familiar with the Schrodinger equation (HΨ = iℏ ∂Ψ/∂t) or the more accurate time-dependent Dirac equation. In each of these equations is a function called the wavefunction (denoted with Ψ). This function represents the "quantum state" of your system -- in other words, all the information that exists within a system. Ψ evolves deterministically in time. You can apply an operation to this function, and when you do, you get the original function back multiplied by some value. There are different operators, each corresponding to an "observable" (the thing you measure in the lab). For example, you can measure momentum, position, energy, spin etc... and each of these observables has a different corresponding mathematical operation that you perform on Ψ to get XΨ, where X is the mean value of the observable.
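One caveat: XΨ = (number)·Ψ strictly holds only when Ψ is an eigenstate of the operator; for a general state the mean value comes out as Ψ†XΨ, with a nonzero spread. A tiny numpy illustration (my example, spin along x as the observable):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])          # observable: spin along x
psi = np.array([1, 0], dtype=complex)   # state |0> (spin up along z)

mean = (psi.conj() @ X @ psi).real                  # <X> = psi† X psi
var = (psi.conj() @ X @ X @ psi).real - mean ** 2   # spread of outcomes

# Only an eigenstate gives back "psi times a number":
plus = np.array([1, 1]) / np.sqrt(2)     # eigenstate of X, eigenvalue +1
assert np.allclose(X @ plus, 1.0 * plus) # measured value +1 every time
assert np.isclose(mean, 0.0)             # in |0>, outcomes +/-1 average to 0
assert np.isclose(var, 1.0)              # ... with unit variance
```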
Now in QM textbooks, you'll frequently see Ψ written as Ψ(r, t), where r is a position vector and t is time. r denotes the position of whatever particle constitutes your system. So what if you have a two particle system like hydrogen (proton and electron) or positronium (positron and electron)? Well, a QM textbook will write the state of your quantum system as Ψ1 * Ψ2 and completely gloss over the fact that this approximation does not apply in all situations. It is physically inaccurate to say that Ψ of two particles is two individual Ψ's multiplied together. For some systems, it is a good approximation and makes things easy to calculate, but in reality, there is really only one wavefunction Ψ and it is a function of all the particles in the universe.
So now you can see where entanglement comes in. If Ψ is a wavefunction of all extant particles, then surely there will be correlations between every measurement of Ψ that you take! Now most of the time, you can't find correlations -- too many particles are affecting too many other particles (decoherence). But if you prepare two of them together and keep outside particles from interfering with them, then you can observe the correlations between two particles no matter what the distance!
So now for the whole "information transfer" business. I said earlier that applying an operator to Ψ gives you the value of an observable -- what you measure. The weird thing though is that what you measure isn't always exactly this value. Instead, the mean value of many measurements will be this value. You can also compute the standard deviation of these measurements using Ψ, but that's about it. Nobody knows where the random "noise" in measurements comes from. So far, it seems as though our universe just has some randomness inherent to it (and we're quickly ruling out all remaining superdeterministic theories; Gerard 't Hooft seems to be a hold-out: http://physics.stackexchange.com/questions/34217/why-do-peop...)
So you can't control or predict the individual measurements to as much precision as you'd like. Sucks, huh?
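As a toy numerical illustration of that point (a numpy sketch; the observable and state are made up for the example): Ψ pins down the mean and standard deviation of repeated measurements, but no single outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observable: eigenvalues ±1 with eigenstates |0>, |1> (e.g. spin along z).
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])  # example quantum state
eigenvalues = np.array([+1, -1])
probs = np.abs(psi) ** 2                      # probabilities from Ψ

mean = np.sum(eigenvalues * probs)            # predicted mean value
std = np.sqrt(np.sum(eigenvalues**2 * probs) - mean**2)

# Individual outcomes are random; only their statistics are predictable.
samples = rng.choice(eigenvalues, size=100_000, p=probs)
print(mean, std)                       # 0.6, 0.8 -- predicted by Ψ
print(samples.mean(), samples.std())   # ≈ 0.6, ≈ 0.8 -- but no single shot is predictable
```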
Anyway, you can plot and analyze this data, and what you'll notice for two entangled particles separated by thousands of miles or more is that there are statistical correlations between the two sets of data (again, data you can't control -- if you can't control it, you can't send information with it).
Obviously, you need both sets of data to notice that there are correlations.
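A toy simulation of why that blocks signaling (a numpy sketch mimicking same-axis measurements on a singlet pair, where outcomes are perfectly anticorrelated): each party's own record is a fair coin flip, so neither side can encode anything, yet the records are correlated when laid side by side.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Same-axis measurements on a singlet pair: each side sees ±1 at random,
# but the pair is perfectly anticorrelated.
alice = rng.choice([+1, -1], size=n)
bob = -alice

# Each data set alone looks like pure noise (mean ≈ 0) ...
print(alice.mean(), bob.mean())
# ... the correlation only shows up when you compare both sets.
print(np.mean(alice * bob))   # -1.0: perfect anticorrelation
```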
> For example, you can measure momentum, position, energy, spin etc... and each of these observables has a different corresponding mathematical operation that you perform on Ψ to get XΨ, where X is the mean value of the observable...
>The weird thing though is that what you measure isn't always exactly this value. Instead, the mean value of many measurements will be this value. You can also compute the standard deviation of these measurements using Ψ, but that's about it.
A good QM textbook will actually say something much more precise. It says that (a) the set of possible outcomes of the measurement is equal to the spectrum of the observable being measured and (b) the chance of getting a particular outcome is given by the squared inner product of the wavefunction with the appropriate eigenvector. (The whole business of calculating means and standard deviations is confusing unless you understand that; unfortunately, this is allowed to happen often in intro QM courses.) This means that QM doesn't just predict some statistical properties of the outcome distribution, it completely specifies the distribution.
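Concretely (a numpy sketch for a single spin; the choice of observable and state is mine): the outcome set is the spectrum of the operator, and each outcome's probability is the squared inner product of Ψ with the corresponding eigenvector.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])   # observable: spin along x (Pauli X)
psi = np.array([1, 0], dtype=complex)  # state |0> (spin up along z)

eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)

# (a) possible outcomes = spectrum of the observable
print(eigenvalues)                     # [-1.  1.]

# (b) P(outcome i) = |<eigenvector_i | Ψ>|^2 -- the full distribution
probs = np.abs(eigenvectors.conj().T @ psi) ** 2
print(probs)                           # [0.5 0.5]
```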
Also, I figure you know this, but I want to mention that when you say
> you can plot and analyze this data, and what you'll notice for two entangled particles separated by thousands of miles or more is that there are statistical correlations between the two sets of data
it's important to emphasize that these are non-local correlations (in the Bell sense). You can generate mere local correlations using everyday classical systems.
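The standard way to quantify that distinction is the CHSH inequality: any local classical model keeps |S| ≤ 2, while QM on a singlet state predicts |S| = 2√2. A sketch using the textbook singlet correlation E(a, b) = -cos(a - b) and the usual optimal angle settings:

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b."""
    return -np.cos(a - b)

# Standard optimal CHSH measurement settings
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2√2 ≈ 2.828 -- exceeds the local-realist bound of 2
```

No local hidden-variable model (in the Bell sense) can reproduce a value above 2, which is what separates these correlations from everyday classical ones.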
I suspect that this might give some interesting results.
Yes, it does. And in fact that is the case: the beam compresses. A perfectly rigid beam is impossible.
So, in the best case, this method of communication is equivalent to communicating with electromagnetic waves. And they have a speed limit: the speed of light :)
So this wouldn't be faster than light communication.
Come to think of it, if you have a sufficiently long beam, you won't be able to give it any angular momentum at all, regardless of how much force you apply. It will be fixed in space. The perfect starting point if your hobby is moving stars with a combination of chains and pulleys.
A beam made of matter is not rigid (as we think of the word) on that scale, as others in the thread have pointed out.
So yes, relativity dictates that you can't create a rigid beam 25 molecules long, much less 25 million miles long. There can be no such thing as a perfectly rigid beam of any length.
Except you can't. The existence of sound traveling through the beam proves beyond a doubt that the beam is not rigid.
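Rough numbers make the point (a back-of-envelope sketch; the material constants are textbook values for steel): a push propagates along the beam at the material's speed of sound, v = sqrt(E/ρ), which is tens of thousands of times slower than light.

```python
import math

# Longitudinal sound speed in a thin steel rod: v = sqrt(E / rho)
E_steel = 200e9      # Young's modulus, Pa (textbook value for steel)
rho_steel = 7850.0   # density, kg/m^3
c = 299_792_458.0    # speed of light, m/s

v = math.sqrt(E_steel / rho_steel)   # ≈ 5,000 m/s
length = 25e6 * 1609.344             # 25 million miles in meters

print(v)                    # ~5,000 m/s: how fast the "push" travels
print(c / v)                # light is tens of thousands of times faster
print(length / v / 86400)   # days for the push to reach the far end
```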
The future causing the past.