Hacker News
Quantum entanglement shows that reality can't be local (arstechnica.com)
158 points by suprgeek 1846 days ago | 166 comments



This is for me the most interesting part of the whole teleportation / entanglement research.

When these papers started coming out we debated the notion of 'faster than light' communication. The counter argument was that you had to move the particles apart and that was constrained by the speed of light. Then the question of "when" the state was resolved was pondered. There were two thought experiments proposed at this meetup.

In the first, two highly accurate clocks are separated by enough distance that speed-of-light issues are unambiguously resolvable, and each clock is triggered by the resolution of the quantum state of an entangled particle. Capture the exact time at which the state became known (disentangled) at each clock, then compare the two times to determine whether or not the resolution was truly 'instant'.

The second experiment was to set up a 'set' of entangled particles at a distance and resolve only 'some' of them. The set representing 'bits' and the resolved set representing the 'information.'

The question of course is whether or not you can move information this way. Seems like we are getting closer to answering those questions in a definitive way.

I always thought it would be fun to write a short story about someone doing this and discovering that it was the galactic equivalent of 'shortwave' radio and have the alien equivalent of the FCC come out to the planet to arrest the scientists for transmitting without a license :-)


> short story about someone doing this and discovering that it was the galactic equivalent of 'shortwave' radio and have the alien equivalent of the FCC come out to the planet to arrest the scientists for transmitting without a license

"Fine Structure" -- bits of a soft-SF story you can read online http://qntm.org/structure http://everything2.com/?node=fine+structure

It's of varying quality, I think people usually link to "Oul's Egg" first because it's one of the best sections.


Thank you. I had never seen "Fine Structure" and enjoyed it.

    It's of varying quality
It's worth a lot more than what I paid for it. ;)


Hehe, you got me curious to read Oul's Egg.. quite interesting, but kind of bothered by the ending...


The experiment you all came up with sounds great, a lot like Bell's experiment but without the detail of setting up correlations and their implications.

Information cannot be transferred faster than light using entanglement. This is already proved by the no-communication theorem. Essentially, because the outcome of measurements is random/probabilistic, ambiguity is introduced on the receiver's end. And since we do not allow [local] hidden variables, there is no way for person B to know whether they need to apply a further transformation to recover the original state without person A transferring classical bits.

I am not sure what the news in this article is because I found nothing surprising in it.


This doesn't work because there is no moment at which the state is 'known' or 'detangled'. There is no way to tell, by observing one particle, when the other particle was observed. If there was you could send a signal with it, which would be no good.

The universe is a strange place.


Plus the other particle could later be "unobserved" through quantum erasure:

http://en.wikipedia.org/wiki/Quantum_eraser_experiment


Not in the way that you're imagining. Otherwise, you could easily communicate by repeatedly observing and then unobserving until finally you see the bit that you don't want to send, and then you let it be.


and have the alien equivalent of the FCC come out to the planet to arrest the scientists for transmitting without a license

Or the channel is already filled completely with spam.


Or filled with get rich quick scams. That reminds me of a Charles Stross novel, I think Accelerando.


That should be a quest in some space exploration game. You research hyperspace radio, then you decode alien languages, then you get a mysterious quest involving intergalactic wars, lost artifacts and the last scions of ruling families. You duly follow the leads to learn more about these subjects, find that nothing seems to check out, only to discover at the end that it was just a Nigerian scam.


Maybe they can tell me how to make my jagon bigger.


Greg Egan - I can't say more because spoilers.


Diaspora?


That sounds like a very cool science fiction idea :-)


The most interesting question to me is - what happens when one of the entangled particles get annihilated. If the other one does too, instantly, then we can use that to communicate faster than light.


No. If you annihilate one of the particles, the other particle continues "unaffected". Whatever you do to one of the particles, the other continues "unaffected". Quantum mechanics is a little more complex than that, so it is difficult to explain what "unaffected" really means.

It's clearer with an example. Let's suppose that you and I each have one particle, the two halves of an entangled particle pair:

* If you don't do anything with your particle, for each measurement that I can do there are some possible results with some probabilities.

* If you do any measurement to your particle, for each measurement that I can do I have exactly the same possible results with the same probabilities.

* If you annihilate your particle :( , for each measurement that I can do I have exactly the same possible results with the same probabilities.

* Whatever you do with your particle, for each measurement that I can do I have exactly the same possible results with the same probabilities.

In that sense my particle is "unaffected". I can't do any measurement to know what you have done with your particle. So you can't use your particle to transmit information to me.

The strange effect is that if we later meet and compare notes, we will see that the results you get with your particle and the results I get with my particle are related. Some measurements always are equal, some have the opposite value and some are related in more complex ways.

Each measurement (yours and mine) alone are perfectly normal. The strange effect appears only when the result are compared.
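A toy sketch of that point, assuming a singlet pair measured in the z basis (the probability values are the standard textbook ones; the dictionaries here are just illustrative joint-outcome tables): whichever measurement the other party makes, my marginal distribution is the same 50/50, so nothing they do is detectable on my end.

```python
# Toy check of the "unaffected" point above: Bob's marginal outcome
# distribution never depends on what Alice chooses to do.

# Alice measures z as well: outcomes are perfectly anticorrelated.
joint_alice_z = {("up", "down"): 0.5, ("down", "up"): 0.5}
# Alice measures x instead: her result says nothing about Bob's z outcome.
joint_alice_x = {("left", "up"): 0.25, ("left", "down"): 0.25,
                 ("right", "up"): 0.25, ("right", "down"): 0.25}

def bob_marginal(joint):
    """Sum out Alice's outcome, leaving Bob's local distribution."""
    m = {}
    for (_, bob), p in joint.items():
        m[bob] = m.get(bob, 0.0) + p
    return m

print(bob_marginal(joint_alice_z))  # 50/50 either way: Bob's side
print(bob_marginal(joint_alice_x))  # alone looks identical
```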


Hmm, this somehow defies common sense (but of course, it's quantum mechanics, so that's a given!).

You are saying that, if there's an entangled pair, it cannot be used to transmit information that couldn't otherwise be transmitted via "conventional" means. However, if you performed the measurements and then compared notes, you would find that somehow the measurements look "synced"?

How does it work this way? That is, how could it be the case that those measurements are related, but you can't use that relation to transmit information?


> How does it work this way? That is, how could it be the case that those measurements can be related, but you can't use that relation to transmit information?

In a simple example, both people measure, say, the spin along the x axis of particles that travel in the z direction. Each has a 50% probability of obtaining "up" and a 50% probability of obtaining "down". But in every case they get opposite results, so each one has a random number generator that is synchronized with the other's. Each one alone, however, has only a random number generator.

Let's suppose that someone tries to use that to create an intergalactic first-person shooter for two players :). Using the "synchronized random number generators" it is possible to make the bots move in exactly the same way on both players' computers, but they are still only synchronized random number generators, so it's impossible for one player to know what the other player has done (unless they wait until the information arrives in a conventional way, at speed <= c, but that would be a very big laaaag).
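The "synchronized random number generator" picture can be sketched in a few lines of Python. This is a toy classical model, not real QM: each pair carries one shared random bit, which is exactly the kind of hidden-variable model that Bell's theorem rules out once other measurement axes are allowed.

```python
import random

# Toy model (NOT real QM): each "entangled pair" carries one shared
# random bit; Alice reads it directly, Bob reads its flip.
def measure_pair(rng):
    shared = rng.random() < 0.5           # 50/50 outcome per pair
    alice = "up" if shared else "down"
    bob = "down" if shared else "up"      # always the opposite
    return alice, bob

rng = random.Random(0)
results = [measure_pair(rng) for _ in range(10000)]

# Each side alone sees only a fair coin...
alice_up = sum(a == "up" for a, _ in results) / len(results)
# ...but when notes are compared, the streams are perfectly anticorrelated.
opposite = all(a != b for a, b in results)
print(alice_up, opposite)
```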

But this is only part of the story, because this process can be simulated with a central "random number generator" that sends the signals to both players.

The strange property is that if both players "magically" decide to begin measuring the spin in the y direction, they will have the same 50% chances and still always get opposite results. Another possibility that avoids magic is that one player keeps measuring spin in the x direction while the other begins measuring in the y direction. Now each one has a "random number generator" that is independent of the other's, so each player has no clue about the other player's results. That's not useful for an intergalactic FPS, but it is useful as a physics experiment. (And one of the problems with quantum mechanics is that you can't measure the spin in both directions; you must choose one. It's a little more complicated, but I don't want to go into the technical details.)

But this is only part of the story, because this process can be simulated with a central "random number generator" that generates two random numbers and then sends them to both players, one for the x direction and one for the y direction. This is a simplification of a "hidden variable theory", which says that the particles "know" in advance what to do if they are measured in the x direction or the y direction, despite the fact that you can't measure both.

In the experiments the idea is that the measurers/players decide which direction to use while the particles are flying, so the particles don't have enough time to communicate (at <= light speed) and agree on what the result will be. They have to be in agreement from the start (hidden variable theory), OR they have to communicate faster than light, OR something even stranger is going on.

Really, for the measurements it's possible to choose not only the x or y axis but any direction in that plane. So for every direction each player has a "random number generator", but they are not independent. If one measures in the x direction and the other at 45º, the probability that the results agree is some number between 50% and 100%. The 50% is for orthogonal directions, which give independent results; the 100% is for the same direction, which always gives the same/opposite result; and for the other angles there is a formula to calculate the value.

To simulate this you need a lot of hidden variables, or at least a few plus a formula to calculate the result for each direction, or some other variation of this idea. There are many possible proposals, some simpler and some more complicated, so the idea is to put all of them in the "hidden variable theory" bag and forget the details for a moment. The problem is that Bell proved that for any "hidden (local) variable theory" a certain inequality holds. This inequality ignores the details of the specific theory, so it's not possible to invent a more complex "hidden (local) variable theory" that breaks it. When the same quantities are calculated using quantum mechanics, the result for some angles is allowed by the Bell inequality, but for other angles it is forbidden. So quantum mechanics and every "hidden (local) variable theory" make different predictions for some measurements.

And it is possible to do this experiment and calculate a value for every angle. The result is that for the problematic angles the measurements agree with the quantum mechanics predictions and violate the inequality predicted by every "hidden (local) variable theory". So we must eliminate all "hidden (local) variable theories". (For the non-problematic angles the results also agree with the quantum mechanics predictions, and satisfy the inequality, as expected, because they are not problematic.)

More details: http://en.wikipedia.org/wiki/Bells_theorem
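For the curious, the CHSH form of Bell's inequality can be checked numerically. The quantum prediction for a singlet pair measured at angles a and b is the standard textbook correlation E(a, b) = -cos(a - b); the angle choices below are the usual ones that maximize the violation. This is a direct calculation of the predicted value, not a simulation of an experiment.

```python
import math

# Any local hidden-variable theory must obey the CHSH inequality
# |S| <= 2, where S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
def E(a, b):
    """Quantum correlation for a singlet pair measured at angles a, b."""
    return -math.cos(a - b)

a, ap = 0.0, math.pi / 2               # Alice's two detector settings
b, bp = math.pi / 4, 3 * math.pi / 4   # Bob's two detector settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2: the local-hidden-variable bound breaks
```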


Entanglement is a lot more subtle / complex than that.

QM is so different from how we are used to dealing with things that the only really useful way to understand it is the math it's based on. But decoherence is the heart of a lot of the spooky voodoo that really trips people up, so it's a much better place to start than the 'cool' stuff, vs. trying to extrapolate based on the odd things that go on. http://en.wikipedia.org/wiki/Quantum_decoherence

PS: Also, avoid thinking at the large and small scales at the same time. Sensors really are just more fields and particles, like everything else. Also, decoherence is not an all-or-nothing event; it's more a sliding scale between wave and particle.


(X -> FTL) -> ~X


There's a saying, "Relativity, causality, FTL. Pick two."


Ok, as far as I can tell, this is all just a rehash of the EPR paradox and the weirdness of the Copenhagen interpretation of QM. The many-worlds interpretation resolves everything with full locality. http://en.wikipedia.org/wiki/Many-worlds_interpretation

Just consider: I take a black marble and a white marble in hand, shake them up, put them in two boxes so I don't know which marble is in which box, and put the two boxes on opposite sides of the world. Then, when I open one box, I instantly learn the state of the marble in the other. The information is "transmitted" to me "faster than the speed of light", but only because the actual switch event happened in the past. Now, as far as I can tell, QM entanglement is pretty much the same thing as these boxes. The only weird thing is the standard interpretation (not the standard theory, since it is not necessary for any meaningful measurement): the standard interpretation is that in QM, the equivalent of our marble switch happens when the final measurement happens.

There's nothing in any of this research that involves any concrete evidence whatsoever that faster-than-light information transmittal happens (feel free to counter with references), but as far as I've read, all the "transmittal" is "transmittal" in the sense above.
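The marble picture is easy to make concrete. This is a plain hidden-variable model (the names and RNG seed are just illustrative); the thread's discussion of Bell's theorem is about why real entanglement can't be only this.

```python
import random

# The marble picture: the "switch" happens at packing time, so opening
# one box is ordinary conditional probability, with nothing nonlocal.
def pack_boxes(rng):
    marbles = ["white", "black"]
    rng.shuffle(marbles)                  # the switch event, in the past
    return {"box_A": marbles[0], "box_B": marbles[1]}

rng = random.Random(7)
boxes = pack_boxes(rng)
opened = boxes["box_A"]                   # look in one box...
inferred = "black" if opened == "white" else "white"
print(inferred == boxes["box_B"])         # True: we learned, not signalled
```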


Your description is wildly incomplete. What you've just described is merely ordinary probability theory.

To see the effects of QM, build a device to flip a quantum-mechanical coin. If it comes up heads, put the marbles into the two boxes. If it comes up tails, put the marbles into the two boxes in a slightly (but unmeasurably) different way. I.e., whichever procedure you chose, P(white in right)=0.5.

However, when you actually mix in the coin flip, the white marble always ends up in the right box.

This cannot happen in ordinary probability theory, since:

    P(white in right) = P(white in right|heads)P(heads) + P(white in right|tails)P(tails) = 0.25 + 0.25 = 0.5
Quantum mechanics isn't weird because it has many worlds. Ordinary probability theory does too. Quantum mechanics is weird because those worlds interfere with each other.
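A numerical caricature of that difference (the amplitude values are illustrative, not derived from any specific setup): classically the branch probabilities add, while quantum amplitudes add with signs and can interfere constructively or destructively.

```python
import math

# Classical mixing: probabilities of the two branches simply add.
p_heads = p_tails = 0.5
p_white_right_classical = p_heads * 0.5 + p_tails * 0.5   # = 0.5

# Quantum superposition: AMPLITUDES add, and they carry signs.
# Same-sign branches for "white in right" interfere constructively:
a = 1 / math.sqrt(2)
amp_right = a * a + a * a
p_white_right_quantum = amp_right ** 2    # -> 1.0 (constructive)

# Opposite-sign branches for "white in left" cancel exactly:
amp_left = a * a - a * a
p_white_left_quantum = amp_left ** 2      # -> 0.0 (destructive)

print(p_white_right_classical, p_white_right_quantum, p_white_left_quantum)
```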


Sure, I never claimed to describe all of QM.

However, while there are many amazing effects there, I don't see how your particular example of QM weirdness (or others) relates to the information-transmittal-at-a-distance part. I.e., the interference part isn't directly related to entanglement at a distance, except that it provides a box you can't look into.


What you are describing is a "hidden variable theory", but it has been shown (See "Bell's Theorem") such a theory cannot be true. QM is stranger than that.


Bell's theorem only shows that local hidden variable theories cannot be true. Non-local hidden variable theories, such as Bohm, are still possible.


I'm not saying QM's mechanisms are the same as boxing marbles. I'm just saying that separating entangled particles isn't any more interesting than boxing marbles, i.e., all the modifications that are going to interfere with each other can be seen as already having happened, which is why QM doesn't allow you to transmit information faster than light.


Fair enough, if it's just an analogy I agree it gives the right intuition.

To be clear, what really happened under MWI is that the wavefunction "branched" into two around the time you created the two particles, and the two branches separated from each other in configuration space just as the two marbles in your analogy separated in physical space. When you did the measurement, you merely realized which branch you are in.

Where the analogy fails is that the probabilities in QM are inconsistent with there actually having been some hidden choice of marbles made beforehand.


This Google Talk from last year provides an argument and interpretation of this that's fairly understandable with a programmer's mindset:

http://www.youtube.com/watch?v=dEaecUuEqfc&sns=em


Could you explain why your example isn't just a hidden-variable explanation, which, as we know from measurements of Bell's inequality ( http://en.wikipedia.org/wiki/Bell%27s_theorem ), cannot yield a theory compatible with both locality and the predictions of quantum mechanics?


The example is a hidden variable explanation, simply because you cannot do metaphors for quantum physics using classical physics.

Perhaps another approach will be clearer:

Imagine magical filter goggles, of two types: red and blue. People wearing red goggles can only see other people wearing red goggles, people wearing blue goggles can only see other people wearing blue goggles.

You put a pair of red goggles and a pair of blue goggles into each box, then have four people pick a pair of goggles each.

The result is that each person only sees one other person, near the second box, wearing the same colour goggles.

This is more or less how many-worlds works, except instead of both red and blue goggles in the box you have a single pair of goggles that is both red and blue, but not at the same time (decoherence), and thus instead of two people wearing different kind of goggles, you have a single person wearing goggles that are both blue and red.


One thing I've never understood about interpretation of quantum entanglement experiments:

Based on what I've learned of these experiments, it seems to me that the least mind-bending interpretation is that the entanglement event results in two particles that have some complementary property k (i.e. one particle has k and the other has ~k) and which one of the two, k or ~k, is had by one of the particles is unknowable without collapsing the quantum state. However, both particles have the property from their inception, and so no faster-than-light or non-local interpretation should be needed. When the particles are separated to great distance and one is observed to collapse to k, the other must be observed to collapse to ~k because it was always a ~k particle.

Clearly, there's some aspect of the experiments that I don't understand that invalidates this simple interpretation, but I do not know what it is.


If you're willing to read through it, there's a Wikipedia article on Bell's theorem: http://en.wikipedia.org/wiki/Bell%27s_theorem

The summary: If you assume simple locality (your k and ~k,) you get different results than what quantum entanglement predicts.

If you want to get rid of the spooky action at a distance, then you need much, much more complicated hidden variables.


Physicist here. NegativeK is correct. Bell's theorem is the key to understanding why your explanation doesn't work. Bell's theorem, and the associated experimental demonstration, has been called the most profound discovery of science. It's well worth a few hours of your time to try to understand it. You don't need to know any quantum mechanics or advanced math to do so.

Incidentally, I'll mention that your argument is essentially what Einstein et al were arguing for in their famous EPR paper. The fact that it's even possible to rule out local hidden variable theories is highly surprising, and is what makes Bell's discovery (30 years after EPR framed the problem) so monumental.


Non-physicist here. What do you think of Caroline Thompson's work, like:

"Chaotic Ball" model,local realism and the Bell test loopholes

http://arxiv.org/abs/quant-ph/0210150

...I've always wondered if much of the Einstein-was-wrong business is really a category error. I personally liked:

Clearing up Mysteries - The Original Goal http://bayes.wustl.edu/etj/articles/cmystery.pdf

" ...we must keep in mind that Einstein's thinking is always on the ontological level; the purpose of the EPR argument was to show that the QM state vector cannot be a representation of the "real physical situation" of a system. Bohr had never claimed that it was, although his strange way of expressing himself often led others to think that he was claiming this.

From his reply to EPR, we find that Bohr's position was like this:

"You may decide, of your own free will, which experiment to do. If you do experiment E1 you will get result R1. If you do E2 you will get R2. Since it is fundamentally impossible to do both on the same system, and the present theory correctly predicts the results of either, how can you say that the theory is incomplete? What more can one ask of a theory?"

While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form: "What is really happening when - - - ?", forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built."


Based merely on the abstract, I'd guess that I wouldn't find Thompson's paper very interesting. I'm glad there's someone worrying about possible loopholes in Bell experiments (just like I'm glad there are physicists working on GRW and astronomers looking out for killer asteroids), but I don't think there's anything to find there.

As far as the philosophy goes, I'd say most people's frustration from Bohr's positivist position is actually due to the real flaw with Copenhagen, which is the ambiguity bundled up into the word "measurement". If you eliminate this ambiguity (which I think is being done incrementally through the study of decoherence), you'll find that a positivist position is actually a lot more attractive. At least, that was how my feelings evolved on the issue.


Nobody seems to like my theory that the programmers of the simulation we're all living in were deferring procedurally generating that level of detail until it was necessary because we'd actually examined it closely.

They're probably still cursing our Hubble telescope, not to mention computers with their decreasing semiconductor scales.

This comment probably just triggered their simulation-awareness detector. :-)

Actually, I think there's a silly but fascinating SciFi short story waiting to be written about assembling a group of people to try to privilege-escalate out of the simulation without triggering the simulation-awareness detectors. "I need you to think about this, but not all at the same time, or in the same place. Oh, and try to think about it only in vague terms."


Your theory reminds me of Fassbinder's 1973 film Welt am Draht (World on a Wire). The technical director of a simulated world project slowly realizes he lives within a larger simulation.

https://en.wikipedia.org/wiki/Welt_am_Draht


Your theory reminds me of Fassbinder's 1973 film Welt am Draht ("World on a Wire"). The technical director of a simulated world project slowly realizes he lives within a larger simulation. He tries to outwit the simulation's programmers, as they deactivate or reprogram his friends and co-workers to stop him.

https://en.wikipedia.org/wiki/Welt_am_Draht


doh! I replied to my own post instead of editing it in place. This is getting pretty meta. <:)


That's because the simulation is telling everyone to reject your theory and ridicule you for presenting it.


(Replying to my own comment as basically every response raised Bell's Theorem)

This is what I like about Hacker News; when I ask a deeply technical question in a field removed from my own, there are plenty of people who can point me in an interesting direction.

Bell's Theorem was the aspect of observational measurement that I was missing, and it's absolutely fascinating. Even moreso in that it was envisioned mathematically before we had the technology to prod at its claims. Thank you everyone for showing it to me.


This article[1] by David Mermin does a good job explaining why the second particle couldn't have always been a ~k particle.

[1] http://www.phy.duke.edu/undergraduate/physics-articles/mermi...


PDF


The way I've usually seen this one presented involves precisely the observation, in that you gain knowledge of the other particle. Of course, saying "because it was always a ~k particle" does not a good explanation make, because that would imply that the resolution of the measurement was somehow predetermined, which is a fancy way of saying that there's a hidden variable.

Not that interpreting all of this to mean that physics is non-local "spooky-action-at-a-distance" is the only viable route, mind.

Consider this: why, exactly, do you believe that when you measure the particle you somehow force it to enter one particular state and therefore the entangled one that's sitting X miles away suddenly enters the opposite one? Are we, humans, sitting outside of quantum mechanics and looking down upon it - and then what we observe is the one true way the world is?

Why would you not, instead, when you measure the particle, entangle the measuring device, and yourself, with the state of the particle? You are, after all, only another part of quantum mechanics the same as anything else.


> [...] which is a fancy way of saying that there's a hidden variable.

I'd say it's a less fancy (and more specific) way of saying "hidden variable." I read the parent as saying, "I don't understand why there can't be a hidden variable" which is quite a different thing than saying "this can work without a hidden variable: [thing involving hidden variable]".


I agree with you. And furthermore, wouldn't entanglement itself be some sort of evidence that the particle does have an internal state even before being observed?

For instance, I could phrase things like this:

- Let particles A and B get into an entangled state

- Measure the state of particle A, see that it has property k.

- Predict that B has property ~k even though it's not being measured yet.

- Measure B. If B has property ~k, then we've proven that B had an internal state before we performed the measurement.

From all the "popular" descriptions of "quantum entanglement", I don't see anything in it that's significantly different from things like, say, the conservation of momentum in Newtonian physics.

Of course, I must be missing something, but I don't know what is that thing yet.


"If B has property ~k, then we've proven that B had an internal state before we performed the measurement."

This part is the flaw in the logic. That doesn't prove that B has an internal state. All it proves is that, in some sense, B's state is related to A (assuming you repeat the experiment until you reach statistical significance).

Entanglement is nothing more than saying one particle is connected, in some sense, to another. Like a see-saw: if one side goes up, the other must go down. It's only "confusing" in the sense that the particles are physically separated, and the link between them is not mediated by a force as we understand it. Instead, they're linked by their very nature. It's almost as if you have just one particle, that's somehow been split into two and spatially separated. That's the conceptually difficult part.


Why not choose the mundane and least astonishing explanation?

If an object A is moving (with a known momentum) and hits a stationary object B, we can measure the momentum of either A or B and instantly know the momentum of the other object without having to measure it. The two objects are somehow "connected"; their state (momentum) is "entangled".

Why is quantum entanglement different?


Because that momentum transfer is mediated by a fundamental force, typically the electromagnetic force. Entanglement is a property of the particle, and is not mediated by a force: the collapse of the wavefunction of one particle doesn't /cause/ the collapse of the other, rather they both collapse simultaneously.


Because we can measure a difference empirically http://en.wikipedia.org/wiki/Bell%27s_theorem#Practical_expe...


"However, both particles have the property from their inception, and so no faster-than-light or non-local interpretation should be needed."

This is incorrect, both particles have a superposition of k and ~k (to use your terminology). It is not the case that when the wavefunction collapses, we just suddenly find out the "internal state" of the particle. This "internal state" simply does not exist until the wavefunction collapse has occurred.

The "no faster than light" doctrine refers specifically to the transfer of information. And no information, in the strictest sense, has been transferred. Think of it like this: try to imagine how you could communicate with someone using quantum entanglement. If you can communicate something, then information has been transferred. But you can't. All you know is that if your particle is k, then theirs is ~k.

Now, you could make up a rule book that says "if my particle is k, then I'll do x, and you'll do y" but that rule book will have to have been shared between both parties before hand, at less than the speed of light.

Edit: should just mention I'm not an expert here, and articles like this are always interesting. Finding loopholes in our conventional understanding. But it's important to know just /what/ our conventional understanding is first to realise why things like this are important.


Even with such a preshared rule book you wouldn't get any FTL communication. You could create a pair of random number generators this way which will always give the same result. Certainly useful for cryptographic purposes, but sadly not for direct communication.
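A minimal sketch of that cryptographic use, treating the perfectly correlated outcomes as a pre-shared one-time pad. An ordinary seeded RNG stands in for the entangled measurements here; the point is that the pad itself carries no message, so the ciphertext still has to travel at <= c.

```python
import random

# Pre-shared random bits (stand-in for correlated measurement outcomes).
rng = random.Random(42)
pad = [rng.randint(0, 1) for _ in range(8)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = [m ^ k for m, k in zip(message, pad)]   # Alice XORs with the pad
decoded = [c ^ k for c, k in zip(cipher, pad)]   # Bob XORs with his copy

print(decoded == message)  # True -- secrecy, yes; FTL signalling, no
```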


Agreed; that's what I was trying to say. Perhaps it's disingenuous to frame the problem in this way (that is, talking about rule books and the like), but I was just trying to make it more relatable to normal human experience.


> both particles have a superposition of k and ~k (to use your terminology). ... This "internal state" simply does not exist until the wavefunction collapse has occurred.

What would the implications be if particles did have an internal state?


It would be a violation of Bell's theorem as we currently understand it. In other words, it can't happen, unless our understanding is wrong. (For more info, the wiki on Bell's theorem has been linked to elsewhere in this thread.)


But when you "collapse the wavefunction" and "create" the internal state of the particle, do you also create the internal state of the other one that is entangled to it? Do you create it instantaneously?

You see, the problem is not whether you can transfer information in human readable form (though if you could that would certainly be a huge problem with relativity!), but whether any effect that propagates faster than the speed of light exists.

You'll have a hard time explaining that in the frame of wavefunction collapse, I think.


In the usual experiment, if the two electron-spin detectors for the two entangled particles are tilted the same way, they always get opposite results. But if you tilt them differently, you no longer get results that are exactly opposite each other all the time. Therefore the electrons are in a superposition until they hit the detectors. It isn't decided yet what they will be.

Keep in mind we're working with quantized values. Spin is always completely up or down, no matter what angle you measure it from. Measuring it at that angle locks it down, but tilting another detector later in the chain means you go back to a non-linear probability, calculated by quantum mechanics (a cosine of the angle in this case), of it being up or down at the new tilt, which then locks it down to that new tilt. There isn't any pre-determined value the electrons could be set to beforehand that matches the observed number that deflect up or down in the detectors at all measurement angles.
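To make the "no pre-determined value" point concrete, here's a small sketch (my own, not from the article) using the quantum correlation for a spin singlet, E(a, b) = -cos(a - b). Any pre-assigned local values are bounded by the CHSH inequality |S| <= 2; the quantum prediction exceeds it:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation between spin detectors tilted at
    # angles a and b, for a singlet pair.
    return -math.cos(a - b)

# The standard angle choices that maximize the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```

Any table of pre-determined up/down values the electrons carried with them would give |S| at most 2, which is the content of Bell's theorem.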


I believe you're talking about "hidden variables" and the argument against it is Bell's Theorem http://en.wikipedia.org/wiki/Bell%27s_theorem

I'm not a physicist and I don't really understand it fully, but I believe this is what you're talking about.



Thank you for asking this so well. I have been keeping the page hanging around in a tab, waiting until the answers showed up!


Can someone provide or point to a layman's definition of "realism" in quantum mechanics?

Recently on a call-in podcast I listen to, a caller has claimed that the following article shows that a) realism has been disproven and b) that proves we are all in a computer simulation: http://physicsworld.com/cws/article/news/2007/apr/20/quantum...

>Now physicists from Austria claim to have performed an experiment that rules out a broad class of hidden-variables theories that focus on realism -- giving the uneasy consequence that reality does not exist when we are not observing it

Sure. Anyway, I'm certain this guy is equivocating, but I don't even know where to read up on the relevant topics. And I read articles like these and they keep using the word realism without defining it.


Philosophically (as written by Einstein, Heisenberg, etc.) there are three assumptions about the world - of which one is broken by the quantum world. We do not know which one. Depending on your interpretation of the Schrodinger equation you can interpret the breaking in any of the three ways. This article says it's realism - but as far as I know there is no way to prove it's realism and not one of the others. One of these cannot hold:

- Realism, Locality, or Separability

Realism - Einstein's idea of 'elements of reality' - the idea that things actually are material and unchanging. Their properties are constant unless acted upon (i.e. a hydrogen atom at time1 and point1 has a constancy of properties at time2 and point2). You could imagine a universe that had, as a property of the universe, atoms changing mass every other day - just for the hell of it.

Locality - The idea that a system can only be affected by things within its lightcone - no effect can travel faster than the speed of light. And two objects must be 'local' before they can interact.

Separability - the idea that any systems separated by non-trivial space/time can be considered independent of each other. The idea that 1/2mv^2 does not include every other part of the universe in trying to figure out what m or v are. Not quite the same as locality - as it would still require local interactions, but that net of interactions could be so thick and pervasive as to prevent any meaningful isolation of any system.

As is stated above, these concepts fall out of the EPR experiment.


Thanks for the explanation. I was still confused about how "things actually are material and unchanging" led to "reality does not exist when we are not observing it."

You pointed me in the right direction though and I'm convinced that the term is being abused by the caller:

>Realism in the sense used by physicists does not equate to realism in metaphysics. The latter is the claim that the world is in some sense mind-independent

http://en.wikipedia.org/wiki/Local_realism#Local_realism

http://arxiv.org/pdf/quant-ph/0607057v2.pdf

If papers are being written about how "local realism" should be banned from scientific discussion, I'm fairly certain it isn't appropriate for laymen to bandy about the term and then make wild ass philosophical assertions based on it.


That's interesting. At first glance, it seems like we invented Separability just to make things easier for us. In my head I'd always assumed that forces diminish continuously, but I guess that's what I get for thinking of force as something different from a particle. If all force is particles, I suppose it makes sense.


Do you have any sources for the separability part? From what I remember, the three assumptions should be realism (the universe would exist even if no one would 'observe' it), locality, or counter-factual definiteness - the ability to talk and reason about results of measurements that have not been performed.


Search this article - it explains much of the original rationale:

http://plato.stanford.edu/entries/qt-epr/


As an aside, congrats to Ars for including a link to the paper (and a DOI) in the article. They always do this, and it's really something that needs to be encouraged elsewhere, especially in newspapers. I think it's fair to say if you don't link to the original paper, what you do shouldn't be counted as "science journalism."


This article completely ignored the Many Worlds Interpretation, which preserves all of realism, locality, and Relativity.


Yeah. It shouldn't be that surprising that when you arbitrarily assume that part of the wavefunction magically disappears sometimes, you get weird results like non-locality.


It's always amusing to see the kind of excuses that pop up when you try to explain entanglement or interference in the mindset of wavefunction collapse.

I'm still not entirely sure why so many people consider collapse to be simpler. Is it too daunting to think of the world as an amplitude distribution that propagates in a way that we're not intuitively used to? Too hard to think of ourselves as part of quantum mechanics, because that thing I see there on the measuring device must be the reality, damn it, and what the hell do you mean I've just entangled myself and the device along with the system?

Or maybe just tradition and accepting the "scripture" coming from the established authority. How could we best test that?


The Bohm interpretation has a universal wavefunction and non-locality, though.


Could you expand on this a little more? I'm not quite sure I follow.


The many worlds interpretation is based on the assumption that both the observed system and the observer are modeled by a wavefunction. In that case observation is entanglement, which seems weird to us, but which makes perfect logical sense.

Other interpretations of quantum mechanics start with the assumption that the world is fundamentally the way our naive experience leads us to think of it. Therefore when we observe something, we couldn't really be thrown into a superposition of states, each of which observed a different thing and which cannot meaningfully interact due to quantum mechanical principles. If we can't be thrown into such a superposition, then there must be some sort of underlying reality, or quantum collapse, or other weirdness that is not explained by QM.

All of the weirdness of interpretations other than many worlds can be understood as the conflict between how they would like to understand the world (there is a reality that happened), and how quantum mechanics describes things.

All of the weirdness of the many worlds interpretation can be understood as, "We can't perceive the process of quantum superposition, so it seems really, really weird to us."


Let's say you have some experiment which gives you a 1 or a 0, and you have two entangled particles which will give the same result despite the result being random and the measurements being taken outside each other's lightcones. In Collapse QM there was some instantaneous communication making that possible.

In Many Worlds you'd find that for each measurement the wavefunction split into two branches, one corresponding to 1 and one to 0, in each location. From the sites of both measurements the decoherence spreads outwards as particle interactions happen, and when they finally touch the two 1 segments and the two 0 segments merge, and there is one area split along 1/0 in the wave function instead of two.

That's all horribly simplified, but should get the idea across.
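For what it's worth, here's a toy sketch of the branch bookkeeping described above (my own illustration, not anything from the article): branches are plain dicts of (site, outcome), and when the spreading records finally touch, only the consistent branches merge.

```python
# Hypothetical branch records after each local measurement splits the
# local description in two. Amplitudes are ignored in this toy model.
left_branches = [{"left": 0}, {"left": 1}]     # after the left measurement
right_branches = [{"right": 0}, {"right": 1}]  # after the right measurement

# When decoherence spreading from the two sites meets, branches combine.
# Entanglement means only matching outcomes are consistent, so four
# candidate combinations collapse to two merged branches.
merged = [
    {**l, **r}
    for l in left_branches
    for r in right_branches
    if l["left"] == r["right"]  # the entanglement constraint
]
print(merged)  # [{'left': 0, 'right': 0}, {'left': 1, 'right': 1}]
```

The point of the toy model: the constraint is only ever applied when the two records meet, so nothing has to travel faster than light.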


Can you explain how MWI is local? In the period of time before the observer is 'entangled' into the same system as the observed particles, the different possibilities can interfere with each other. If this can happen even if the entangled particles are separated by a lot of space, surely that is the same as nonlocality.

Maybe my confusion is that I'm not sure at which point the universe splits. Suppose I entangle two particles, then separate them by a large amount. My universe hasn't split yet, because before I make the measurement, I haven't become entangled with the state of the particles, but when I do, surely the spreading entanglement of myself with the possibilities centers at both entangled particles (i.e. it is nonlocal)?


> Maybe my confusion is that I'm not sure at which point the universe splits.

The answer to that is that the universe never really splits. It remains one big very complicated universe, with very very very messy superpositions of state in it. But different components of these superpositions lose "coherence", which gives the appearance to us of wave collapse, even though there actually is no wave collapse.

Look up "decoherence" for more info on this.


The splitting is a quantitative rather than a qualitative process. You can observe quantum effects with 100-atom molecules the same way as with single particles; it's just that they require far finer observations to detect. If no collapse occurs, by the time entangled complexes reach visible scales there's no way you could ever detect the quantum interference.

It isn't that the universe has split, it's that one section of the universe's wavefunction has split, or more exactly become a bimodal distribution. At both locations there are bunches of the wavefunction corresponding to both possible observations - but both locations are entangled since the two particles that touched this off were. That means that when you observe the two experiments you don't "split" into four, but only into two since they are already entangled and you only get entangled with this complex once. The information of "What's entangled with what" spread at strictly sub-light speeds, and the information of "Did it pass through the polarizing filter or not" also traveled at sub-light speeds.


Yes, but this complex is growing the whole time, as it entangles with other parts of the universe around it in both locations. That means that I become entangled with it in one location, and simultaneously become entangled with some other part of the universe that is not remotely local to me.

That's why I find the description of it as 'local' somewhat weird, but I accept that I'm probably using the term incorrectly.


Let me try to explain myself better:

Two particles are entangled and separated by a vast distance.

At some point, I conduct a measurement on the first particle, causing me to split into two different versions of myself that have measured two different states. Now, the effect of that measurement spreads outwards from the site of the measurement, so I may affect other things based on the measurement, etc.

My problem is that if a measurement is done on the other particle, long before any message from me could possibly have got there, then that measurement and the effects it has on its local environment and the effects they have are all also dependent on my measurement.

I imagine this as a wave of universe splitting spreading out from the site of my measurement and the entangled particle despite the separation.

Now, perhaps this is a nontechnical use of the word nonlocal, but I'd describe that as a nonlocal phenomenon.


> My problem is that if a measurement is done on the other particle, long before any message from me could possibly have got there, then that measurement and the effects it has on its local environment and the effects they have are all also dependent on my measurement.

Imagine that your friend Joe writes the same word on two pieces of paper, puts them in envelopes, and hands one envelope to you and mails the other envelope to Paris. At some point in time, you open up your envelope and read the word "popcorn". At some other point in time, Pierre opens up his envelope and reads the same word.

The word that Pierre reads is not "dependent on your measurement". Nothing nonlocal occurred.


That's a hidden variable theory.


No, it's an analogy, not a theory. There are no hidden variables in the Many Worlds Interpretation, but it will appear to us as if there are.

The Bohm Interpretation is the experimentally indistinguishable dual of MWI that proposes hidden variables rather than the absence of wave collapse.

Despite the hidden variables in the Bohm Interpretation, there is nothing wrong with it, modulo Occam's razor.


Sure, I used the word 'theory' incorrectly, I should have said 'explanation'.

As I understand it, hidden variables are fine if you're happy to throw away locality, which is exactly what Bohm interpretation does and exactly what we're talking about here.

Since your analogy is purportedly showing me how MWI is local, showing me that something that is significantly different (i.e. is a hidden variable explanation) is also local doesn't help a lot.


Here's a more thorough explanation:

http://plato.stanford.edu/entries/qm-manyworlds/


then that measurement and the effects it has on its local environment and the effects they have are all also dependent on my measurement.

I think that's your trouble there. Those are totally independent of your measurement, the waveform splits at the measurement of the other particle the exact same way, regardless of what happened to you locally.


except I can never be in a universe where it split at my measurement site in one way and it split at the other measurement site the other way. That means that I am entangled with a measurement that happens nonlocally to me and everything that it interacts with.


You're not entangled with the measurement, you're entangled with the entire wave despite the inconceivable number of measurements that happen within it. The same way my friends are still my friends despite us being in different places. I won't know instantly if they die or something, but I react differently when I hear about what happens to them than I do with ordinary people. If one of them gets into a car accident outside my light-cone and I'm more sad when I hear about it than about the car accident of a stranger, that doesn't mean that I have a non-local interaction with the car accident.


Correct me if I'm wrong, but I think that the Many Worlds Interpretation does not explain a thing. It's like religion - one can think it's possible, another that it's impossible, but it can't be proved or disproved.


I'm pretty sure that, in theory, all the major interpretations of QM are experimentally distinguishable from each other. The feasibility of ever being able to perform some of these experiments, however, is rather questionable.

Two exceptions to this are (1) that the Many Worlds Interpretation is experimentally indistinguishable from the Bohm Interpretation, unless you want to commit suicide many times with a quantum revolver; and (2) the Copenhagen Interpretation doesn't define what a "measurement" is, so scientifically determining whether it's correct or not is impossible.

None of this implies that any of this is "like religion".


Be specific. What experiment do you have in mind that could distinguish between interpretations? Note that if the "experiment" involves dying to see what's on the other side then it doesn't exactly make it seem less like a religion.


Well, all the theories of quantum mechanics that rely on waveform collapse predict that waveform collapse happens, while those that don't, don't. Observing that larger and larger objects still have tinier and tinier but still existent quantum effects is evidence against collapse. And if you can demonstrate quantum effects in people, then you've pretty much ruled out the Copenhagen and von Neumann interpretations. Those experiments are inconceivably difficult with our current level of technology, though.


So you're saying, if Quantum Mechanics turns out to correctly describe larger and larger systems--in other words, unless we discover some new physics which makes QM break at a certain scale--then your pet interpretation will turn out to be true?


No, I'm saying that if Schrödinger's equation describes larger and larger systems that means that Quantum Mechanics is Schrödinger's equation rather than being Schrödinger's equation plus waveform collapse. Maybe you could try reading through the Interpretations of Quantum Mechanics Wikipedia page[1] to get a better sense of the issues being talked about here? Whether or not waveforms collapse at a certain scale is precisely the most important issue of disagreement here, and it actually is subject to experiment in theory. And experiment could probably even distinguish between the various families of waveform collapse too. Not that this will distinguish between all interpretations, but it would at least cut the possible number in half.

Remember, waveform collapse or "some new physics which makes QM break at a certain scale" was actually the assumption of the people who started quantum physics, and the default assumption of popular writing including the article we're discussing. It wasn't until much later that Everett proposed that it might be an unnecessary hypothesis like Maxwell's aether.

[1]http://en.wikipedia.org/wiki/Interpretations_of_quantum_mech...


You might as well say that the Bohm interpretation is "like religion" then, rather than saying that the MWI is.

If your assertion is that we might never know which is true, the Many Worlds Interpretation or the Bohm Interpretation, then you might be right. But, if it comes down to these two, most scientists are going to go with the Many Worlds Interpretation by Occam's razor.

There are many other theories that we reject by Occam's razor that we can't disprove, and yet we don't typically consider these rejections to be a matter of "religion".


You might as well say that the Bohm interpretation is "like religion" then, rather than saying that the MWI is.

I might, except MWI clearly has more True Believers.

If your assertion is that we might never know which is true

My assertion is that since all interpretations lead to the same exact predictions for every imaginable experiment, it doesn't make sense to say that in any meaningful way one interpretation is true and the others aren't. You might as well be arguing whether the electromagnetic field is weaved by tiny angels out of their hair or not.

The Copenhagen interpretation is the closest to "no interpretation" in that it's the most direct way of translating the math into actual predictions.

Quoting the guy who said you shouldn't multiply entities without necessity, to justify the introduction of infinitely many parallel worlds, is an interesting rhetorical maneuver.


> My assertion is that since all interpretations lead to the same exact predictions for every imaginable experiment

This is not the case. Other than MWI and Bohm, the major interpretations (not including Copenhagen) make different predictions that are, in theory, distinguishable. It's just that it is completely infeasible to perform the experiments at this time. Maybe in 100 years or in 1,000 years, we'll have the technology to perform these experiments.

When the time comes that we can perform these experiments, one of the possible outcomes is that we narrow it down to MWI/Bohm. Once that happens, however, we will have no way to scientifically determine which of the two is the correct interpretation, except to the degree that we trust our intuitions about Ockham's razor. But that is certainly not going to let us know the answer for sure.

> The Copenhagen interpretation is the closest to "no interpretation" in that it's the most direct way of translating the math into actual predictions.

If we ever want to actually do experiments to determine under precisely which situation the probability wave collapses, then we have to do better than the Copenhagen Interpretation, since it doesn't define what a "measurement" is. Without such a definition, there's no way to test whether it is correct or not.

> Quoting the guy who said you shouldn't multiply entities without necessity, to justify the introduction of infinitely many parallel worlds, is an interesting rhetorical maneuver.

It's not a "rhetorical maneuver". I'll just refer you to the Stanford Encyclopedia of Philosophy for more info:

It seems that the majority of the opponents of the MWI reject it because, for them, introducing a very large number of worlds that we do not see is an extreme violation of Ockham's principle: "Entities are not to be multiplied beyond necessity". However, in judging physical theories one could reasonably argue that one should not multiply physical laws beyond necessity either (such a version of Ockham's Razor has been applied in the past), and in this respect the MWI is the most economical theory. Indeed, it has all the laws of the standard quantum theory, but without the collapse postulate, the most problematic of physical laws. The MWI is also more economic than Bohmian mechanics which has in addition the ontology of the particle trajectories and the laws which give their evolution. Tipler 1986 (p. 208) has presented an effective analogy with the criticism of Copernican theory on the grounds of Ockham's razor.


This is not the case. Other than MWI and Bohm, the major interpretations (not including Copenhagen) make different predictions that are, in theory, distinguishable. It's just that it is completely infeasible to perform the experiments at this time. Maybe in 100 years or in 1,000 years, we'll have the technology to perform these experiments.

So, what exactly are those experiments that allegedly could distinguish between interpretations?


Also, this is a marvel of a sentence: "Other than MWI and Bohm, the major interpretations (not including Copenhagen)"

"Other than Android and Windows Mobile, all major cell phone operating systems (not including iOS)"

The long and short of it is, for purely philosophical reasons you don't like the notion of the state vector collapse. You freely admit that there is no way to experimentally distinguish between your favorite interpretation and the Copenhagen interpretation. You just declare that it's not even a contender, using arguments which have nothing to do even in principle with the outcome of any experiments.

Saying that something is not precisely defined sounds to me totally like grasping at straws. Nothing's ever precisely defined in science; you could criticize any theory, including Newton's mechanics, by saying that it doesn't define precisely what a measurement is. Which never stopped anyone from measuring things and comparing the values they measured with what the theory predicted.

You quote someone who made an analogy with Copernicus. The Copernican theory simplified the calculations right away, whereas with Quantum Mechanics, the calculations stay exactly the same no matter what story you feel like telling yourself so that you can take the outcome of these calculations and compare them with the real world.

Let me make this clear that I'm not against MWI. I care about MWI exactly as much as about the Copenhagen interpretation (which is not very much.) I am however opposed to pretending that one of the two exactly equivalent ways of saying something is "more true" than another.

Staying within Quantum Mechanics, there are two ways of writing the equations of motion: the Heisenberg picture and the Schroedinger picture. In the former the state vector is constant but the operators are a function of time; in the latter the operators are constant and the state vector evolves with time. The two formulations are equivalent; sometimes it is convenient to use one or the other for a specific calculation, and often you use a mix of both (the so-called interaction picture). Nobody argues that, say, the Heisenberg picture is "really true" as opposed to the Schroedinger picture. If someone did, that would be inane, even if they invoked Copernicus and Occam (even though the analogy with Copernicus would maybe be better, since the calculations actually are different depending on which picture you choose).
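For reference, the equivalence of the two pictures boils down to the standard identity for expectation values (sketched from memory, with U(t) the time-evolution operator):

```latex
% Schrödinger picture: the state evolves, the operator A is fixed:
%   |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \quad U(t) = e^{-iHt/\hbar}
% Heisenberg picture: the state is fixed, the operator evolves:
%   A(t) = U^\dagger(t)\, A\, U(t)
% Every measurable prediction is an expectation value, and those agree:
\langle \psi(t) | A | \psi(t) \rangle
  = \langle \psi(0) |\, U^\dagger(t)\, A\, U(t) \,| \psi(0) \rangle
  = \langle \psi(0) | A(t) | \psi(0) \rangle
```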


> The long and short of it is, for purely philosophical reasons you don't like the notion of the state vector collapse.

That's not the long and the short of it. I didn't say anything of the sort.

You ramble on, apparently oblivious to the fact that different interpretations of QM can and do make different predictions about what causes wave-function collapse. E.g., GRW makes different predictions from Penrose's interpretation. Why you are oblivious to this fact, I cannot fathom, as I've mentioned this fact several times now.

   http://en.wikipedia.org/wiki/Ghirardi-Rimini-Weber_theory

   http://en.wikipedia.org/wiki/Penrose_interpretation
In theory, someday we'll be able to design experiments that determine which one of these interpretations, if either, is correct.

So, as you should be able to see, this is in no way comparable to different forms for equations that make identical predictions.

As to the measurement problem

   http://en.wikipedia.org/wiki/Measurement_problem
Newtonian mechanics has no such measurement problem, so it is you who are grasping at straws. If a QM Interpretation is going to be considered a complete theory, it has to answer certain questions. For instance, if we were to train cats to operate quantum measurement devices and to perform experiments for us, and we consequently determined that measurements performed by cats did not cause wave function collapse, would that confirm or deny the Copenhagen Interpretation?

A. Neither, because the Copenhagen Interpretation doesn't define "measurement", and so we have no idea whether or not measurement by cats counts as "measurement".

If you don't like cats, substitute in nanoscale molecular robots instead.


I'm not familiar with GRW, but Penrose's theory is obviously more than an interpretation; he postulates new physics which is experimentally testable. The MWI does nothing of the sort, so they're not at all similar.

Looks like you are trying to confuse the issue: to deny that QM interpretations are not experimentally testable, you bring into discussion a bunch of things which are not interpretations, but actual scientific theories, and call them "interpretations." You might as well argue that the ancient Greek religion was testable (you could climb Mount Olympus and see whether Zeus was there or not), therefore religions are testable, therefore the existence of God is a scientific fact.

Regardless of the wonderful qualities of the GRW theory and the Penrose interpretation, the fact stands that there is no experimental way, even in principle, to distinguish between MWI and Copenhagen. If you teach cats or mice or West Highland Terriers to perform quantum mechanical experiments, they will still not be able to differentiate experimentally between MWI and Copenhagen. And you will not be able to determine whether measurements performed by cats cause wave function collapse, because you will have to observe the cat to ask him what he observed.


> I'm not familiar with GRW, Penrose's theory is obviously more than an interpretation

I have no desire to get into a debate on terminology. I've just been using the terminology that was taught to me in an entire semester-long class I took at MIT on QM and its various "interpretations". GRW was called an "interpretation", as was Penrose's. The term "interpretation" is also the term used on the Wikipedia page for "Penrose's Interpretation".

> the fact stands that there is no experimental way, even in principle, to distinguish between MWI and Copenhagen.

That is not a fact; your assertion is false:

     http://en.wikipedia.org/wiki/Many-worlds_interpretation#Comparative_properties_and_possible_experimental_tests
See the text starting at "However, in 1985 David Deutsch published three related thought experiments which could test the theory vs the Copenhagen interpretation."


You know what, this is interesting. I would love to be proven wrong.

I searched for that paper but couldn't find it, nor any description of the experiment it proposes. I wrote an email to professor Deutsch asking him to send me a pdf copy. I will get back to you if/when he responds.

I still kind of suspect that the paper will make some assumptions that will effectively mean "if <magic> then we could test the MWI in the following way: ...".


He has a book. I haven't read it, but I'm sure he must discuss this issue in it.

In any case, I'm sure you can come up with a definition of "measurement" that might make the Copenhagen Interpretation experimentally indistinguishable from MWI, but why? MWI is and always will be a simpler theory, and thus preferred by Occam's razor.

The problem with the Copenhagen Interpretation is that it is NOT a scientific theory. It is not a scientific theory because it is not falsifiable. It also builds non-fundamental things like "measurement" right into the fundamental laws of physics, which is absurd. Copenhagen is not falsifiable, because if I were to attempt to falsify it by demonstrating that a "measurement" did not cause the wave function to collapse, you could always assert that I had used the wrong definition of "measurement".

GRW, on the other hand, can be seen as a sub-interpretation of the Copenhagen Interpretation because it rigorously defines the term "measurement". I.e., entanglement of the particles in question with a "large enough" collection of additional particles. "Large enough" here needs some experimental tuning, but some day we may be able to perform these experiments and attempt to falsify GRW. Because we can falsify GRW, it IS a scientific theory.

Most other collapse interpretations that I have heard of can likewise be seen as sub-interpretations of Copenhagen, in that they define "measurement".

One of those sub-interpretations is the Wigner Interpretation, or the "consciousness causes collapse" interpretation. Counter to your previous assertion, we could in theory experimentally determine whether this is true via trained rats: (1) Train a rat to perform measurements, (2) kill the rat before it has a chance to tell you the results, (3) check to see whether wave function has collapsed. We can do this because there are experiments that will tell you if two particles are entangled or not.

Animal consciousness not good enough for "measurement"; it needs to be human? Okay, Nazis could in theory perform this experiment, as could future evil alien overlords.

Now let's go back to MWI: I think that many people have a misconception about MWI, and perhaps this is due to its name. (If we had stuck to the name "Everett Interpretation" perhaps that would have been better.) MWI doesn't really imply multiple worlds. It implies one very complicated superimposed world. It is also the simplest theory, as it does not add the complication of wave function collapse. Furthermore, it is completely consistent with every bit of data that has ever been collected.

Another misconception is that MWI asserts that the "other worlds" that fall out of it are "real". This is not the case. MWI is agnostic on this issue. For instance, Stephen Hawking is in favor of MWI, but he doesn't like the name, because he thinks that asserting that the "other worlds" are "real", rather than just mathematical artifacts of the theory, is not something that we can scientifically know.

Executive summary: MWI is the simplest theory, and is consistent with all data. By Occam's razor, we are required to give this theory preference until we have evidence that contradicts it.

The counter argument to the above is that the ontological cost of all these many worlds (or maybe even the complicated superpositions of state) is too great, and that this somehow violates Occam's razor.

Well, first of all it doesn't, since Occam's razor these days is almost always taken to prefer the SIMPLEST THEORY, regardless of additional philosophical worries like, "It's just creepy to think that there might be so many other worlds."

Furthermore, this objection is based on a misinterpretation of MWI. MWI is completely agnostic about the ontological status of these "other worlds". It's just a mathematical formulation for making scientific predictions. There are many cases in the history of science where "creepy" things fall out of the math, if we were to grant them the status of being ontologically "real", and yet we don't reject the theories because of this. E.g., virtual particles and advanced waves. Sometimes scientists at some point decide that mathematical artifacts of theories are "real". E.g., virtual particles. And at other times, they remain just mathematical artifacts. E.g. (maybe), advanced waves.

Are the other worlds in MWI "real"? You tell me! Science cannot answer that question. This does not imply that MWI isn't the best theory.


OK, I got hold of the paper, and it's a very nice read; I can email it to you if you want. It's from a talk David Deutsch gave at some conference; I bet he is a very entertaining speaker. My summary won't do it justice, but anyway. The experiments are the following:

Experiment 1:

Measure the current time and call it t1. Note that you are conscious at the time you are observing the value t1. Wait. Check your watch again. It is showing a different time t2 now. You are still conscious and your consciousness is in a different state than at t1. Therefore you've detected experimentally a superposition of distinct states of human consciousness, since there exists a formulation of Quantum Mechanics in which time is a regular operator like any other observable and you've observed two different values of it.

Experiment 2:

Consider a computer which is so well isolated that interference can be observed between its computational states. The computer is programmed to perform an algorithm which takes one bit of input, and produces one bit of output. The algorithm is very computationally expensive and takes a long time T to complete. We communicate with the computer via two observables, I (for input) and O (for output).

Prepare the initial state as a superposition of both input values 1/sqrt(2) * (|I=0> + |I=1>). After the time T, the computer will be in the state 1/sqrt(2) (|I=0,O=f(0)> + |I=1,O=f(1)>), where f(n) is the output the algorithm produces for the input n. So by measuring I and O at this point we will either learn the value f(0) or f(1), but not both. But say we are really interested in f(0) XOR f(1). Classically, it's impossible to calculate it without computing both f(0) and f(1), so it has to take the time 2T. But with our computer, which is in a quantum superposition of states, and with the help of some clever algebra, we can construct another observable R. When we measure R, one of two things happens with equal probability: either we get the correct value of f(0) XOR f(1), or we lose any hope of learning it from our system. We know which one happened, i.e., with probability 0.5 we will have the correct value for f(0) XOR f(1) and we will know for sure it is correct.

Since we obtained f(0) XOR f(1) in half the time, clearly there existed two parallel worlds in which two versions of the computer calculated f(0) and f(1).

(It is noted that some members of the audience objected that Experiment 2 is conceptually no different from the two-slit interference experiment. The author allows that this may be so in a sense, since indeed, the two-slit experiment alone should be enough to make it obvious that Everett's interpretation is right; however, Experiment 2 makes it even more obvious.)
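For what it's worth, Experiment 2 is essentially Deutsch's 1985 algorithm. Its modern refinement (which succeeds every time rather than half the time, using a clever choice of initial state) can be simulated in a few lines of numpy; the oracle construction and function names below are my own sketch, not anything from the talk:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    # U_f |x, y> = |x, y XOR f(x)>, written as a 4x4 permutation matrix
    # (basis ordering |00>, |01>, |10>, |11>)
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    # prepare |0>|1>, Hadamard both qubits, query the oracle ONCE,
    # Hadamard the input qubit, then read the input qubit
    state = np.kron(H @ np.array([1.0, 0.0]), H @ np.array([0.0, 1.0]))
    state = oracle(f) @ state
    state = np.kron(H, I2) @ state
    p1 = state[2] ** 2 + state[3] ** 2   # probability the input qubit reads 1
    return int(round(p1))                # ... which equals f(0) XOR f(1)

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    assert deutsch(f) == f(0) ^ f(1)
print("one oracle query suffices for f(0) XOR f(1)")
```

One call to the oracle, which classically computes only one of f(0) or f(1), is enough to learn their XOR.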

Experiment 3 (simplified version):

Consider a system consisting of a spin 1/2 particle and a quantum computer running a simulation of human consciousness. Prepare the spin in the state |→> = 1/sqrt(2) (|↑>+|↓>), i.e., measuring the spin along the x axis will always show it's pointing to the right, which means that measuring the spin along the z axis may give up or down with equal probabilities. Have the conscious being in the computer measure the spin along the z axis and communicate to the outside world the fact that he/she observed one of the values 'up' or 'down' (without saying which one). Then undo all the transitions the combined system underwent, i.e., revert it to the original state (which is in principle possible for a system consisting of a quantum computer and a microscopic system). Then measure the spin of the particle along the x axis. If it shows 'right' every time (we need to repeat the whole procedure many times), then the Everett interpretation must be true, since otherwise the fact that a conscious being observed 'up' or 'down' would have caused the particle to collapse to a state in which 'left' and 'right' are equally likely.

(The original formulation of Experiment 3 was more complicated, with three spins, not one, and some more clever algebra, the purpose of which, if I understand correctly, is to prove that it's possible to communicate to the outside world the fact that a measurement along the z axis was taken, without losing the ability to revert the system to the original state.)
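Here's a toy numpy sketch of the simplified Experiment 3, with the conscious observer replaced by a single memory qubit and the "undoing" done by reapplying the same unitary (CNOT is its own inverse); the variable names are mine:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right = (up + down) / np.sqrt(2)   # the |right> state, written in the z basis

# a unitary "observation": CNOT copies the spin's z value into a memory qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.kron(right, up)   # spin (x) observer memory
state = CNOT @ state         # the observer "measures" along z ...
state = CNOT @ state         # ... and the record is undone

# probability of 'right' when the spin is now measured along x
P_right = np.outer(right, right)
p_reverted = state @ np.kron(P_right, np.eye(2)) @ state
print(round(float(p_reverted), 6))   # 1.0: the reversal restores |right>

# had the z measurement truly collapsed the spin, it would be an equal
# mixture of up and down, and 'right' would come up only half the time
p_collapsed = 0.5 * (right @ up) ** 2 + 0.5 * (right @ down) ** 2
print(round(float(p_collapsed), 6))  # 0.5
```

Of course, this only shows the arithmetic of the prediction; the hard part of the real experiment is keeping a system complex enough to count as an observer this well isolated.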

So, there. I'll let you form your own judgement.

> Counter to your previous assertion, we could in theory experimentally determine whether this is true via trained rats: (1) Train a rat to perform measurements, (2) kill the rat before it has a chance to tell you the results, (3) check to see whether the wave function has collapsed.

It doesn't work that way. Even if the rat doesn't cause the wave function to collapse, it interacts with the system and causes a transition from the original state to a superposition of states, one for each value the rat might have gotten from the measurement. Each of these states individually looks exactly as if the rat had collapsed the wave function, with coefficients such that the probabilities for each value come out right. And killing the rat afterwards does not undo it. So you will observe that the wave function has collapsed. Same if you use a mechanical detector in place of a rat.


Thanks muchly for that excellent summary!

Yes, I would like copy of the paper. Please send it to doug at alum dot mit dot edu. Thanks!

> And killing the rat afterwards does not undo it.

Yes, sorry; it's been a long long time since I've thought about this sort of stuff in any detail. This is what I should have written:

Train a rat to perform measurements on an observable with two possible outcomes and have it press lever A for outcome A and lever B for outcome B. Put the rat into a sealed box to perform the measurement. A dial on the outside of the box will read either A or B once the rat has performed the experiment and recorded the result.

You can now come up with a complex observable on the whole system, i.e., the original observable being measured by the rat, plus the rat, and the box, that will give two different results on different occasions if the rat did not collapse the wave, but will always give you the same result if the rat did collapse the wave.

The problem with this complex observable is that for it to work, you must consider every molecule in the rat, every molecule of air it interacts with, etc., etc., etc. Miss a single molecule and the results are randomized.

Are we ever going to be up to this task? Not any time soon! But it could be child's play for the aforementioned evil alien overlords.

One complication, I can imagine, is that for this to work, you'd need to have a perfect model of the rat's biology and cognition in order to come up with the right observable. In the face of not yet being sure how collapse works, this might be very difficult. But then again, I'm sure that evil alien overlords are up to the task.

David Albert talks about doing these sorts of experiments on p. 88 of Quantum Mechanics and Experience, and it was this that I was thinking of. Only Albert's examples don't have a trained rat, but rather other, simpler measuring equipment for which we are trying to determine whether or not it causes collapse.

As for Deutsch's three experiments, it looks like Experiment 3 has two huge advantages over my trained rat system: (1) Since it's all contained inside a quantum computer, it seems a lot more feasible without the help of aliens. (2) If I understand correctly, the reversal stage means that you end up with a very simple observable, rather than the unfeasibly complex observable that you would need for my trained rat system.

As for Experiment 1 and Experiment 2, I don't understand #1 at the moment. And #2 seems to me so obvious as to go without saying. But it doesn't seem to prove anything that we didn't already know. Of course an uncollapsed wave can compute more than a collapsed wave!


I want to reiterate here that MWI is a much simpler theory than Bohm's, even if the consequences of MWI might seem more complicated. At times, there have been scientists who want to apply Occam's razor to the consequences, but most scientists these days would apply Occam's razor to the theory itself.


BTW, while I'm not entirely sure what this article is saying, here's one of the best explanations of "Quantum" I've ever seen. Unlike most, it doesn't start out from physics, but from a mathematical/probability framework ... that allows for negative probabilities. Yeah.

http://www.scottaaronson.com/democritus/lec9.html

I'll quote a bit:

    -----------
So, what is quantum mechanics? Even though it was discovered by physicists, it's not a physical theory in the same sense as electromagnetism or general relativity. In the usual "hierarchy of sciences" -- with biology at the top, then chemistry, then physics, then math -- quantum mechanics sits at a level between math and physics that I don't know a good name for. Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn't yet been successfully ported to this particular OS). There's even a word for taking a physical theory and porting it to this OS: "to quantize."

But if quantum mechanics isn't physics in the usual sense -- if it's not about matter, or energy, or waves, or particles -- then what is it about? From my perspective, it's about information and probabilities and observables, and how they relate to each other.

    Ray Laflamme: That's very much a computer-science point of view.

    Scott: Yes, it is.
My contention in this lecture is the following: Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let's try to generalize it so that the numbers we used to call "probabilities" can be negative numbers. As such, the theory could have been invented by mathematicians in the 19th century without any input from experiment. It wasn't, but it could have been.

    -----------
Greatly recommended. I can't say I understand it all, but the concept of a "qubit" and why it's so different from a regular bit is a lot clearer to me.
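To make Aaronson's point concrete: a classical stochastic process preserves the 1-norm of a probability vector and can never un-mix a coin flip, while a unitary process preserves the 2-norm, and the negative amplitudes let it interfere back to a definite state. A quick sketch (standard numpy; none of this is from the lecture itself):

```python
import numpy as np

# classical coin flip: a stochastic matrix acting on a probability vector;
# flipping twice still leaves you maximally uncertain
F = np.array([[0.5, 0.5], [0.5, 0.5]])
p = F @ F @ np.array([1.0, 0.0])
print(p)                  # [0.5 0.5]

# qubit: a unitary (Hadamard) acting on an amplitude vector; the negative
# amplitude lets the second Hadamard interfere the state back to |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
a = H @ H @ np.array([1.0, 0.0])
print(np.abs(a) ** 2)     # [1. 0.]
```

That "un-flipping" is the whole difference between a qubit and a noisy bit.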



Looks like you're a firm believer, so I ask, is MWI by your definition falsifiable?

Do any experiments past, current or future purport to test predictions made from MWI?

While MWI may make sense in a universal reading of QM, I'm not sure we can simply extrapolate this onto the macro level, despite its pragmatic successes over the last century. IMHO, QM itself still feels like a temporary hack (a shim) when compared alongside ER.

Until we have a TOE, it may be too presumptuous to be making positive assertive claims on the correlation between the MWI model and reality.



> If decoherence is "untestable" relative to collapse, then so too, collapse is "untestable" relative to decoherence.

Of course! Nobody questions that. The two are equivalent. That means, neither is more true than the other. You just happen to be in the minority which feels that thinking about multiple worlds makes their head ache less than thinking about the state vector reduction.

> What if the history of physics had transpired differently—what if Hugh Everett and John Wheeler had stood in the place of Bohr and Heisenberg, and vice versa?

My guess is, we wouldn't have had to wait until 1957 for someone to say: "Fuck that noise, let's just forget about the multiple worlds and pretend the wave function collapses when I make a measurement." They would be like, "All right buddy, I'm sure you're totally right that there are really multiple worlds and all that, I'll just act like it's merely a wave function collapse for a moment, just to get on with my work of actually doing something as a physicist." My guess is that would happen within the year.


I've always found classical Newtonian physics, the Special Theory of Relativity, and (to a lesser extent) the General Theory of Relativity to be understandable at an intuitive level, but quantum phenomena just baffles me, even when I think I "understand" it.

PS. IvoDankolov: I find it difficult or impossible to think intuitively about quantum phenomena.


Let me make it worse.

Ever heard of chaotic systems? Any dynamic system that shows extreme sensitivity to initial conditions is chaotic. For instance smoke curling from a cigarette, planetary orbits over time, weather, water dripping from a tap - all of these show extreme sensitivity to initial conditions.

Here is the fun thing. In quantum mechanics everything evolves linearly. Therefore extreme sensitivity to initial conditions is entirely impossible. We only think that we observe that. Yet the world is full of cases where we can demonstrate such sensitivity!

See http://www.iqc.ca/publications/tutorials/chaos.pdf for some of the attempts to reconcile observed classical facts with what we think are true quantum truths.


Thanks. A perfect example of the outright weirdness of quantum phenomena.

I read the first section of that paper ("Why quantum chaos?"), and its proposition makes no sense to me. None other than Poincare PROVED (proved!) ages ago that the motion and position of just three little points attracted to each other according to Newton's formulas are extremely sensitive to infinitesimal changes in their initial motions and positions (i.e., the system is chaotic in the classical sense). And as you point out, the world is full of cases that demonstrate such hyper-sensitivity. Yet, according to this paper, such a system is impossible in nature. I don't get it.

Instinctively, I have to believe there must be a more fundamental underlying explanation for this and other apparent contradictions... it's just that at the moment no one knows what this explanation might be.


>Here is the fun thing. In quantum mechanics everything evolves linearly. Therefore extreme sensitivity to initial conditions is entirely impossible

I'm not sure if that is true. You can have dense sets created by Linear Operators if you're in an infinite-dimensional setting (which you _are_ in QM). That's what Hypercyclic Operators are.

In any case, although the wave function evolves linearly, the _square_ of the wavefunction obviously does not (by the product rule). Since all observations are going to be based on the square of the wavefunction, or the product of it with its gradient, all observable quantities will typically evolve nonlinearly.


Don't hit yourself too hard, even Feynman said once: "I think I can safely say that nobody understands quantum mechanics."


In what terms do you think you "understand" it?

What do you make of this problem with distant entangled particles? The double slit experiment and interference in general? The Heisenberg uncertainty principle? (Or as I'd like to call it, Heisenberg's horribly mislabeled-in-order-to-confuse-students principle)

A shot in the dark: many of the problems with coming to terms with quantum mechanics arise from trying to impose on it that it should somehow behave like classical mechanics, or that somehow we humans stand above it and look down upon it (and heaven forbid that we're part of a quantum system).


Exactly, we humans try to learn new things by relating to what we already know. This leads to trouble when we encounter new subjects that have no connection to previous experiences, because we have a hard time relating to it. However, the problem with people finding QM esoteric and weird is a bit more nuanced than that.

In most of the other areas of knowledge we can make progress because they do resemble reality in ways that we are familiar with. For example, rotation in classical mechanics or special relativity can be somewhat confusing to a beginner, but if we think a little bit deeper than we are used to, then we can see that the results make sense and match what we experience. And also this experience is consistent at different scales. Now you move to QM, and the first thing that hits you is that reality "breaks" after some point in the size scale; beyond that point everything is different, with uncertainties, probabilities and a bunch of other odd properties. Learning QM is an exercise in mind-stretching, even for the most capable of us. For me the problem is not that it is different, but why it is different. Why do we have such a gap between the large scale and the small scale? That is what baffles me.


Quantum mechanics is very simple and intuitive, after you remove the annoying physics (and the long history of misunderstandings that courses in the subject seem duty-bound to cover) from the mathematics. See http://www.scottaaronson.com/democritus/default.html


Here's the paper in question:

http://arxiv.org/pdf/1110.3795.pdf

(Man, I love arxiv.org. It's so nice.)


Here's the original paper:

http://arxiv.org/abs/1110.3795

[pdf] http://arxiv.org/pdf/1110.3795v1

So let's clear up some misconceptions: First, this is not a problem specific to Copenhagen or any other interpretation of QM. Second, it is not just the EPR paradox. Third, it is not the first time someone has proposed a system utilizing quantum entanglement for superluminal communication; that title goes to Karl Popper:

http://en.wikipedia.org/wiki/Poppers_experiment

Popper's experiment was a very clever way to get around the no-communication theorem. It was eventually performed, but no FTL communication could be observed [some suggest we need to keep looking]. I presume that this new paper is another clever way to avoid the NCT, but I don't have time to read it at the moment.


If you all want a good, intuitive explanation of quantum mechanics that really captures the "deep" aspects of the subject (as opposed to the shallow, rote memorization style of McQuarrie, et al.), see:

http://lesswrong.com/lw/r5/the_quantum_physics_sequence/

It's kind of weird and has some asides about consciousness and what not but other than that, it gives you a really good mindset for thinking about the subject.


Having read this very carefully, I can't see how quantum mechanics as I always understood it can ever be non-local or break relativity. The following was probably written down by someone else a long time ago, but I don't know by whom or when.

When you entangle two systems, they have to actually be in the same place, or at least within slower-than-light communication distance. Thus, they each store "secret" information in some deterministic way which is impossible to measure directly, and which we can probe only using repeated experiments and probability distributions. The idea is that while quantum systems are deterministic just like mechanical ones, they hide their determinism in black boxes whose mechanisms it is impossible to discover. The "uncollapsed waveform" has in fact been collapsed ahead of time; you just don't know what the answer is until you measure.

Back to our entangled particles, when you separate them the determined decision they made when they were together is revealed. They do not need to communicate, as they already have their story straight.

It seems this entire area of research is moot: if you let the prisoners talk to each other just after arrest, of course their stories will be consistent later, even if one is in Alcatraz and the other in Siberia. You just have to assume that particles so small we can only "see" them in the way they interact with other particles have some internal structure/state that we also can't see, and thus, like the prisoners, can remember the story.

(of course, declaring an entire area of research irrelevant is usually a sign that I've missed the point wildly - if anyone would like to correct my mistake I'd be interested to hear!)
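The prisoner picture can actually be simulated, and that is exactly what Bell's theorem tests. Here's a toy local-hidden-variable model (my own construction, not from any paper): each particle carries a shared angle lam, fixed at the source, and answers ±1 based on lam and its local detector setting alone. It reproduces perfect anti-correlation at equal settings, which is why the analogy is so tempting, but no model of this kind matches the quantum correlation -cos(a-b) at intermediate angles:

```python
import numpy as np

rng = np.random.default_rng(0)

def E(a, b, n=200_000):
    # shared hidden variable, fixed back when the particles were together
    lam = rng.uniform(0, 2 * np.pi, n)
    # each particle's +/-1 answer depends only on lam and its LOCAL setting
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return float(np.mean(A * B))

# same settings: the pre-agreed story gives perfect anti-correlation,
# exactly as quantum mechanics predicts for the singlet state
print(E(0.0, 0.0))                   # -1.0

# intermediate angles: this model gives E = -1 + 2|a-b|/pi, whereas QM
# gives -cos(a-b) (about -0.71 at 45 degrees); Bell-type experiments
# exploit precisely that gap
print(round(E(0.0, np.pi / 4), 2))   # about -0.5
```

So the "they agreed in advance" story explains the easy correlations but provably cannot reproduce all of them; that failure, not the perfect correlations, is the point of the research.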


Usual disclaimers: I am not a scientist and my knowledge of quantum mechanics is very minimal.

However... "Repeated experiments have verified that this works even when the measurements are performed more quickly than light could travel between the sites of measurement"

What interests me at this stage is not 'why' entanglement works, but rather how it can be used. The article indicates that repeated experiments have verified it appears to be a real process, where actions on one particle can influence another particle (seemingly) no matter how far away, so why can't we use it for communication? To use a simple example, if I encode data as movement in the 'sender' particle, and the movement can be monitored in the 'receiver' particle, then you can have a system for sending messages.

Seems to me the need to control the internal state of the particles to communicate may be unnecessary. Perhaps to explore further, can anyone here list the types of measurable phenomena that have been seen to be mirrored in entangled particles (the ones you know of, anyway)? Would particle spin be one example? Thanks!


Are you aware of any attempt to explain quantum superposition (and entanglement) on the information theory basis? Is it possible that superposition is an optimization that allows the Universe not to compute stuff that does not have any observable effect? Maybe classical Universe would be much more computationally expensive?



Not a physicist (theoretical or otherwise), but to me the introduction of the concept of "hidden variables" reminds me of the addition of Phlogiston to the body of knowledge.

Unless... they're using it as a placeholder for as-yet undiscovered ways that entangled particles can interact, and it's known that it's a placeholder.


"Hidden variables" have been proven to not be a satisfactory explanation for quantum mechanical effects for some time now.

http://en.wikipedia.org/wiki/Bell%27s_theorem

Though, I should add, what makes this paper interesting is that they try to find a loophole around Bell's theorem. This would be a big deal if true.


Local hidden variables. Bell's theorem does not address non-local variables.


So Satan was the angel that made reality work by using global variables. He was cast out by god for violating the coding standard. All the evil you see in the world is the result of errors caused by the use of global variables and a lack of sufficient testing prior to release that was the result of an unreasonable schedule that was pulled out of God's ass simply because he wanted to prove he could make the angels ship something in six days.

We need an update to Time Bandits.


Hm I don't think the issue with Phlogiston was that we didn't know what it was, but that rather than try to use the scientific method to make predictions about it, they just wrote down "when a, do b, Phlogiston does x".

In this case they are doing experiments where they expect the hidden variables to do a very specific job (transmit information) and are trying to test whether they do what they expect them to do or not.

That is science.


This reminds me of the Ansible, a communication device allowing instant or near instant faster-than-light communication. It is seen in many science fiction works and often justified as quantum entanglement in practice.

https://en.wikipedia.org/wiki/Ansible


I didn't realize that Ursula Le Guin coined that term. My first recollection of it was from Orson Scott Card and later, Dan Simmons. In any event, entanglement certainly doesn't look like it will lead to an ansible yet.


Yea, my first encounter was in Ender's Game. :)


I loved that book as a kid. It looks like they are finally going to release the movie version in a year from now. They've been teasing us with movie announcements for the better part of a decade.

I wonder if they'll directly mention quantum entanglement?

http://www.imdb.com/title/tt1731141/


The paper is behind a paywall, so I cannot see if the researchers addressed this already, but doesn't the Everett interpretation preserve locality and not need faster-than-light propagation? Are the results incompatible, or was it simply disregarded to make the paper look more impactful?


Bohmian mechanics [1] is a deterministic interpretation of QM that's empirically equivalent to traditional quantum theory. However, it requires non-locality (instantaneous action across vast distances).

From a recent paper [2]:

"Bohmian mechanics reproduces all statistical predictions of quantum mechanics, which ensures that entanglement cannot be used for superluminal signaling. However, individual Bohmian particles can experience superluminal influences."

This paper [3] referenced in the Ars Technica article shows that finite superluminal velocities (c < v < inf) can be exploited to achieve superluminal signalling.

Very interesting result. I assume BM is still consistent with this. The paper does mention BM:

"Bohmian mechanics and the collapse theory of Ghirardi, Rimini, and Weber [...] reproduce all tested quantum predictions, [however] they violate the principle of continuity mentioned above (otherwise they would not be compatible with no-signalling as our results imply)."

The principle of continuity described in the paper:

"In both cases, we expect the chain of events to satisfy a principle of continuity: that is, the idea that the physical carriers of causal influences propagate continuously through space."

"Clearly, one may ask whether infinite speed is a necessary ingredient to account for the correlations observed in Nature or whether a finite speed v, recovering a principle of continuity, is sufficient."

Actions can happen instantaneously under BM ("infinite speed"), so BM is still consistent with QM and doesn't allow FTL communication.

[1] http://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory

[2] http://arxiv.org/pdf/1207.2794.pdf

[3] http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys...


My grandma puts a donut in one of two boxes, closes both, gives one to each of us. You leave for Mars and, at an agreed-upon instant, I open my box. Now I know what is in your box, and you know what's in mine. No faster-than-light, no mystery.


I hate this article, because locality is an important property that we think reality does have.

It's like saying "Gravity violates conservation of energy!!!" No it doesn't; quit playing fast and loose with well-understood principles.


Locality in this sense refers to the presence of "hidden variables" as first put forth by Bohm and later suggested by other people. It was believed that the various CHSH, HOM, and other Bell-inequality-inspired experiments would have put a nail in the coffin, but critics always came back and said "but you didn't control for X." This experiment is just another in the long line of experiments started by Aspect and continued by others to put bounds on this.
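For concreteness, the CHSH quantity those experiments measure is easy to compute for the singlet state. This numpy sketch (mine, using the textbook-optimal detector angles) shows the quantum value 2*sqrt(2) exceeding the bound of 2 that any local-hidden-variable theory must obey:

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def spin(theta):
    # spin measurement along an axis at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # quantum correlation <A (x) B> in the singlet state: equals -cos(a - b)
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

# CHSH at the optimal settings: |S| = 2*sqrt(2) ~ 2.83, while any local
# hidden-variable theory satisfies |S| <= 2
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(round(abs(S), 3))   # 2.828
```

The experimental programs mentioned above are about measuring S in the lab while closing, one by one, the "you didn't control for X" loopholes.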


:/ The more I learn and read about quantum mechanics, the more I realize that FTL quantum communication seems to be an SF pipe dream, hoped for and imagined on the basis of some possible theories that now look less than likely. What a PITA. Ah well, maybe we can still use singularities or something to punch holes in space/time and shoot info through singularity routers :) Though that seems even farther away than making quantum mechanics do communication for us. Ah well.


The way that I best understand quantum entanglement (and maybe someone like jessriedel can correct me if I'm wrong) is thus:

You may be familiar with the Schrodinger equation (HΨ = iħ dΨ/dt) or the more accurate time-dependent Dirac equation. In each of these equations is a function called the wavefunction (denoted with Ψ). This function represents the "quantum state" of your system -- in other words, all the information that exists within a system. Ψ evolves deterministically in time. You can apply an operation to this function, and when you do, you get the original function back multiplied by some value. There are different operators, each corresponding to an "observable" (the thing you measure in the lab). For example, you can measure momentum, position, energy, spin etc... and each of these observables has a different corresponding mathematical operation that you perform on Ψ to get XΨ, where X is the mean value of the observable.

Now in QM textbooks, you'll frequently see Ψ written as Ψ(r, t), where r is a position vector and t is time. r denotes the position of whatever particle constitutes your system. So what if you have a two particle system like hydrogen (proton and electron) or positronium (positron and electron)? Well, a QM textbook will write the state of your quantum system as Ψ1 * Ψ2 and completely gloss over the fact that this approximation does not apply in all situations. It is physically inaccurate to say that Ψ of two particles is two individual Ψ's multiplied together. For some systems, it is a good approximation and makes things easy to calculate, but in reality, there is really only one wavefunction Ψ and it is a function of all the particles in the universe.

So now you can see where entanglement comes in. If Ψ is a wavefunction of all extant particles, then surely there will be correlations between every measurement of Ψ that you take! Now most of the time, you can't find correlations -- too many particles are affecting too many other particles (decoherence). But if you prepare two of them together and keep outside particles from interfering with them, then you can observe the correlations between two particles no matter what the distance!

So now for the whole "information transfer" business. I said earlier that applying an operator to Ψ gives you the value of an observable -- what you measure. The weird thing though is that what you measure isn't always exactly this value. Instead, the mean value of many measurements will be this value. You can also compute the standard deviation of these measurements using Ψ, but that's about it. Nobody knows where the random "noise" in measurements comes from. So far, it seems as though our universe just has some randomness inherent to it (and we're quickly ruling out all remaining superdeterministic theories; Gerard 't Hooft seems to be a hold-out: http://physics.stackexchange.com/questions/34217/why-do-peop...)

So you can't control or predict the individual measurements to as much precision as you'd like. Sucks, huh?

Anyway, you can plot and analyze this data, and what you'll notice for two entangled particles separated by thousands of miles or more is that there are statistical correlations between the two sets of data (again, data you can't control -- if you can't control it, you can't send information with it).

Obviously, you need both sets of data to notice that there are correlations.


Being requested by name is too ego-boosting to pass up, so let me take a crack at clarifying at least one apparent confusion.

> For example, you can measure momentum, position, energy, spin etc... and each of these observables has a different corresponding mathematical operation that you perform on Ψ to get XΨ, where X is the mean value of the observable...

>The weird thing though is that what you measure isn't always exactly this value. Instead, the mean value of many measurements will be this value. You can also compute the standard deviation of these measurements using Ψ, but that's about it.

A good QM textbook will actually say something much more precise. It says that (a) the set of possible outcomes of the measurement is equal to the spectrum of the observable being measured and (b) the chance of getting a particular outcome is given by the squared inner product of the wavefunction with the corresponding eigenvector. (The whole business of calculating means and standard deviations is confusing unless you understand that; unfortunately, this is allowed to happen often in intro QM courses.) This means that QM doesn't just predict some statistical properties of the outcome distribution, it completely specifies the distribution.
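Numerically, (a) and (b) look like this (a sketch I put together for illustration: measuring spin along x on a spin-up-along-z state):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])         # observable: spin along x
psi = np.array([1, 0])                       # state |0> (spin-up along z)

eigvals, eigvecs = np.linalg.eigh(sigma_x)   # (a) spectrum = possible outcomes
probs = np.abs(eigvecs.conj().T @ psi) ** 2  # (b) squared inner products with eigenvectors
for lam, p in zip(eigvals, probs):
    print(f"outcome {lam:+.0f} with probability {p:.2f}")  # each 0.50

mean = probs @ eigvals  # expectation value <psi|X|psi>, ~0 here -- the "mean value" above
print(f"mean outcome {mean:.2f}")
```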

Also, I figure you know this, but I want to mention that when you say

> you can plot and analyze this data, and what you'll notice for two entangled particles separated by thousands of miles or more is that there are statistical correlations between the two sets of data

it's important to emphasize that these are non-local correlations (in the Bell sense). You can generate mere local correlations using everyday classical systems.
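To put a number on "non-local in the Bell sense": for the singlet state the correlation at analyzer angles a, b is E(a, b) = -cos(a - b), and the CHSH combination of four such correlations reaches 2·sqrt(2) at the right angles, while any local-hidden-variable (classical) model is capped at 2. A quick sketch (my own, using the standard textbook angles):

```python
import math

def E(a, b):
    return -math.cos(a - b)  # singlet-state correlation at angles a, b

a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

chsh = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(chsh)  # ~2.828 (= 2*sqrt(2)), above the classical bound of 2
```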


Great, thanks for the clarifications!


I'm starting to believe that we confuse the mathematics of quantum mechanics with the actual underlying physical phenomena, to the point where we start to believe that "wave function collapse" is real, rather than just a way of looking at something via probabilities -- a model, an abstraction, an artifact of the math involved.


Take that one step farther and you'll arrive at the conclusion that if we take QM seriously, then there is never a collapse, and the many worlds interpretation is a logical consequence.


It would be interesting to see the result of an experiment where one electron in an entangled pair collides with a positron.

I suspect that this might give some interesting results.


That very adorable kitten is way too distracting, and made it very hard to focus on the article.


Better not tell this to Mitt Romney.


If you were able to construct a rigid beam 25 million miles long, wouldn't you be able to transmit data (push a button on the other end) faster than the speed of light?


It is impossible to construct a perfectly rigid beam, in the sense that you are thinking of. A "rigid beam" would be made up of matter and, as you push on one end, you would trigger a wave that would propagate from neighbour to neighbour until it reaches the other end.


Nope. Pushing on the end of the beam sends a pressure wave through the beam. The button gets pushed when the wave reaches the other end, and that wave travels significantly slower than light.
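Back-of-the-envelope numbers (my own rough figures for steel, not from the comment): the push travels at the speed of sound in the material, v = sqrt(E / ρ), which for steel is roughly 5 km/s.

```python
import math

E_steel = 200e9        # Young's modulus of steel, Pa (approximate)
rho_steel = 7850.0     # density of steel, kg/m^3 (approximate)
c = 299_792_458        # speed of light, m/s
beam = 25e6 * 1609.34  # 25 million miles in metres

v_sound = math.sqrt(E_steel / rho_steel)  # ~5,000 m/s
print(f"pressure wave speed: {v_sound:.0f} m/s")
print(f"light crosses the beam in {beam / c / 60:.0f} minutes")
print(f"the push arrives after {beam / v_sound / 86400:.0f} days")
```

So the "signal" through the beam loses the race by about three months.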


If you push on the beam and the other end does not move right away doesn't that imply that the beam has compressed and is not rigid?


> doesn't that imply that the beam has compressed and is not rigid?

Yes it does. And in fact that is the case: The beam has compressed. A fully rigid beam is impossible.


What you're proposing is an effect mediated by physical forces. Specifically, on a fundamental, atomic level your beam is made up of atoms. When you push it, you're bringing atoms closer together; they feel an electrostatic repulsion and pass the push along to their neighbours. And so the beam moves.

So, in the best case, this method of communication is equivalent to communicating with electromagnetic waves. And they have a speed limit: the speed of light :)

So this wouldn't be faster than light communication.


Even if you could create a rigid beam (ignoring the fact that it isn't possible), once you start turning it fast enough, its far end would approach the speed of light and its relativistic mass would grow. As the tip's speed approaches the speed of light, the mass of the beam approaches infinity.

Come to think of it, if you have a sufficiently long beam you won't be able to give it any appreciable angular velocity at all, regardless of how much force you apply. It will be all but fixed in space. Perfect starting point if your hobby is moving stars with a combination of chains and pulleys.
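Rough numbers on that (my own sketch): even at a leisurely one revolution per minute, the radius at which the tip would reach the speed of light is already well under 25 million miles.

```python
import math

c = 299_792_458          # speed of light, m/s
omega = 2 * math.pi / 60 # 1 rpm in rad/s
r_limit = c / omega      # radius at which the tip speed would equal c
print(f"{r_limit / 1609.34:.2e} miles")  # ~1.8 million miles, far short of 25 million
```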


It is possible to conceive of violating the known laws of physics if you start by conceiving of an object that violates the known laws of physics. A perfectly rigid beam 25 million miles long would be such an object.

A beam made of matter is not rigid (as we think of the word) on that scale, as others in the thread have pointed out.


Relativity says that you can't construct a rigid beam 25 million miles long.


I think that's going a little too far. If you can construct a "rigid beam" five feet long, I see no way that relativity countermands a "rigid beam" 25 million miles long. It's just that "rigid" doesn't mean what grandparent thinks it means -- info still travels through the beam via pressure waves.


The pressure waves are ultimately manifestations of the exclusion principle, expressed through forces that are constrained to travel at or below c.

So yes, relativity dictates that you can't create a rigid beam 25 molecules long, much less 25 million miles long. There can be no such thing as a perfectly rigid beam of any length.


> If you can construct a "rigid beam" five feet long

Except you can't. The existence of sound traveling through the beam proves beyond doubt that the beam is not rigid.


Exactly. You can only construct beams that seem rigid at some scale. A 2-foot piece of re-bar looks pretty rigid, until you look closer.


There is another explanation: http://discovermagazine.com/2010/apr/01-back-from-the-future

The future causing the past.


What?



