I'm sorry, but this video did not make anything clear to me. I really liked the visuals but there was absolutely no point in the video that I could find. I feel the narrators could have done a better job explaining what quantum computation IS, rather than going on about superposition. Are these parallel computations going on? What?
Here's a Java simulation I wrote about 10 years ago showing visually how a single qubit behaves under various operations. I didn't quite get to completing the two-qubit case (which is where entanglement enters the picture and things get more interesting).
Unfortunately, that video only makes sense if you already understand quantum mechanics or quantum computation.
The video doesn't really describe what quantum computation is. Instead, it describes which features of QM make quantum computers different from classical computers (superposition and quantum correlations) and why it is so difficult to build or use a quantum computer (decoherence, and the fact that you can't observe or clone the "registers" at intermediate stages in a computation).
The video addresses the notion of quantum computing quite obliquely. I would expect this to be the first of a multi-part series where this is explored in more depth (especially given the attention it's receiving).
It is - well, it's part 2 of an N-part series. Apparently they are being released as they are finished, the last one having come out about a month ago. The link is in the text a handful of pixels below the YouTube embedding.
Unfortunately, I was lost as well. I have no background in quantum mechanics, and I can't help but feel that was a prerequisite to appreciating this video. Sort of like a beautiful analogy: it falls apart if you don't understand the concept to begin with.
Anyone know the answer to this question?
OK... so, if multiple overlapping wave functions make up the concept of superposition, and we can measure the wave function at any given time, why can't we take enough measurements to formulate a wave theory that will predict where the wave will be at ANY given time?
> we can measure the wave function at any given time
We can't. Nobody knows how to measure a wavefunction. Instead, what you measure is one of the eigenvalues of a linear operator that acts on a wavefunction (which particular eigenvalue? You can't always predict this; hence the randomness in QM). For each "observable" you want to measure (position, momentum, spin), there is a specific linear operator associated with it. For instance, in position-space, the position operator is just multiplication by the position coordinate x.
If you haven't taken linear algebra, I can clarify this further.
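To make the linear-algebra picture concrete, here's a tiny Python sketch of the Born rule (the helper name `measurement_probs` is just illustrative): the probability of observing a given eigenvalue is the squared magnitude of the state's overlap with the corresponding eigenvector. For spin measured along x on a spin-up-along-z state, both outcomes come out at 50%.

```python
import math

# State "spin-up along z", written in the z-basis.
psi = [1 + 0j, 0 + 0j]

# Eigenvectors of the spin-x observable (the Pauli X matrix),
# keyed by their eigenvalues: +1 ("right") and -1 ("left").
inv_sqrt2 = 1 / math.sqrt(2)
eigvecs = {
    +1: [inv_sqrt2, inv_sqrt2],
    -1: [inv_sqrt2, -inv_sqrt2],
}

def measurement_probs(state, eigenvectors):
    """Born rule: P(eigenvalue) = |<eigenvector|state>|^2."""
    return {
        val: abs(sum(v.conjugate() * s for v, s in zip(vec, state))) ** 2
        for val, vec in eigenvectors.items()
    }

probs = measurement_probs(psi, eigvecs)
print(probs)  # both outcomes ~0.5
```

Which eigenvalue you actually get on a single shot is random; only these probabilities are predictable.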
I have a suspicion what you are trying to tell me is at the root of this. But, no I do not remember any complex math. Yes, I took multiple calculus classes in college, but I don't remember any of it. That was decades ago.
You don't measure the wave function, you measure the state of the underlying system (qubits or whatever).
When a measurement is made, the wave function, which gives the probability of finding the system in any particular state, "collapses" so all the probability mass is on the state you just measured.
If you keep measuring the system quickly (relative to the time it would take to change significantly), you actually stop it from evolving the way it would have if you'd left it alone - as if you dropped a ball, kept staring at it, and it dangled in the air instead of falling. This is known as the quantum Zeno effect.
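The arithmetic behind that "watched pot" behavior fits in a few lines. This toy Python sketch (the function name is my own) rotates a qubit from |0> toward |1> in small steps, projecting it back onto the {|0>, |1>} basis after each step:

```python
import math

def survival_probability(total_angle, n_measurements):
    """Rotate a qubit from |0> toward |1> by total_angle, but measure it
    in the {|0>, |1>} basis n_measurements times along the way.
    Returns the probability that every measurement found it still in |0>."""
    step = total_angle / n_measurements
    # A rotation by `step` takes |0> to cos(step/2)|0> + sin(step/2)|1>,
    # so each measurement finds |0> with probability cos^2(step/2).
    per_step = math.cos(step / 2) ** 2
    return per_step ** n_measurements

# Left alone (one measurement after a half-turn), the qubit always flips:
print(survival_probability(math.pi, 1))    # ~0.0
# Measured 100 times along the way, it almost always stays put:
print(survival_probability(math.pi, 100))  # ~0.975
```

As n_measurements grows, the survival probability approaches 1: frequent observation freezes the evolution.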
We can predict the behavior of a continuous signal, since it's an ongoing stream of similar events, and we can predict the averages we expect to see. But for a single, individual emission of one photon, when we provoke or stimulate the emission of that photon we can't be sure EXACTLY how the photon was emitted until it bombards the detector and changes the detector's state. THEN we can infer the original state from the detector.
The reason it's difficult to know the precise situation of a photon, in-flight, is because it's been emitted from a moving target, whose precise state is difficult to control perfectly: a live atom.
Without having perfect control of the atom that sends off the spinning, oscillating particle, we have a hard time discerning the exact circumstances of the spin/oscillation imparted upon it. We can hazard guesses, but when we're right, it's only because we get lucky. Not because we controlled it.
EDIT: The reason is there are two factors driving the emission:
1. The external behavior of the electron shells
2. The internal nuclear behavior of the protons and neutrons (which do not behave simply as a clump of spherical marbles; the nucleus is its own volatile, nebulous construct)
And, oh yeah...
3. Any other random cosmological events that might sail in from outer space (literally), as fortuitous signals
Well, that makes the most sense to me. But, I've always thought this is how the universe worked. Infinitesimal forces acting on a given outcome make precision prediction impossible. I still don't get how quantum mechanics is "spooky". Maybe without a math degree I never will.
Opening a box containing a cat, a vial of poison, and a radioactive sample instantly causes the cat to become either alive or dead, where before the cat was in a superposition of states, i.e. neither alive nor dead. Not spooky?
The thing that's "spooky" about quantum mechanics is that it is NOT just infinitesimal forces causing precision to be impossible. That's more chaos than quantum.
Here's a bit from my own research. I start with a neutron whose spin is pointed up. I then use magnets to bend the neutron's trajectory depending on whether it's spinning to the left or the right. Since the spinning in the up state is a linear combination of spinning in the left or right states, it just so happens that I don't know whether the neutron bends to the left or the right.
For most of the neutrons, that's all I do. Another set of magnets pulls the beams back together. They get to the end of the instrument and I measure whether they're spinning up. Since I haven't done anything besides split the beam, they're all still spinning up.
However, for a small fraction of the neutrons, I measure which way they bent. I now know whether it is spinning left or spinning right. Let's say it's spinning left. Well, just as spinning up is a linear combination of spinning left and right, spinning left is a linear combination of spinning up and down. Since I know that the beam is spinning left, I no longer know whether it's spinning up or down. When I measure this neutron at the end of the instrument, it only has a fifty percent chance of pointing up.
This isn't a couple of photons hitting a particle and changing the velocity slightly. I've gone from all of my neutrons pointing up to only half of them pointing up.
You might immediately assume that the problem is that I did something to the neutron when I measured whether it was pointing left or right. The "spooky" thing about this is that I didn't do a single thing to that neutron to measure which way it was spinning. The way I perform the measurement is that, after I split the neutron beam, I put a wall in one of the beams. The other path, however, is just the same as it was before. I haven't touched any of these neutrons. Each of these neutrons goes through the exact same fields and paths as they did in the experiment where all the neutrons were pointed up, except now half of them are pointed down.
Quantum mechanics doesn't just say that you have to touch something to measure it and that this will affect the outcome. It says that performing a measurement will affect the outcome, even if you don't touch it. That's where some people get spooked.
To preemptively answer a couple of questions:
1) No, you can't use magnets to bend neutron beams based on the Lorentz force. However, the neutron does have a magnetic moment, which produces a potential in a magnetic field. This leads to an index of refraction change that causes a slight angular splitting of the neutron beam. Nothing ever gets separated by more than a micron.
2) No, I don't really put in a wall to measure just the left-spinning neutrons. There's a solid material whose density fluctuations on the micron length scale serve as my effective wall. Thus, I don't actually get half the neutrons flipping at the end of the experiment, but rather a fraction proportional to how much information I get about the left/right spin from my single wall.
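The statistics of the idealized version of this experiment (though not, of course, the mechanism) can be caricatured with classical coin flips, because measuring in a basis that's unbiased with respect to the state is a fresh 50/50 draw. A hypothetical Python sketch:

```python
import random

random.seed(0)

def final_up_fraction(n, measure_left_right):
    """Fraction of n neutrons measured spin-up at the end of the beamline."""
    ups = 0
    for _ in range(n):
        if measure_left_right:
            # Collapsing to left or right erases the up/down information,
            # so the final up/down measurement is a fresh 50/50 draw.
            ups += random.random() < 0.5
        else:
            # Beam split and recombined untouched: still exactly spin-up.
            ups += 1
    return ups / n

print(final_up_fraction(10000, measure_left_right=False))  # 1.0
print(final_up_fraction(10000, measure_left_right=True))   # close to 0.5
```

The coin-flip picture reproduces the counts, but it hides the spooky part: classically you'd have to poke the neutron to flip it, whereas here the which-way measurement alone changes the statistics.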
Mostly, there are a number of unresolved questions which seem to defy measurement and experimentation. So instead of getting elbow-deep in real-world experiments that reliably recreate outcomes (experiments that predict what will happen next when carrying out certain steps), many practitioners of physics have had to spend entire careers "at the drawing board," concocting mathematical theories on paper that "add up" to the unexpected, unreliable, confusing things that happen in the real world - things that even vacuums, clean rooms, and massive radiation shielding (sometimes in the form of miles-deep mine shafts beneath entire mountain ranges) fail to exert control over.
Quantum Non-locality seems (seems) to produce scenarios where two particles are stimulated using ostensibly unrelated mechanisms, but then also seem to exhibit cause-and-effect behavior where influencing one particle inexplicably influences the other particle, even though they may be (presumptively) incapable of influencing each other at the point in time and space when the correlating behavior is observed.
How did particle A affect particle B? Did you touch it?
What happened? No one touched it?!
This kind of interference pattern holds up even at atomic scales, when diffracting x-rays through crystalline samples. Recently, an experiment has been developed which may finally explain this behavior:
...but this would suggest that there are unseen, unmeasuable forces (from parallel dimensions?) influencing our particles in this universe, at all times, and in unexpected ways. If this is true, what does it all mean? Is the sky falling?! Who knows?!
These sorts of unexplained behaviors made physicists very nervous in the first half of the 20th century: maybe attempts to build city-destroying weapons would have horrible, unpredictable triggers, or uncontrollable, world-destroying, runaway chain-reaction outcomes, never to be understood (not unlike the black-hole fear-mongering surrounding the LHC's publicity). Keep in mind that these ideas were kicking around before an atom bomb was anything other than science fiction. The fears and impatience often strike a chord like Dr. Emmett L. Brown's fear of a universe-destroying temporal paradox in Back to the Future.
The video really doesn't do a good job of explaining how a quantum computer can work. They just hit the basic high points because this video is intended for a general rather than specific audience.
A qubit exists in multiple states until measured. When you entangle multiple qubits together, each one increases the power of the system exponentially. Quantum computers are only good for certain kinds of problems, like factoring large numbers into primes. Luckily, quantum systems are good at things classical computers aren't, and vice versa. You have to state your "question" in such a way that it causes the overall state of the quantum system to resolve on your answer.
I know my answer isn't much better. I only have a high level understanding with none of the math behind it. There's good videos if you search and watch a few times to get the concepts.
I can't remember the exact scenario that was explained to me, but it was something like this. Imagine the Traveling Salesman problem. You want the best route. But with a regular computer, it might take thousands of years to calculate the best route. With a quantum computer, if you structure the measurement correctly (this is how you "program" a quantum computer) then only the best route is what the system decoheres to.
There was also something about light and how if you had the right filter, only the answer gets through. I really forgot the details of that one, but maybe it'll help.
To substantiate your point: Essentially, quantum computing is useful for problems that could take help from a large degree of parallelization. Quantum mechanics has an inherent parallelization at its heart. On the other hand, simulating that amount of parallelization with classical resources will be exponentially complex.
This is a VERY common misconception, and is not a good intuition for how quantum computing is different than classical.
A better intuition, in my opinion, is computing with "un-flipped" coins, except with funny complex-valued probabilities of heads vs. tails instead of real values between 0.0 and 1.0. Because the probabilities are complex-valued, they interact in ways that can seem a bit counter-intuitive (which makes sense: how many quantities do we deal with day-to-day that are described with complex numbers?). These interactions can provide some algorithmic speedup on some nicely structured problems.
It's not always obvious what these problems are, and it doesn't relate to whether the problem is highly parallel. My intuition for "quantum-friendly" is that if a problem involves some kind of periodicity, or Fourier-like frequency analysis, then perhaps you'll see a quantum speedup. But it's important to remember the coins don't somehow "try every flip possible" to find the answer you're looking for -- if they did, you'd be able to solve NP problems in constant time.
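The difference between adding real probabilities and adding complex amplitudes can be shown in a few lines. In this illustrative sketch (all numbers chosen for clarity, nothing more), two indistinguishable paths each carry amplitude 0.5; classically the detector fires half the time, but quantum mechanically the answer depends entirely on the relative phase:

```python
import cmath
import math

# Each of two indistinguishable paths contributes amplitude 0.5, so either
# path alone would fire the detector with probability |0.5|^2 = 0.25.
a1 = 0.5 + 0j

def detection_probability(relative_phase):
    """Quantum rule: add the complex amplitudes first, then square."""
    a2 = 0.5 * cmath.exp(1j * relative_phase)
    return abs(a1 + a2) ** 2

classical = abs(a1) ** 2 + abs(a1) ** 2        # real probabilities just add: 0.5
constructive = detection_probability(0.0)      # in phase: 1.0
destructive = detection_probability(math.pi)   # out of phase: 0.0
print(classical, constructive, destructive)
```

The phase degree of freedom is exactly what an un-flipped classical coin lacks, and it's what quantum algorithms arrange to exploit.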
I agree that superposition with complex probability amplitudes (fundamental quantum randomness) is different from just statistical randomness of trying every possibility. In fact, this is one of the notions being used to verify whether the D-Wave computer (as a black box) is "truly quantum".
1. I don't quite see what periodicity has to do with the speedup. Could you elaborate on that?
2. "...if they did that you'd be able to solve NP problems in constant time." Again, please explain.
In QM, the answer you calculate is actually the weighted sum over all possibilities... so the way I see it, it does cover every possible path. That answer might not be useful in telling you which is the shortest single path, but if you were an electron traversing a graph and wanted to optimize your travel, you'd do exactly what the quantum computer says and traverse all paths! The classical notion of "best path" might be trickier to deduce from a quantum computer. Maybe I'm addressing something tangential to what you're saying.
Re 1. Periodicity:
The quantum Fourier transform is an operation that reflects information about a quantum state into the phases of the amplitudes. Quantum phase information is exactly where quantum computing differs from classical probabilistic computing, so it makes sense that this technique might show up in places where quantum computing beats classical. For example, Shor's factorization of integers makes direct use of the quantum Fourier transform. I mention periodicity only as an example of the sort of problem where the Fourier transform might be useful. This is an alternative intuition for what quantum computers are "good at," to combat the notion that "quantum computing is good for parallel problems."
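The period-finding step that Shor's algorithm performs with the quantum Fourier transform can be sketched classically with an ordinary DFT (the quantum version extracts the spectrum of an exponentially long sequence efficiently, which is the whole point; this sketch is only the classical analogue). Here f(x) = 2^x mod 15 has period 4, and the spectrum is supported only on multiples of N/4:

```python
import cmath
import math

N = 8
seq = [pow(2, x, 15) for x in range(N)]  # 1, 2, 4, 8, 1, 2, 4, 8 -> period 4

def dft(s):
    """Plain O(n^2) discrete Fourier transform."""
    n = len(s)
    return [sum(s[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = [abs(c) for c in dft(seq)]
# A sequence with period r only has energy at multiples of N / r = 8 / 4 = 2,
# so the peak spacing reveals the period.
peaks = [k for k, mag in enumerate(spectrum) if mag > 1e-9]
print(peaks)  # [0, 2, 4, 6]
```

Reading off the period of a^x mod N is the hard classical step in factoring; once you have the period, the rest of Shor's algorithm is classical number theory.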
Re 2. "covering all paths":
I don't have a problem interpreting quantum superposition as inhabiting all possible states. However, classical probabilistic computing also can be interpreted this way: an un-flipped coin is both heads and tails until the flipping happens. But probabilistic computing doesn't give us faster-than-classical speedup, therefore it's not just the "existing in all states at once" that buys us the speedup: it's the unique kind of math we can do on these states because our amplitudes are unitary-complex, not positive-real.
You understand this distinction, so I perhaps shouldn't have adopted the tone I did (sorry!). The leap from "superposition involves a complex-valued combination of several states" to "tries all answers at once so is very fast" is very commonly made in popular science articles, and is the kind of misconception that I think the "quantum computers are good at parallel" intuition encourages. Just because a quantum state is a superposition doesn't mean we get to, for free, observe and evaluate all those states and pick out the one we like. We can sometimes arrange things in such a way that the quantum phases interact non-classically to leverage the structure of some problems to reveal information that isn't available to classical algorithms.
Periodicity information, via the quantum Fourier transform, is an example of one way to arrange things to extract information that would be more expensive to calculate classically.
Quantum computing is sort of like... destructively interfering all the non-solutions so the only likely measurement results are solutions.
The state of the computer is a vector of complex amplitudes (one entry per possible answer, n qubits -> 2^n possible answers). The operations of the computer correspond to unitary matrix multiplications against the state vector. An entry's probability of being the result is the squared length of its final amplitude.
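That description maps directly onto a brute-force simulator. This minimal Python sketch (pure stdlib; `kron` and `apply` are my own helpers, not any library's API) builds the 4-entry state vector for two qubits, applies a Hadamard and a CNOT as unitary matrix multiplications, and reads off outcome probabilities as squared amplitudes:

```python
import math

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate
I = [[1, 0], [0, 1]]
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]  # flips the second qubit when the first is 1

def kron(A, B):
    """Kronecker product: builds the full 2^n x 2^n unitary."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def apply(U, state):
    """Ordinary matrix-vector multiplication."""
    return [sum(U[i][j] * state[j] for j in range(len(state)))
            for i in range(len(U))]

state = [1, 0, 0, 0]                  # |00>: all amplitude on entry 0
state = apply(kron(H, I), state)      # Hadamard on the first qubit
state = apply(CNOT, state)            # entangle: a Bell state
probs = [abs(a) ** 2 for a in state]  # squared amplitudes = probabilities
print([round(p, 3) for p in probs])   # [0.5, 0.0, 0.0, 0.5]
```

The exponential cost is visible right in the data structure: n qubits need a 2^n-entry vector and 2^n x 2^n matrices, which is exactly why classical simulation blows up.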
Haven't seen the video yet, but I'll take a shot at your question. To my knowledge, the original motivation for a "quantum computer" was to simulate quantum mechanical systems (leading back to Feynman's lectures on computation). Simulating quantum (or other kinds of random) events with classical resources (conventional computers) is a hard problem in the sense that it's exponentially more complex than a classical problem.
Conversely, a quantum computer is very useful for solving such problems - think of genetic algorithms, Monte Carlo methods, etc. It is expected to do way better than classical computers there. That doesn't necessarily make the simple problems easy on a quantum computer. Asking it the sort of question you did is not a measure of its usefulness.
Since I'm not an expert on this, my answer might be a little loose, but that's the basic idea.
This was my question too. I don't understand how you can begin to make a computer out of something with - as they explained - intrinsic randomness. And I didn't quite understand how, if we can't see or ask the computer how it did something, we can be certain of its answer or know that we've built something that does what we want.
You can know the probability that the algorithm will output the correct answer. If the probability of a correct response is > 50%, all you have to do is run the algorithm multiple times to get within, say, 99.99% certainty that the most-commonly reported answer is correct.
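That amplification trick is easy to demonstrate. In this sketch, a stand-in "algorithm" (just a biased coin here, nothing quantum; the names are my own) answers correctly 70% of the time, and a majority vote over about a thousand runs is correct essentially always:

```python
import random

random.seed(42)

def noisy_algorithm(correct_answer=1, p_correct=0.7):
    """Stand-in for a probabilistic algorithm that is right 70% of the time."""
    if random.random() < p_correct:
        return correct_answer
    return 1 - correct_answer

def amplified(runs=1001):
    """Majority vote over many independent runs."""
    votes = sum(noisy_algorithm() for _ in range(runs))
    return 1 if votes > runs / 2 else 0

# One run is wrong 30% of the time; the majority of 1001 runs is wrong
# with probability below 1e-30 (a Chernoff-bound argument), so this prints 1.
print(amplified())
```

Since the error shrinks exponentially in the number of runs, even a modestly-better-than-coin-flip algorithm becomes effectively deterministic after repetition.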
> but how can we know for sure that it's intrinsic randomness?
We can't be sure (even though Luboš Motl disagrees). The universe may not be intrinsically random if superdeterminism is true. However, this isn't a popular idea in physics; in fact, Gerard 't Hooft is the only Nobel prize winner I can think of who supports it.
Local hidden variable theories (the same kind of theory Einstein was trying to find) have almost all been ruled out by experiment in the last few decades. There are still some loopholes, but, eh...
Colbeck and Renner have written a pretty powerful paper showing that, given the assumption that measurements can be chosen freely, no extension of QM can have better predictive power than the current theory, regardless of where this randomness "comes" from. Here's the paper: http://arxiv.org/abs/1005.5173
Pragmatically, we're kind of forced to assume simplicity until we can show otherwise. "Maybe we just haven't figured it out" applies to everything. Maybe we just haven't figured out how to violate energy conservation, or time travel, or escape the matrix.
Quantum randomness has a quality that makes it hard to explain in other ways. The results of measurements can't be made up on the spot, because they need to correlate with remote measurements. They also can't be stored ahead of time, because of how the correlation depends on which measurement is performed.
Not dumb at all! This is a discussion that can be a little misleading, so be wary and don't blindly trust some random guy on a forum.
In the video they relate this randomness to the fact that qubits live in a superposition of states, in contrast to bits, which can be either 1 or 0, but not both. This means that when you measure a qubit, you would still obtain 1 OR 0, but you will obtain each with a certain probability. So, how is this different from just picking a random bit and then measuring it? You could fabricate a way of picking classical bits that yields the same distribution as the qubit.
We find a difference in one of the very first experiments that highlighted the quantum nature of the microscopic world, the double-slit experiment. You have two screens, one with two small slits and another placed behind it. You send a bunch of classical particles through the slitted screen and record where they hit the screen behind it, giving you a distribution of impact positions. Then you do the same with quantum particles, and you obtain a different pattern (this is important: it is different from the classical pattern). This is called a "diffraction" pattern, and it means there was a wave-like interaction between the particles. So now you repeat the experiment with quantum particles, but send them one by one, so that no classical-like interaction between particles could exist. Amazingly, you find the same diffraction pattern. This means one particle can "interact" with itself. The way physicists explain this is by allowing quantum states to exist in superposition, which induces intrinsic randomness due to the fact that, when measured, each particle could only have travelled through one of the slits - or each qubit (after measurement) can only be 1 OR 0, but not both.
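A minimal numerical sketch of that pattern: treat each slit as contributing a complex amplitude e^(ikr) at a screen position, and compare adding amplitudes (one particle, both slits in superposition) against adding probabilities (which-slit information available). The wavelength and geometry numbers below are arbitrary illustrative choices:

```python
import cmath
import math

wavelength = 1.0     # all lengths in the same arbitrary units
slit_sep = 5.0
screen_dist = 100.0
k = 2 * math.pi / wavelength  # wavenumber

def intensity(x):
    """Relative intensity at screen position x, quantum vs. classical."""
    r1 = math.hypot(screen_dist, x - slit_sep / 2)  # path via slit 1
    r2 = math.hypot(screen_dist, x + slit_sep / 2)  # path via slit 2
    # One particle, both slits: add amplitudes, then square.
    quantum = abs(cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)) ** 2
    # Which-slit known: add the two probabilities instead.
    classical = abs(cmath.exp(1j * k * r1)) ** 2 + abs(cmath.exp(1j * k * r2)) ** 2
    return quantum, classical

for x in (0.0, 5.0, 10.0):
    q, c = intensity(x)
    print(f"x={x:5.1f}  quantum={q:.2f}  classical={c:.2f}")
# classical stays flat at 2.00; quantum oscillates between 0 and 4
```

The oscillating quantum curve is the diffraction pattern; the flat classical curve is what you get as soon as anything records which slit the particle went through.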
This difference between the classical and quantum descriptions of nature goes deeper, and makes mechanisms such as entanglement and teleportation possible. "Bell inequalities" are a great example of some other, related ways in which the two differ.
Not knowing the specific state - and being incapable of knowing - is the critical bit.
TL;DR: (edit) ignore the rest of this and just go read a lot on Bell's Theorem and the Double Slit Experiment. Both demonstrate concretely the essence of the weirdness.
(I'll leave the rest below just to keep me honest about my ignorance.)
It's a provably special state where randomness allows it to have the characteristics needed for the math to work at all.
Also disclaimer: I'm not a PhD physicist, just a BS in engineering physics. This is merely how I cope with the idea.
Think of it like this: given time, chaotic systems become unpredictable and noisy. An electron whizzing around an atom doesn't need much time before its state is utterly unpredictable at any measurable level (Heisenberg uncertainty assures this must happen eventually, and pretty fast, it turns out).
So what if we isolate that little electron from the outside world? Then even the universe becomes uncertain, and must assume ANY possible state is valid and happening at once, blurring them together based on how likely each is given the last known observation! Otherwise the next observation could be contradictory to expectations, and seg fault the universe (perhaps just locally). This way though, the universe can always say "well I thought it could have been that" and be correct, even if it only thought it really unlikely.
But until the universe interacts with the state - until decoherence sets in - only probability can hint at what state it is in. Thus it is random in the before-the-fact sense, like the probability before rolling a die, as opposed to an after-the-fact sense, like a coin that has already been flipped yet remains hidden.
And - here's the catch - the math needs to treat it as unknown. To label the parts of an unlabeled system would break the algebra! And if a distribution of states is unknown in a mathematically precise ignorance, this is indistinguishable from true randomness. Such pure conceptual states are rare in our messy universe (at least observationally), but they do exist and can be proven to exist, in the same way a one-time pad can be proven to be cryptographically secure.
Thanks for being my duck, I needed to think about something interesting today!
Perhaps the most amazing bit is that if it weren't random, then it would act differently, since the distribution of observations would vary from the expected random field after enough observations. And this has been done! Study the double slit experiment deeply, as it really is a fantastic empirical example of why interfering via observation adjusts our interpretation of the results.
This term is jargon here, like quark colors. It's not coherence like a laser's.