Quantum Computing Explained (clerro.com)
232 points by animeshk 10 days ago | 48 comments





We are currently living in an exciting time for quantum computing. Leading companies like Google and IBM have 20-qubit devices, and IBM has a 50-qubit prototype [0].

Google plans to demonstrate quantum supremacy in the next couple of months: a quantum computer performing a task that cannot feasibly be simulated on a classical computer [1]. These near-term (5-10 year) quantum computers will likely be used for simulating quantum chemistry and solving optimization problems [2].

[0]: https://www.technologyreview.com/s/609451/ibm-raises-the-bar...

[1]: https://www.technologyreview.com/s/609035/google-reveals-blu...

[2]: https://www.nature.com/news/commercialize-quantum-technologi...


The number of physical qubits is not a meaningful measure of computational power, especially since fault-tolerant computation has not been demonstrated. (In the limit of large errors, a physical qubit has zero computational power.)

D-Wave has a 2000-qubit machine.


Yes. The researchers at IBM don't use the number of qubits as a measure of the quality of the computer; they use the quantum volume [0]. I said 50 qubits because I was just parroting the press release. The reason I didn't mention D-Wave's machine is that it is not a universal quantum computer; it is a quantum annealer [1].

Google and IBM want their qubits to be of high quality (high coherence times, etc.). One of the big obstacles right now is scaling up the number of qubits while keeping their quality high.

[0]: https://www.research.ibm.com/ibm-q/resources/quantum-volume....

[1]: https://en.wikipedia.org/wiki/Quantum_annealing


Here is another introduction, geared to a programmer audience, that uses an actual quantum computing Python API to explain the fundamentals of quantum computation [0].

[0] http://pyquil.readthedocs.io/en/latest/intro_to_qc.html


Very well done. Short, yet covers all the necessary details.

Shameless plug, check out my book https://www.amazon.com/dp/0992001021/noBSLA for an in-depth view of the linear algebra background necessary for quantum computing.

If you know linear algebra well, then quantum mechanics and quantum computing are nothing fancy: just an area of application (see Chapter 9 on QM). Here is an excerpt: https://minireference.com/static/excerpts/noBSguide2LA_previ...


I'm definitely going to check your book out, but in the meantime I was wondering if you could help me sort something out from this article. The author states:

Check out these quantum states as examples; they cannot be broken into a tensor product of two other states, i.e. they are inseparable:

Ex. 1: 1/sqrt(2) |00⟩ + 1/sqrt(2) |11⟩ ≠ |ψ1⟩ ⊗ |ψ2⟩

Ex. 2: 1/sqrt(2) |01⟩ − 1/sqrt(2) |10⟩ ≠ |ψ1⟩ ⊗ |ψ2⟩

So, all the previous two-qubit examples he showed yielded 4 states (00, 01, 10, 11). Is the author saying that some two-qubit systems can be prepared such that not all 4 possible discrete states participate in the superposition? (i.e., in the system of Ex. 1, states 10 and 01 are not possible, and in Ex. 2, states 00 and 11 are not possible?)


The four states |00⟩, |01⟩, |10⟩, |11⟩ form a basis, so any two-qubit state can be written as a linear combination of these vectors: |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩. If certain coefficients in the linear combination are zero (e.g., b and c for the 00+11 state, or a and d for the second one), this doesn't by itself have any special meaning.

The classification of states as separable vs entangled refers to the existence of a local description for the two qubits. Remember that |00⟩ is shorthand for |0⟩⊗|0⟩, meaning the state of the two-qubit system when qubit 1 is in state |0⟩ and qubit 2 is in state |0⟩.

Separable states can be written in the form (α|0⟩+β|1⟩) ⊗ (γ|0⟩+δ|1⟩) = |ψ1⟩⊗|ψ2⟩. Note there is a clear local description for the first qubit, |ψ1⟩, and a separate local description for the second, |ψ2⟩. If Alice prepares her qubit 1 in the state |ψ1⟩ and Bob prepares his qubit 2 in the state |ψ2⟩, then the combined description of their two qubits is what's shown above. The state of the combined system is describable as the tensor product of two separate local descriptions.

Entangled states, on the contrary, are states that cannot be described as the tensor product of two local descriptions. Specifically, there exist coefficients a, b, c, d for a two-qubit quantum system such that

      a|00⟩ +b|01⟩ +c|10⟩ + d|11⟩   ≠   (α|0⟩+β|1⟩) ⊗ (γ|0⟩+δ|1⟩)
no matter what choice of α, β, γ, δ you make. The phenomenon of entanglement is a quantum-only thing that can't really be understood via classical analogies, since classical systems can always be described as the combination of two local descriptions. The examples given are two of the four Bell states (see https://en.wikipedia.org/wiki/Bell_state), but there are many more entangled states. Here is a concrete physics example: https://en.wikipedia.org/wiki/Singlet_state#Singlets_and_Ent...
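To make the separability test concrete: a two-qubit state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ is separable exactly when the 2x2 matrix of amplitudes [[a, b], [c, d]] has rank 1, i.e. when ad - bc = 0. A quick sketch (the function name is mine, not from the article):

```python
import math

# Two-qubit state a|00> + b|01> + c|10> + d|11>, amplitudes arranged as the
# 2x2 matrix [[a, b], [c, d]]. The state is separable iff this matrix has
# rank 1, which for a 2x2 matrix means its determinant ad - bc vanishes.
def is_separable(a, b, c, d, tol=1e-12):
    return abs(a * d - b * c) < tol

s = 1 / math.sqrt(2)
print(is_separable(s, 0, 0, s))   # Bell state (|00> + |11>)/sqrt(2): False
print(is_separable(0, s, -s, 0))  # singlet (|01> - |10>)/sqrt(2):    False
print(is_separable(s, s, 0, 0))   # |0> tensor (|0> + |1>)/sqrt(2):   True
```

The first two are the article's entangled examples; the third has two zero coefficients yet is a product state, which shows that zero coefficients by themselves say nothing about entanglement.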

Interestingly, many quantum computing experiments involve the manipulation of entangled states, because entanglement serves as proof that something genuinely quantum is going on...


Holy smokes - that is an amazing response! I can't claim that I'm fully grasping the mathematics of entangled vs separable states at this moment, but what you've written seems very clear and I think if I go through it a few more times with the links, I may finally get it after all these years of bewilderment. Thank you!

Hey, love your book. I don't remember how long ago I purchased it, but it was back in the days when it wasn't finished and we were getting updates pushed via email.

Keep up the good work!


This is an excellent introduction to quantum computing. I didn't see where I could sign up for updates. I'd like to follow this!

So take what I'm about to say with a grain of salt, but I believe that the current generation of quantum computers (digital quantum, or DQ) is fundamentally flawed. Basically all architectures I've seen still use bits (qubits are still bits, and rely on entanglement) as opposed to what I consider the superior approach: signals. The class of computers I'm talking about is called continuous-variable quantum computing (https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...). Unlike DQ, it doesn't use entanglement.

They are similar to old-school analog electric computers. They have some interesting properties, and I can actually imagine programming one, unlike DQ.

There's a fourth class, continuous analog with entanglement, which is superior to both DQ and continuous-variable quantum computers, but right now we should really be looking into the continuous-variable ones.


There is a serious misconception in your claim. Yes, analog computers, whether quantum or classical, would in principle solve even NP-complete problems in polynomial time. No, they cannot be constructed in the real world, because analog computing does not permit error correction, and in the real world you have to deal with noise. Only very small analog computers (nothing scalable, nothing solving general problems) can be constructed before noise becomes an issue.

Three good references:

Book on quantum and classical computing: Aaronson's "Quantum Computing since Democritus" for gentle-for-newbies but rigorous discussion

Very old paper on classical computing (analog vs digital): Von Neumann's "Probabilistic logics and the synthesis of reliable organisms from unreliable components" (pretty advanced)

Newish (old for the field) paper on quantum computing: Calderbank and Shor's "Good Quantum Error-Correcting Codes Exist"

Edit and addition: I work at Yale's Quantum Institute and we are some of the biggest proponents of "continuous variable" quantum computing. We use the continuous variables to encode a discrete "qudit" (with a "d") representation for the information, for all the reasons mentioned above (noise and error correction).


I love Aaronson's book, but it's not "gentle-for-newbies". Not because it isn't gentle! It just explicitly skips over a lot of the material. You're expected to already know about quantum computing, since he doesn't feel he can add to the existing authors on the subject. Instead, it's really a survey of quantum complexity theory, with some sampling of background material where the author felt he had something new to say.

What's a good book for quantum computing? When I took Aaronson's class last semester I got wrecked by everything, and the material from the many different resources felt so conflicting.

I learned from an online course some years ago. It was actually quite good. (This should also give you the correct impression that I am not an expert here, and you should take my suggestions with a grain of salt.) I think it was this one: https://www.edx.org/course/quantum-mechanics-quantum-computa... (taught by Aaronson's advisor).

To add to the answer already given, keep in mind that it depends on what you want to learn.

Theory from the point of view of physics (as in, less focus on algorithms, but some focus on building hardware): see Preskill's lecture notes or Nielsen and Chuang's book.

I would suggest quickly skimming them once before trying to read them cover to cover.

And some lighter online introductions might help as a first step.

You will definitely need a very good knowledge of college-level linear algebra (linear operators, bases, diagonalization, eigenvalues).


Unfortunately I have nothing yet to add to your discussion with the parent, but I was wondering if the two of you might take a peek at the following paper and let me know if and where it fits into the picture:

[1] Jose Luis Rosales, Vicente Martin, "Quantum Simulation of the Factorization Problem", 9 Nov 2016. (https://arxiv.org/abs/1601.04896)


Disclaimer: I only skimmed the article.

This is pretty cool, but I think the main point of the article is to show an amazing link between number theory and quantum mechanics, not to suggest a practical quantum algorithm. Although, for certain hardware implementations, what they are suggesting might be easier than Shor's algorithm, for purely technical reasons.

Either way, this is not "continuous variable" or "analog" in the sense discussed above. If you are interested in the distinction, you can also check out adiabatic quantum computing, which also seems continuous at first sight, but is equivalent to the typical model of quantum circuits (the trick being the existence of a "gap" between the ground state, which we are adiabatically evolving, and all the other states of the system).

I threw out a bunch of terminology without defining it, but I'll have to leave googling those terms to you. I'd be happy to attempt to answer questions, though.


They aren't misconceptions; I'm questioning some currently held assumptions. You are still talking about electric systems, right? Why are you assuming I'm unaware of your claims? I'm aware of, e.g., the error correction problems, but I would definitely not say it's impossible.

Photonic systems have plenty of error correcting schemes.


It does not matter whether it is electric or photonic (and many systems are in between). Scalable error correction (i.e., the error approaches zero in the limit of a large system) is possible only with digital systems, whether classical or quantum.

There are things we can do to suppress errors and make a slightly bigger analog computer (classical or quantum), but there is no way to make an analog system scalable (i.e., not just lower errors to some floor, but completely eliminate them).

The book cited in my previous comment explains the details in a fairly understandable fashion, but it takes up a whole chapter. The gist of it is: you need some minimal "logic distance" between your data "levels" in order to be able to distinguish them and correct deviations. This requires digitization.
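A toy classical illustration of that "logic distance" point, assuming nothing about quantum codes: a 3-bit repetition code digitizes the signal into two well-separated codewords, so a single bit flip can be voted away. (The code and numbers below are illustrative, not from the book.)

```python
import random

# 3-bit repetition code: logical 0 -> 000, logical 1 -> 111. The two
# codewords differ in all three bits, so any single flip is corrected
# by majority vote. An analog value has no such separation to exploit.
def encode(bit):
    return [bit] * 3

def add_noise(codeword, p):
    # flip each bit independently with probability p
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    return int(sum(codeword) >= 2)

random.seed(0)
p = 0.1          # physical error rate per bit
trials = 100_000
failures = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
logical = failures / trials
print(logical)   # close to the analytic 3*p**2 - 2*p**3 = 0.028, well below p
```

The logical error rate drops from 10% to roughly 2.8%, and adding more bits drives it toward zero; that suppression is exactly what a continuum of levels cannot offer.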

If somebody finds a way to error-correct analog representations, they will have a way to solve NP-complete problems, for instance. The Nobel Prize will be the least of their recognitions.


Have you ever heard the expression: if an expert tells you something can be done, they're probably right; if they tell you it can't be done, it's not quite so certain.

I'm not convinced that our understanding is there yet to say whether it can or can't be done.


Certainly, this is a good point. However, would you use that expression if what I had said was "you cannot make a perpetual motion machine"? Or if I had said "NP probably does not equal P"?

A scalable analog computer goes against some "first principles", not mere technicalities. If you want to claim that it is feasible to build it, you need a way to address those first principles.

For instance, the existence of scalable analog computers implies that we can solve NP-complete problems easily. This is a claim as crazy as "perpetual motion machines". It would be great if either of those claims actually became feasible, but there is a gigantic wall of "first principles" that has to be addressed - mere optimism is not enough.

This is what I want to stress: do not trust people when they rely on technicalities to shoot down your argument, but if they are pointing out fundamental laws of nature as impediments, maybe it would be interesting for everybody if we tried to learn and discuss those fundamental laws.


Re: P != NP, I would say that question assumes a certain architecture, so while it might hold for Turing machines, the real question is: is a HyperTuring/Turing machine really all there is? I assume you are familiar with real computation? https://en.wikipedia.org/wiki/Real_computation I did note the "if real computation were physically realizable" clause; however, I'm not going to be placing bets.

> A scalable analog computer goes against some "first principles", not mere technicalities. If you want to claim that it is feasible to build it, you need a way to address those first principles.

I find that your argument relies on classical logical reasoning as opposed to constructive (intuitionistic) logical reasoning. There is quite a lot of excluded middle in all of these questions that you aren't accounting for. "First principles" are nice and all, but my fundamental problem right now is that I'm having a hard time verifying the "first principles" for myself.

> but if they are pointing out fundamental laws of nature as impediments, maybe it would be interesting for everybody if we try to learn and discuss those fundamental laws.

You are assuming that I'm not aware of those principles. I'm not saying I have solutions; I just don't feel like anyone's given it a proper shake quite yet.


>Re: P != NP, I would say that question assumes a certain architecture, so while it might hold for Turing machines, the real question is: is a HyperTuring/Turing machine really all there is?

No, P and NP are defined by an architecture, there's nothing to assume about it. E.g., P is defined as "problems solvable in polynomial time on a deterministic Turing machine".


I think that you misunderstood what I meant by "assumes". If I replaced it with "implies", would that clear up the confusion?

If your only response to a reasoned argument is a dubious aphorism, you’ve already lost the discussion. You are literally just arguing from disbelief.

I don’t agree that certain things are necessarily impossible. I don’t have a proof yet. Wait ten years.

Why do you think using two-level systems is fundamentally flawed? Even those are analog devices spanning a finite-dimensional space, but they're nonetheless dense vector spaces over C^(2^n). You still get theoretically infinitely parameterizable operations, though we know some discrete subset of those is sufficient for universal computation.

If one were to work with an infinite-dimensional system, you'd still be employing finite truncations of infinite-dimensional operators, leading you back, more or less, to a finite-dimensional subspace.

Digitization, or rather discretization, is important, and is the reason computers have managed to be so successful. And programming a system like a universal gate-based quantum computer can be done now, with languages like Quil and libraries like pyQuil [0].

[0] http://pyquil.readthedocs.io/en/latest/
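As a concrete illustration of gate-based computation over C^(2^n), here is a minimal state-vector sketch in plain numpy (deliberately not using the pyQuil API, so as not to assume its exact interface): H on the first qubit followed by CNOT produces an entangled Bell state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

state = np.array([1.0, 0.0, 0.0, 0.0])         # |00>
state = np.kron(H, I) @ state                  # put qubit 0 in superposition
state = CNOT @ state                           # entangle the pair
print(state)  # amplitudes 1/sqrt(2) on |00> and |11>: the Bell state
```

Note the state vector has 2^n = 4 entries for n = 2 qubits; the exponential growth of that vector is exactly why classical simulation becomes infeasible around 50 qubits.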


Because you lose integration and differentiation in hardware. You can also model signals "natively".

You actually get those benefits, differentiation and integration, in the usual classical analog circuits, but, except for a few kinds of analog tricks, they’ve not outperformed their digital counterparts in precision, accuracy, or speed. You can make for neat demos, like wiring up integrators to solve the Lorenz attractor equations to make a nice oscilloscope plot, but the circuits fall short practically for anything more difficult.

I'm well aware. Those you are talking about are electric, though, which brings a whole class of issues. I think photonic might work very well.

Terry Rudolph has an excellent paper arguing for continuous-variable quantum computers [1]. CV still uses entanglement. You can think of CV states as infinite-dimensional qubits, so in a way it is still "digital". Something about quantum measurement is fundamentally digital.

[1] https://arxiv.org/abs/1607.08535


IBM lets you poke around one of their quantum machines here for free:

https://quantumexperience.ng.bluemix.net/qx/experience

They also include a brief tutorial on how to program for it too.


Yes! I've been playing around with this while writing the guidebook, and I highly recommend all readers try it out too, once you've gotten familiar with quantum gates. I'll very likely be adding quantum circuits and their results (from IBM) in an upcoming guidebook discussing some really cool quantum algorithms.

AFAIK this is a simulation, not real quantum machines.

They have both a simulator and real machines you can use. The "ibmqx5" is a 16-qubit device. The problem is that noise rates are very high: for example, just to swap two adjacent qubits (using three CNOT gates), you will incur at least 10% error. It is hard (though not impossible) to run meaningful experiments on such a noisy system.
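The "at least 10%" figure is just compounded per-gate error. Assuming roughly 3.5% error per CNOT (an illustrative number, not IBM's published calibration):

```python
# A SWAP is three CNOTs. If each CNOT succeeds with probability 1 - p,
# the whole swap succeeds with probability (1 - p)**3.
p_cnot = 0.035                       # assumed per-CNOT error rate
p_swap = 1 - (1 - p_cnot) ** 3
print(f"{p_swap:.1%}")               # 10.1%
```

This compounding is why circuit depth, not just qubit count, limits what these devices can run.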

https://quantumexperience.ng.bluemix.net/qx/editor


Oh, thanks. You are correct that it is very noisy. Just a single CNOT is giving me 15% error.

Here are the numbers for their latest 20-qubit device. CNOT error rates are a bit lower. I don't know if they plan on making this publicly accessible.

https://youtu.be/T-8uuq7Izl8?t=26m36s


Maybe a stupid question, but does anyone here know how quantum computing would affect Bitcoin/crypto in general?

Unless you can find a quantum algorithm that reverses a hash efficiently, I wouldn't hold my breath.

Would a quantum computer be capable of brute forcing private keys?

Quantum computing is way beyond me but I found this video by Kurzgesagt really interesting https://youtu.be/JhHMJCUmq28

Will quantum computing kill Bitcoin by rendering the underlying cryptography obsolete?

I understand there are two ways Bitcoin is affected by a cryptographically relevant quantum computer:

1) A QC is able to derive the private keys for a wallet's public address, allowing for the theft of bitcoin.

2) A QC is able to perform the proof-of-work algorithm to mine new blocks at an order-of-magnitude faster rate than is currently possible.

Fortunately for 1), (I think) it currently takes 2^512 (?) operations to break the private/public algorithm, which is unfeasible to brute-force on normal hardware; a QC brings it down to 2^128, but that's still on the order of unfeasible, and in the event it ever does happen, the blockchain could be changed overnight to use a new keying algorithm. As for 2), it would push the blockchain difficulty so high that people with QC machines would see the same ROI as today's industrial GPU and ASIC miners - plus, given that QC machines are horrendously expensive (think: billions of USD for a 50-qubit general-purpose QC), why would you try to break Bitcoin when you'd already be a billionaire?
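For a sense of why 2^128 operations is still "on the order of unfeasible", a back-of-the-envelope estimate (the throughput figure is a generous assumption, not a real benchmark):

```python
# Even a machine doing 10^12 operations per second would need far longer
# than the age of the universe to perform 2^128 operations.
ops = 2 ** 128
rate = 10 ** 12                       # operations per second, hypothetical
seconds_per_year = 3600 * 24 * 365
years = ops / rate / seconds_per_year
print(f"{years:.2e}")                 # about 1.08e+19 years
```

(Shor's algorithm sidesteps this entirely for ECDSA, which is the point of the reply below about polynomial time; the 2^128 figure only bounds brute-force search.)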


> Fortunately for 1) (I think) it currently takes 2^512 (?) operations to break the private/public algorithm which is unfeasible to brute-force on normal hardware but a QC brings it down to 2^128

Where did you get that info? Quantum computers can break ECDSA in time polynomial in the key size.

> the blockchain could be changed overnight to use a new keying algorithm.

How?


> Will quantum computing kill Bitcoin by rendering the underlying cryptography obsolete?

No; even if the current underlying crypto falls (for whatever reason), the ledger up to that point could very likely endure in a new form.

Outside of the blockchain concept itself, the oft-ignored indirect gift of Bitcoin's longevity is that it provides a new digital mechanism for the international distribution of wealth: as more people join, the ledger proceeds, and mined coins are exchanged more widely.

Full disclosure: no horses in this race yet.


If it comes down to that, bitcoin will probably adopt some post-quantum signature scheme for new wallets.

Probably.


Whatever you do, don't invest in quantum computing.


