But one piece of software I’ve been particularly interested in, and indeed contribute to, is Rigetti’s open-source optimizing compiler called quilc [4,4’], which is a sort of GCC/Clang for quantum computers. It’s the only open-source compiler of its kind. They also have a high-performance simulator [5], which is used sort of like a debugger. (Perhaps surprisingly, both are written in Common Lisp.)
What is very useful about the compiler is that it is retargetable, and can compile even for IBM’s architectures of tomorrow. You just give it an ISA at the command line describing how many qubits the chip has, how they’re connected, and which gates they support, and quilc will happily compile and optimize your program for that architecture.
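Roughly what that looks like driven from pyQuil (a minimal sketch; the "9q-square-qvm" lattice name is just an example target, and API details vary between pyQuil versions):

    from pyquil import Program, get_qc
    from pyquil.gates import H, CNOT

    # A toy program; quilc rewrites it into the native gates and topology of the target.
    prog = Program(H(0), CNOT(0, 1))

    # "9q-square-qvm" is one example lattice; any ISA/chip description works the same way.
    qc = get_qc("9q-square-qvm")
    native = qc.compiler.quil_to_native_quil(prog)
    print(native)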
[1] Rigetti’s pyQuil: https://github.com/rigetti/pyquil
[2] IBM’s Qiskit: https://github.com/Qiskit/qiskit
[3] Google’s Cirq: https://github.com/quantumlib/Cirq
[4] quilc: https://github.com/rigetti/quilc
[4’] quilc at FOSDEM: https://youtu.be/qwfMzzKJDXI
[5] quantum virtual machine: https://github.com/rigetti/qvm
There are other compilers/transpilers out there, e.g., ScaffCC and Qubiter. Also, inside IBM's framework you have one: Qiskit Terra. You can find more in .
> What is very useful about the compiler is that it is retargetable.
Most, if not all, of them have this characteristic. Using Qiskit Terra you can definitely target Rigetti’s machines. You just need to provide the architecture specification, i.e., number of qubits and their coupling graph.
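For instance, a minimal Qiskit sketch of that (the coupling list here is a made-up three-qubit line, standing in for whatever device graph you'd actually supply):

    from qiskit import QuantumCircuit, transpile

    circ = QuantumCircuit(3)
    circ.h(0)
    circ.cx(0, 1)
    circ.cx(1, 2)

    # Hypothetical 3-qubit line standing in for a real device's coupling graph;
    # you can also pass basis_gates= to restrict output to the target's native gate set.
    out = transpile(circ, coupling_map=[[0, 1], [1, 2]])
    print(out)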
As a side note, one really annoying characteristic of Rigetti's quilc is that there is no way to turn off optimizations.
Optimizations may be completely turned off with the directive PRAGMA PRESERVE_BLOCK. Other optimizations can be selectively enabled and disabled with command-line options or pragmas. But, as with GCC, the user does not have full control over every aspect of optimization and transformation, just a set of coarse-grained controls. Otherwise, it wouldn’t be a compiler so much as a library/framework for program manipulation.
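For the curious, a minimal sketch of what that pragma looks like, written here through pyQuil (assuming the usual PRESERVE_BLOCK / END_PRESERVE_BLOCK pair; quilc passes everything between them through untouched):

    from pyquil import Program

    # Everything between the two pragmas is left exactly as written.
    prog = Program("""
    PRAGMA PRESERVE_BLOCK
    H 0
    CNOT 0 1
    PRAGMA END_PRESERVE_BLOCK
    """)
    print(prog)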
But one thing is true: quilc is intended to be fully automatic, not requiring (or even allowing) the user to list compilation “passes” or “transpilers”, which is typically what a software engineer expects when their primary concern is writing and running code. Many of the links you’ve listed are either “compilation frameworks” (like scaffcc, akin to LLVM), unitary factorizers (like qubiter, though I’m less familiar), and so on.
This turns off compilation itself. If you read the documentation you referred me to, you will notice that code inside the directive isn't guaranteed to be legal QPU code after compilation.
> Many of the links you’ve listed are either “compilation frameworks” (like scaffcc, akin to LLVM), unitary factorizers (like qubiter, though I’m less familiar), and so on.
I don't know what you mean by "compilation framework" nor where you are drawing the line on what counts as a compiler, but ScaffCC is a compiler. It is built using LLVM (so it is akin to Clang, not LLVM) and compiles Scaffold to QASM.
In new spaces that haven't been fully explored, Lisp is awesome for letting you iterate on the fundamentals of a language design before you put the work into a full language.
Microsoft Q# was open-sourced this spring, I believe.
Yes, you could scale up vacuum-tube computers to solve meaningful tasks, but they were barely more cost-effective than human "calculators" except for specific applications. I think the same analogy applies to modern-day quantum computers versus classical computers.
From the article:

> “Quantum supremacy, we don’t use [the term] at all,” said Robert Sutor, the executive in charge of IBM’s quantum computing strategy. “We don’t care about it at all.”
The article also mentions practical considerations such as "quantum advantage", fault tolerance, and other initiatives competing with IBM.
True. Another way he could have responded is: We don't care if quantum computers actually do anything useful, only that we can use the PR to get positive stock price movement.
Why are you attributing some kind of cynical motive?
Quantum computers are a kind of "because it's hard" problem. Some useful discoveries might come out of it by accident even if quantum computers never become practical.
And, uh, it's not your money, nor is it mine, so why complain? In the grand universe of wasted resources, quantum computers are probably not where I'd begin making cuts.
My point with the LHC isn't just that it has produced useful things, but that we didn't know exactly that it would do so when it all started. Planning & construction on projects that would eventually form part of the LHC began as early as the late '70s and early '80s, roughly 40 years ago. That too was based on research & engineering that started even earlier. So if we're comparing the LHC and quantum computing, the tech behind the LHC has had more time to be developed and refined. When quantum computing gets to the same level of maturity, it may have just as many positive effects. Even so, arguing about timelines is probably beside the point: different research can take longer or shorter to have a beneficial impact.
Maybe there are other things that would benefit more from the same resources, but it's also not something we can know in advance. Plenty of early promising research is chased for years with no result. Some research takes decades for its significance and practical impacts to be felt. We have to pursue a lot of things or we'll never get much of anywhere.
I'm also curious though: The argument that quantum computing isn't doing enough, fast enough, has some merit. It's not quite zero sum, but there is a limited amount of money to go around, and maybe it could be better spent. What would be your pet project to advance in place of quantum computing? Personally, I'd want greater focus on energy research. I'm not looking to criticize your choice, I'm genuinely curious.
https://arxiv.org/abs/0908.1921 looks like one such application.
I don't think they have just yet. There is a simulated QC on conventional hardware with around 50 qubits. We are close, though; it might still happen this year.
TL;DR: quantum computing is going nowhere (excluding the low probability of a breakthrough in the field).
If quantum computers happen (which is still a big if) then mostly two things will happen:
1. Cryptographers will have to switch many of their algorithms and things like TLS will get a bit slower and will need a bit more data transmitted.
2. It's interesting for science, e.g. for simulations.
For the regular dev I'm pretty sure it won't have any meaningful impact.
Also, even if quantum computers happen, they won't be on our desks any time soon. It'll probably be more like "you can rent a QC for specialized tasks in the cloud, but it will be relatively expensive, and if you don't really need one you'll use something else".
This isn't true - only methods based on Discrete Log are at risk, which includes RSA and ECC, but there are a lot of PK algos that are not at risk, like NTRU.
Here's a bunch of stuff related https://en.wikipedia.org/wiki/Post-quantum_cryptography
Based on your statement (I don't recall offhand), it might be said that it is widely believed RSA and ECC algorithms are vulnerable in a world where quantum computing is plausible. The safety of other systems would require a more rigorous proof; the absence of known vulnerability does not imply safety.
I am not an expert in this field, but the understanding I have from the last time I spent an afternoon reading about it might be summarized as:
* Known/Suspected vulnerable systems use "classically-hard" problems with vulnerability based on small key (or subkey) size.
* Most of the 'theoretically more secure' systems make the working set size large enough, and complex enough, to eclipse the scope of stable quantum computers.
* Thus, the two attack vectors to consider for 'post-quantum' cryptography are (a) quantum systems of currently unthinkable size, or (b) weaknesses in the algorithms that allow attacking a subset of the problem which does fit within a quantum system.
If my understanding/memory is incorrect updates would be very helpful.
And many systems do not reduce to this.
But if you prefer, you could imagine that I said "all modern PK crypto [that is actually in use today] is broken". Because, at the end of the day, that's all that really matters.
[As an aside, for RSA the security is provided by the hardness of factorisation, which also breaks under QC attackers - not discrete log (it's discrete log for DH, and EC discrete log for ECDH).]
Classic McEliece (https://classic.mceliece.org) is based on a cryptosystem created in 1978 and is in the 2nd round of NIST's PQ-crypto contest ( https://csrc.nist.gov/projects/post-quantum-cryptography/rou...).
The PQCrypto conference started 13 years ago: https://pqcrypto.org/conferences.html
Chrome ran large scale experiments with NewHope support 3 years ago: https://security.googleblog.com/2016/07/experimenting-with-p...
I wrote a decently cited paper summarizing all this when I worked in quantum computing research around 2005; it's on arXiv. Track it down if you’re interested. I’ve followed the field for decades, giving talks on it once in a while. So I do know a bit of the details.
The computational power of QC is very well studied. It is certain that it cannot crack certain (most) problems any faster than a classical computer. Thus algorithms not isomorphic to the few problems QC is better at are just as secure on both classical and quantum computers. There is consensus on this - there is no question about it.
>all modern PK crypto [that is actually in use today]
NTRU is an IEEE standard, and is in wolfSSL, which is on many platforms. NTRU is also used in many commercial products from chat to relays.
Other non DLP methods are also used in commercial products and standards.
But is it not the case that the original Shor algorithm was specifically about using periodicity for integer factorisation? (The wiki page for the "RSA problem" also refers to integer factorisation, not discrete log.)
> It is certain that it cannot crack certain (most) problems any faster than a classical computer.
Yes, I was already well-aware of that. My point was that the overwhelming majority of PK crypto used today depends on the hardness of problems that are exponentially accelerated by QC algorithms.
> NTRU is an IEEE standard, and is in wolfSSL, which is on many platforms.
Okay, maybe then it has some non-academic use. But the overwhelming majority of PK crypto in use today (TLS, PGP, the vast majority of E2EE systems) is not secure against quantum attackers. Can we agree on that at least?
Periodicity is discrete log.
Discrete log is defined as: given a mathematical group G, a generator g in G, and an element h in G, find the power x of g such that g^x = h. This is a logarithm over a discrete set, a finite group. It is equivalent to finding the power r of h such that h^r = 1, since given either one, the extended Euclidean algorithm gives an efficient way to get the other. This number r is called the order of h, and in general there is no known efficient classical algorithm to find it.
This can be easy or hard, depending on the group.
Shor period finding takes the integers mod N (which form a group, an abelian group, about the simplest class of groups) and an integer a, and finds the power r of a such that a^r is 1 mod N. This is exactly the discrete log problem: you have found the order of the element a.
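To make "order of an element" concrete, here is a toy classical brute-force version of the quantity the quantum Fourier transform finds exponentially faster (my own illustration, not from the thread):

    # Smallest r > 0 with a**r % N == 1: the "order finding" at the heart of Shor.
    # (Assumes gcd(a, N) == 1, otherwise no such r exists.)
    def order(a, N):
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    print(order(7, 15))  # -> 4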
Shor is a special, easy case of the more general problem - he solves it efficiently for Abelian cyclic groups, but it has been extended to many, many other groups.
In fact, researchers realizing this was exactly the general problem Shor solved is why all discrete log problems are vulnerable to QC. His method of using a quantum Fourier transform (which has a nice generalization to groups) to find period (which FFT does nicely) is directly applicable to any group. Shor is not some odd outlier algorithm; it is exactly the core (and almost the only) algorithm QC is exponentially faster at.
>But the overwhelming majority of PK crypto in use today (TLS, PGP, the vast majority of E2EE systems) is not secure against quantum attackers. Can we agree on that at least?
Yes, but that is a far cry from your original claim. We only use those systems because they are not widely broken, and like all crypto, if one gets broken, we simply replace. This is not new.
Each is easily replaced by algorithms that quantum cannot touch (unless classical can too), so much so that many places already use QC-resistant algorithms to prevent nation-states messing with them.
RSA's security is based on the presumption that factorization is hard, which is a different problem than finding the discrete log. To attack RSA you would use Shor's algorithm, and to attack ECC you would use Grover's.
See page 2 of this paper:
To attack ECC it is directly discrete log, where quantum is exponentially faster than classical.
Grover reduces unstructured search from O(N) to O(sqrt N), only a quadratic speedup; against symmetric crypto that amounts to halving the effective key length, which is countered by simply doubling key sizes, so it is not the kind of break that matters here.
Page 2 of your link does not state how QC attacks these problems; it does say ECC is discrete log, though.
Both systems break precisely because QC solve DLP in these cases exponentially faster than classical, and this is all that’s needed.
I'm not aware of any symmetric cipher that the industry in broad consensus would consider "better" than AES-256.
As to whether it will be usable within 20 years, time will tell.
 - https://www.nature.com/articles/nature23879
 - https://ai.google/research/teams/applied-science/quantum/ - see 'Near-Term Applications'
I don't really know much about the topic, but my understanding is that it could potentially be more dense than the theoretical limit of current transistor sizes.
Current sizes of quantum bits: cavities are cm to mm scale; transmons and photonics can be much smaller, but nowhere near the size of modern transistors; ions need to be trapped in something and usually the scale is not better than the other options.
There is the hope that qubits can be made in silicon and use lithographic technologies, but then at best we will reach the density of current CPUs (but such a thing is science fiction at the moment).
Science fiction has hypothesized about quantum computing that involves macroscopically-sized bits of carefully constructed matter running a number of qubits best described with scientific notation, but it's not clear how one would set up problems or read anything out of such a thing.
Wait... patent pending. There's probably an unreasonably large sum of money to be made selling that label to unwitting trend-hoppers.
I think the biggest change will be that quantum computing can really kill at machine learning, upping the number of nodes in neural networks with an exponential relationship to the number of qubits.
It might turn neural nets from a neat toy that Facebook uses to show us sort-of good ad predictions into something that really cuts into a wide range of things that only humans are capable of doing.
This is not true. Some parallelizable tasks will benefit from massive speed ups but most won't.
Isn't that a contradiction?
I think however that once the hardware part is solved, the software part will move quickly, since people have been working on the software since long before the hardware existed.
yet here we are, still using the same old hard disks. we have SSDs now, at least.
One key aspect of that is that all of the qubits are entangled. A qubit isn't in more than two states, but it can be in 2 states at once, and N qubits can be in 2^N states at once. It's not quite as simple as that: you can't really just set up any algorithm on a quantum computer the way you would on a classical one. But those problems that are amenable to quantum computation (including, notably, prime factoring) can be solved very fast.
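In symbols (a standard textbook statement, not specific to any machine discussed here), an N-qubit register is described by 2^N complex amplitudes at once:

    |\psi\rangle = \sum_{x=0}^{2^N - 1} c_x \, |x\rangle, \qquad \sum_x |c_x|^2 = 1

The catch is that a measurement only ever returns one of those 2^N basis strings, which is why algorithms have to arrange interference so the right answers come out with high probability.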
With 2^53 qubits we should be able to factor some very, very large prime numbers.
"Pretending to factor large numbers on a quantum computer (2013)"
"Of course this should not be considered a serious demonstration of Shor’s algorithm. It does, however,
illustrate the danger in “compiled” demonstrations of Shor’s algorithm. To varying degrees,
all previous factorization experiments have benefited from this artifice. While there is no objection
to having a classical compiler help design a quantum circuit (indeed, probably all quantum
computers will function in this way), it is not legitimate for a compiler to know the answer to the
problem being solved. To even call such a procedure compilation is an abuse of language."
>Instead, it came up with an answer to the "order-finding routine," the "computationally hard" part of Shor's algorithm that requires a quantum calculation to solve the problem in a reasonable amount of time.
They weren't the first team to do this, either, they just miniaturized parts of the hardware.
If I'm wrong and IBM's latest "quantum computer" can actually do the Shor algorithm on the number 15 using 4608 gate operations, I will publicly eat one of my shoes like Werner Herzog. Assuming I can get someone else to take the other side of the bet, of course.
> the chip itself didn't just spit out 5 and 3. Instead, it came up with an answer to the "order-finding routine"
O noes! The quantum computer only did the "order-finding" part of Shor's algorithm! ... But wait. Here's the Wikipedia page for Shor's algorithm:
> Shor's algorithm consists of two parts: 1. A reduction, which can be done on a classical computer, of the factoring problem to the problem of order-finding. 2. A quantum algorithm to solve the order-finding problem.
So, the article GP cited says that a quantum computer did the part of factoring the number 15 that actually uses a quantum computer. Nothing wrong with that.
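For concreteness, the "reduction" half really is trivial classical arithmetic; here's a sketch for N = 15, using a = 7, whose order mod 15 is 4 (the number the quantum order-finding step is supposed to hand back):

    from math import gcd

    # Classical post-processing in Shor's algorithm: given the order r of a mod N
    # from the quantum step, the factors drop out of two gcd computations.
    N, a, r = 15, 7, 4
    p = gcd(a ** (r // 2) - 1, N)   # gcd(48, 15) = 3
    q = gcd(a ** (r // 2) + 1, N)   # gcd(50, 15) = 5
    print(p, q)                     # 3 5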
The article's link to the actual paper is broken, which makes it harder to tell whether as you say they cheated somehow, but here's another article about factoring 15 with a quantum computer https://science.sciencemag.org/content/351/6277/1068 which so far as I can see claims to have actually done the whole of (the relevant part of) Shor's algorithm on actual QC hardware.
Can you explain this a little more? I've seen similar references to this elsewhere. If it means what I think it means, it's kind of a really big deal. But it may mean something else.
The paper I linked to says, e.g., the following:
> Subsequent multipliers can similarly be replaced with maps by considering only possible outputs of the previous multiplications. However, using such maps will become intractable, [...] Thus, controlled full modular multipliers should be implemented.
So in at least one case they are explicitly not taking a particular shortcut because it doesn't scale to factorizing larger numbers. If you say they are taking other shortcuts that don't scale by "leaving out the gates to the wrong answer", then I think you owe us an actual explanation of what shortcuts they are taking and how you know they're taking them, rather than just a link to a paper from 1996 that says how to take some shortcuts.
It's not my fault you believe in press releases without understanding what they mean.
The paper I linked to (1) doesn't cite Preskill et al and (2) explicitly claims not to be taking shortcuts that don't generalize to numbers other than 15; as well as the bit I quoted earlier, they say "for a demonstration of Shor’s algorithm in a scalable manner, special care must be taken to not oversimplify the implementation—for instance, by employing knowledge about the solution before the actual experimental application" and cite an article in Nature decrying cheaty oversimplifications of Shor's algorithm.
I don't see anything in their description of what they do that seems to me to match your talk of "deleting the gates that lead to the wrong answer".
(The Kitaev paper they cite also doesn't cite Preskill et al, unsurprisingly since it predates that, and also doesn't contain anything that looks to me like cheaty shortcut-taking.)
It is, of course, possible that that paper does take cheaty shortcuts and I've missed them. It is, of course, possible that its authors are flatly lying about what they're doing, and trying to hide the evidence by not citing important prior papers that told them how to do it. If so, perhaps you could show us where.
Otherwise, I for one will be concluding from the surfeit of bluster and absence of actual information in your comments so far that you're just assuming that every alleged "factoring of 15" is indulging in the dishonesty you think they are, and that you aren't interested in actually checking.
(You don't, indeed, owe anyone anything. It's just that if you want to be taken seriously, that's more likely if you offer something other than sneering and bluster.)
Pointing this out is apparently necessary; I don't know why it triggers people so to point out that virtually the entire field up to the present day has consisted of grandstanding quasi-frauds. And that you apparently have to understand things and read extremely carefully to notice, because whatever honest workers there may be don't see it as in their interest to point such things out as it may upset their rice bowls.
You have someone else in another thread insisting that annealing can factor giant prime numbers, which is equally bullshit. Do you expect me to patiently, precisely and (somehow) dispassionately point out every line of bullshit in every quantum computing paper published? The mere fact that the field is rife with bullshit, publishes papers and announcements that are known to be bullshit, and promises all kinds of pixie-dust bullshit on a regular basis ought to give you some slight skepticism, some Bayesian prior that the grand pronunciamentos of this clown car should be treated with a bit of caution.
I'm all in favour of pointing out bullshit. But there's a boy-who-cried-wolf problem if you just indiscriminately claim that everything is the same kind of bullshit without checking it.
Of course that doesn't mean that you're obliged to check everything. You can say "I expect this is bullshit of the usual sort but haven't checked". But if you say "this is bullshit of the usual sort" without checking and it turns out that that isn't the case (it looks to me as if the paper I linked to isn't the kind of bullshit you describe) then you take a credibility hit that makes all your bullshit-spotting much less useful than if you were more careful.
Some people laugh at the chicanery of scumbags who claim fully autonomous vehicles are right around the corner. I laugh at the frauds and mountebanks of "quantum information theory" and the muppets who believe everything they say. De Gustibus.
QC is about setting up interference patterns between the qubits; it is fundamentally unlike classical computing. For some problems, algorithms can be designed that use those interferences to compute things, like the famous Shor's algorithm for polynomial-time prime factoring.
QC speeds things up only in the cases where an asymptotically faster algorithm can be designed this way, it is not a general purpose parallelization mechanism.
And for many things, we don't actually know for sure that it speeds them up! For example: we don't actually know that there isn't a classical polynomial-time factoring algorithm we just haven't found. Here is a recent example of a quantum algorithm leading to the discovery of an equivalent classical algorithm: https://www.quantamagazine.org/teenager-finds-classical-alte...
Not really. From an information theory perspective, holding two states is the most efficient way of computing. Or at least, there is no speed-up gained from doing computations in any other base than 2.
Quantum computers can be in both states at the same time (as far as that interpretation of quantum mechanics goes). So, if you keep the qubits all in this superposition state, you can calculate multiple things at the same time.
Assuming the cost of operations on each digit is a multiple of its number of states, the most efficient base for computing with arbitrary numbers is 3, which is the closest integer to the optimal base e:
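A quick sketch of that "radix economy" argument (my own illustration): representing a number takes about log_b(N) digits of b states each, so the cost scales like b * log_b(N) = (b / ln b) * ln N, and b / ln b is minimized at b = e, with 3 the best integer:

    import math

    # Relative cost per number represented, as a function of the base b
    for b in (2, 3, 4, 5):
        print(b, round(b / math.log(b), 3))
    # 2 -> 2.885, 3 -> 2.731 (best integer), 4 -> 2.885, 5 -> 3.107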
From a physical perspective, holding two states is the most efficient way of computing. I.e. the electronics required to compute in bases other than 2 involve higher power and present difficult challenges.
Explains to 5 different levels of CS knowledge.
>> that can hold more than 2 states
--> that can hold more than 2 states AT THE SAME TIME (in weird quantum conditions)
So you're not trying 00, 01, 10, 11 one at a time; you're trying [00|01|10|11] all at the same time, as well as other states in between 0 and 1.
So, how much faster are they? A million times faster? A trillion? A googol times faster?
Back of the envelope calculation:
The best classical algorithm takes ~10^34 steps. The fastest computer on earth, the Summit supercomputer, runs at 200 petaflops (2.5M cores, ~10^6x faster than your desktop computer).
Actual quantum computers perform at 1 million operations/s per qubit
for 2048-bit RSA, Shor's algorithm needs 4096 non-noisy qubits (we are far from that) and runs in 10^7 steps
So for this particular problem:
Best classical computer (as of today): 1 billion years
An (hypothetical) quantum computer: ~10 seconds
Yeah, a trillion times faster...
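A quick sanity check of that arithmetic, plugging in only the rough figures above (so order-of-magnitude at best):

    # Rough figures quoted above
    classical_steps = 1e34
    classical_ops_per_sec = 200e15        # Summit-class, ~200 petaflops
    quantum_steps = 1e7
    quantum_ops_per_sec = 1e6             # ~1 million operations/s

    classical_years = classical_steps / classical_ops_per_sec / 3.15e7
    quantum_seconds = quantum_steps / quantum_ops_per_sec
    print(f"{classical_years:.1e} years classically")         # ~1.6e9 years
    print(f"{quantum_seconds:.0f} s on the hypothetical QC")   # ~10 seconds

If anything, "a trillion times" undersells the ratio those numbers imply.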
I've read articles that debate the existence of any quantum computers. Are we past that now, or are there people out here that would question whether these are real quantum computers?
Sounds natural and obvious, so why haven't I heard this solution to Schrodinger's Cat many times before? Why is Schrodinger's Cat still considered a problem worth discussing ?
The result you cite: they don't even consider those early NMR "computers" to be quantum in any meaningful sense. They're Avogadro computers.
you said the number 15 was never factored on a quantum computer - this is false.
You can run as many gates as you want - go ahead and run 4608 gates in qiskit right now - the measurements will be random but you can do it.
IDK where you got that equation for the number of gates but it's probably for the general case - doesn't take into account the fact that different gate sets can be used to reduce the total.
Also confused on the error correction part - the whole point of error correction is to make the coherence time independent of the number of 'gates' in your circuit - so yeah with error correction you get more gates but you also get an effectively infinite coherence time...
I guess it depends what you mean by "factored" and "quantum computer" - using the generally accepted definitions, the number 15 has never been factored on a quantum computer using the Shor algorithm.
Yes, I am talking about the general case where you don't leave out the gates for 2 and 7 being factors of 15. That's what most people mean by factoring. Stating the answer because you know it already isn't useful. LARPing by running the algo through the "right" gates also isn't useful.
And you failed to consider quantum annealing as an alternative to Shor.
Besides, the empirical data shows otherwise. It takes 12 qubits to factor 15. We're up to 53 now.
With quantum annealing, a 20-bit number has been factored with 97 qubits. Not on a real quantum computer yet, of course.
So I have no idea what you are talking about.
Erm, OK. I guess we agree that nobody has factored the number 15 on a quantum computer yet.
Maybe you should read the paper I helpfully linked you above.
I was referring to the quantum annealing example, because no 97 qubit quantum computer exists yet.
Did you read the Nature article, or just the press releases?
> Both methods require O(log2(N)) qubits in total, where N is the number to be factored. The novelty of our demonstration of quantum annealing for prime factorization is based on the reduction in quantum resources required to execute factoring and the experimental verification of the algorithmic accuracy using currently available hardware. As a proof-of-concept, we have demonstrated these methods by factoring integers using the D-Wave 2000Q quantum annealing hardware, but these methods may be used on any other quantum annealing system with a similar number of qubits, qubit degree of connectivity, and hardware parameter precision. Assuming that quantum annealing hardware systems will continue to grow both in the number of qubits and bits of precision capabilities, our methods offer a promising path toward factoring much larger numbers in the future.
And there is this too:
Are you just going to sit there and lob lame insults or do you have anything meaningful to contribute?
"RSA cryptography is based on the difficulty of factoring large integers, which is an NP-hard (and hence intractable) problem for a classical computer."
That is incorrect: there is no proof that factoring is NP-hard. Anyway, you can hardly expect me to take anything they say after this seriously.
I just... I can't.
The premise of your question is a bit misleading. Quantum computers do not run classical algorithms faster, rather they run algorithms specifically designed to take advantage of the quantum coherence. For many tasks this does not provide an advantage. For some very special (and occasionally useful) tasks, the advantage seems to be exponential. Simulating quantum mechanics (as in chemistry, material design, drug design) is one of these tasks. Some linear algebra tasks are also faster. Lastly, some researchers are hopeful that novel gradient-free optimizations algorithms exist for quantum hardware, but this is a bit more controversial.
(tl;dr: no one knows)
Any genuine interpretation of QM will agree on what a quantum computer outputs, because by definition, interpretations are ways of changing the words you drape around a calculation, not the result of the calculation itself. So quantum computers don't support or oppose any genuine interpretations. They only oppose strawmen.