Hacker News
Breaking RSA Security with a Low Noise D-Wave 2000Q Quantum Annealer (arxiv.org)
111 points by adulau 22 days ago | 51 comments



This is pretty impressive: they were able to factor a 17-bit number (103459 = 337 * 307) using 1635 physical qubits (arranged as 73 logical qubits) with a QUBO formulation. In this configuration they achieved an average time-to-solution of just under 100 ms.

That said, they did pick an 'easy' number to factor, just like most other quantum factorization results. This number can be factored with a single step of Fermat's factorization method. It would have been very interesting to see a list of semiprimes and the time required to factor each (since they claim it only takes 100 ms to factor the 17-bit one, such an experiment should be able to run in a day...).
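To illustrate (my own sketch, not from the paper): the first step of Fermat's method already cracks 103459, because the factors 307 and 337 straddle √103459 ≈ 321.7 almost symmetrically:

```python
import math

def fermat_first_step(n):
    # First step of Fermat's method: take a = ceil(sqrt(n)); if a^2 - n
    # is a perfect square b^2, then n = (a - b)(a + b) and we're done.
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    b2 = a * a - n
    b = math.isqrt(b2)
    if b * b == b2:
        return a - b, a + b
    return None  # an 'easy' number succeeds here; harder ones need more steps

print(fermat_first_step(103459))  # (307, 337)
```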

For an extreme case of this class of 'cheating', compare the largest number factored on a quantum computer (1,099,551,473,989 = 1,048,589 * 1,048,601), which only actually takes 3 qubits to calculate [1].

They also estimate that factoring a 2048-bit number would require just under a million logical qubits using QUBO, but they didn't present a physical implementation. Worse yet, this is still not a general algorithm, and it is predicated on picking an easy number to factor in the first place!

[1]https://quantumcomputing.stackexchange.com/questions/9204/th...


Glancing at the paper, I do not find anything impressive. AFAICS the algorithm they implement has asymptotically exponential complexity, even when run on a quantum computer. I cannot imagine this work having any effect on the field of asymmetric encryption research.

To me it looks like they took the factorization problem, which has many well-known (relatively) low-complexity algorithms, and implemented it by mapping it onto circuit-SAT [1] (the multiplication table corresponds to a multiplier circuit which they try to solve in reverse). Circuit-SAT is a proven NP-complete problem, and there is currently no known quantum algorithm that solves NP-complete problems in sub-exponential time.
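To make the mapping concrete, here is a toy classical sketch of the penalty-function idea (my illustration, not the paper's exact encoding; the paper's multiplication-table form adds auxiliary carry variables to get the cost down to the quadratic form QUBO requires, while this direct penalty is quartic in the bits):

```python
from itertools import product

# Toy version of the idea: write the unknown factors as bit vectors and
# penalize (N - p*q)^2. Energy 0 <=> a valid factorization.
N = 103459
BITS = 9  # both factors of this 17-bit semiprime fit in 9 bits

def decode(bits):
    return sum(b << i for i, b in enumerate(bits))

def energy(p_bits, q_bits):
    return (N - decode(p_bits) * decode(q_bits)) ** 2

# Brute-force the 2^18 assignments classically; an annealer instead
# searches this energy landscape for the ground state.
best = min(product((0, 1), repeat=2 * BITS),
           key=lambda bits: energy(bits[:BITS], bits[BITS:]))
p, q = decode(best[:BITS]), decode(best[BITS:])
print(p, q)  # 307 and 337, in some order
```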

Note that in the complexity hierarchy, NP-complete problems are "harder" than the factoring problem. These are generally bad problems to attack, where not even a quantum computer helps (and I know of no asymmetric encryption algorithm that is NP-complete to break, but maybe the quantum-resistant encryption algorithms are better in that regard).

And that's even before we start talking about the limitations of quantum-annealers and the kind of speedups they can gain over classical computers. For a discussion of that, maybe one should start reading Scott Aaronson's blog [2].

[1] https://en.wikipedia.org/wiki/Circuit_satisfiability_problem

[2] https://www.scottaaronson.com/blog/?p=2555


This.

Essentially, all the quantum annealing work D-Wave does follows the pattern of mapping a problem onto a spin glass and then finding the ground state by annealing.

The only really interesting theoretical aspect is finding good maps from hard problems to spin glasses.

This is a nice reference: https://arxiv.org/abs/1302.5843


"asymmetric encryption algorithm that is NP-complete"

No such cryptosystems are known, and in fact such a system would also have implications in complexity theory.


But circuit sat is not equivalent to solving a multiplication table like this. If you're doing it that way, you're making it way too hard.


"103459" in 100 ms is a horrible, horrible result. There are only about 66 primes to try (those up to √103459 ≈ 322). That's 66 32-bit divisions, at around 30 cycles per division; call it 50 with all the checks.

Run it at 5 GHz and that's about 10 ns per division, well under 1 µs for all of them. We are talking five orders of magnitude.
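A quick sanity check (illustrative only, and in interpreted Python rather than hand-tuned assembly):

```python
import time

def trial_divide(n):
    # Divide by 2, then odd candidates up to sqrt(n); for a 17-bit
    # number that is at most a few hundred divisions.
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return 1, n  # n is prime

t0 = time.perf_counter()
print(trial_divide(103459), f"{(time.perf_counter() - t0) * 1e6:.0f} us")
```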


I don't think that's a very important distinction; the goal is of course factoring a 2048-bit number without using exponentially more hardware.

To go from 17 bits to 2048 bits they need to use a million qubits instead of 1600-something. That's a tiny scale-up of under 1000x.

To factor a 1024-bit number classically you already need a huge amount of resources, and the point is the scaling: for a 2048-bit number you don't need "a million CPUs" in absolute terms, you need roughly a million times more resources than you needed for the 1024-bit number.

It's not like academic results are unimportant until they are actually faster than classical results.


> To go from 17 bits to 2048 bits they need to use a million qubits instead of 1600-something. That's a tiny scale-up of under 1000x.

What makes you think that the time, energy and space costs of keeping n qubits coherent and executing a circuit on them are linear in n? No one has ever done it.


I have no idea - but the whole point of the endeavor is (presumably) that it at least scales better than a classical CPU. The goal of the research was hardly to achieve what can be done on a smartphone in one second.


Yes, the number of qubits used to represent the problem scales polynomially with the bit-length of the input integers.

However, annealing here is used as an approximation of a brute-force search. If you demand a sufficiently exact (i.e. non-approximate) result, the run-time is still going to grow exponentially with problem size, so for 2048-bit numbers the D-Wave machine will maybe not finish by the time the universe ends :)


No it's not. From what I understand of quantum computing, with more qubits you would be able to factor larger numbers in the same amount of time (100 ms), whereas with your 5 GHz CPU it would take longer than the age of the universe.


Factoring is not a constant time problem for quantum computers.


OK, my bad. The time complexity of Shor's algorithm used on quantum computers is O((log n)^3), while trial division needs on the order of √n divisions. So quantum computers should be able to outperform classical computers at this task in the future.
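Back-of-the-envelope, ignoring all constants (my own sketch, with counts printed as powers of ten so the 2048-bit case doesn't overflow a float):

```python
import math

# Rough, constant-free operation counts:
# Shor ~ (log2 n)^3 = bits^3, trial division ~ sqrt(2^bits).
for bits in (17, 64, 2048):
    shor_exp = 3 * math.log10(bits)          # log10 of bits^3
    trial_exp = (bits / 2) * math.log10(2)   # log10 of 2^(bits/2)
    print(f"{bits:5}-bit n: Shor ~1e{shor_exp:.0f} ops, "
          f"trial division ~1e{trial_exp:.0f} ops")
```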


The number could have been factored by hand.


q: 103459

evaluates in 4 microseconds on a 12-year-old Intel chip running a single thread in J


>> This is pretty impressive: they were able to factor a 17-bit number (103459 = 337 * 307) using 1635 physical qubits (arranged as 73 logical qubits) with a QUBO formulation

What? I was implementing CFRAC on my Atari 800, trying to factor a 100-bit number on its 6502. I had found enough congruences and saved them to disk, but never wrote the matrix-reduction code to spit out the answer.

17 bits. Pffft

P.s. not kidding about cfrac on atari.


I think the really cool thing is to consider whether there exists a clever classical algorithm that uses some sort of annealing approach, plus hacks based on the structure of integer multiplication, to solve the multiplication table in sub-exponential time. I like the paper's approach of moving from single bits to blocks, to create subproblems.

Since 2012, I've wondered whether you could break the symmetry and hack out a little more information by creating equivalences between tables in different radices, or by exploiting statistics such as: any choice of a factor's radix unit limits the possible values of the intermediate products adjacent to it; a change in any intermediate product produces a cascade of changes along its intersecting row and column; and other relationships like that.


Well, for those looking at the comments first:

No; they do not actually factor any 1024- or 2048-bit numbers. The largest is 17 bits. Although they point out that representing the problem takes space quadratic in the input size, the data points in the paper give no reason to believe that the time to find the factors won't simply be exponential in the input.

Also, no, this is not a record for quantum computing, since this is a D-Wave quantum annealing machine, which needs to be evaluated by a different standard.

This seems more interesting from the perspective of "how can I treat prime factorization as an optimization problem?" rather than "does it look like quantum annealing machines can factor RSA in the near term?"


What is the record for prime factorization on a quantum computer? I only found some really old results from 2014, and current records must be higher.


The record for Shor's algorithm is still 21 = 7 x 3

You can instead do "big" numbers like in this article with machines that can't run Shor's algorithm, but there is no good reason to imagine these would ever be competitive with a conventional computer like your laptop or phone.

Shor's algorithm (and similar algorithms) are definitively faster, if you can make a quantum computer to run them. That's the hard part, and as you observe the result is no clear progress in almost a decade.


Quadratic in the size of the input is just the multiplication table of the radix units in some base (binary, in their case).

It's an interesting, and I think promising, approach to factoring: use annealing over such a multiplication table, trying to resolve the intermediate radix-unit products and carries. It's a good way to map the problem onto a solution landscape that can actually be explored.

It's true that the search space over those variables is still exponential in the bits of input, but then again, that's not really a problem for true quantum computers.

It definitely seems factorization is closer to being publicly broken.
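The flavor of that search can be sketched classically (a toy of my own, on a tiny semiprime; not the paper's method, and without the carry-variable structure):

```python
import random

N = 143    # toy semiprime: 11 * 13, so each factor fits in 4 bits
BITS = 4

def decode(bits):
    return sum(b << i for i, b in enumerate(bits))

def energy(state):
    # Penalty landscape whose ground state (energy 0) is a factorization.
    return (N - decode(state[:BITS]) * decode(state[BITS:])) ** 2

# Random-restart greedy descent: from a random bit vector, flip any
# single bit that lowers the energy; restart until energy 0 is reached.
random.seed(0)
while True:
    state = [random.randint(0, 1) for _ in range(2 * BITS)]
    improved = True
    while improved:
        improved = False
        for i in range(2 * BITS):
            trial = state[:]
            trial[i] ^= 1
            if energy(trial) < energy(state):
                state, improved = trial, True
    if energy(state) == 0:
        break
p, q = decode(state[:BITS]), decode(state[BITS:])
print(p, q)  # 11 and 13, in some order
```

On a tiny landscape like this, restarts find the ground state quickly; the hard question is whether anything like this scales once the landscape is exponentially large and rugged.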


This study considers seven semiprimes (without justification for why these seven) and reports an average(?) runtime for factoring each with the D-Wave. No conclusions should be drawn from so few data points. The proposed "Block Multiplication Table Method" only affects the constants and thus has no effect on the asymptotics. The embedding from logical to physical qubits appears to have an exponential gap (but again, we shouldn't draw conclusions from so few data points). If that is indeed true, then even a polynomial runtime for annealing would still result in an exponential runtime for factoring. All in all, it eludes me why the authors conclude that the obtained results are promising.


IBM and Google both claim to have 53-qubit computers, which I thought was the edge of the technology at the moment. This machine has 1635 "physical qubits". I'm guessing these are not the same thing. Can anyone comment on the difference?


D-Wave is building "Quantum Annealers" which are not universal quantum computers. They cannot run quantum algorithms such as Shor's algorithm and there is serious doubt whether they can achieve any significant speed-up over classical computers, even in principle.


D-Wave has a history. They've mostly been behaving better recently, but https://www.scottaaronson.com/blog/?p=1400 is a good summary, and it emphasizes the differences you're asking about.


A single logical qubit is composed of multiple physical qubits in order to achieve some level of error correction and fault tolerance. Scaling the number of qubits is one of the hard problems with true quantum computers.


The authors give the following factorization with their algorithm: 231037 = 499 * 363. Not only is the product not equal to 231037, but 363 is also clearly not a prime as it is divisible by 3.

There is no discussion whatsoever about this failure, except for "The low noise D-Wave 2000Q factored correctly all integers N up to L_N= 17"


Seems like it's probably just a typographical error and should be 463 * 499.


They also were not able to actually factor that one; the biggest number that produced the correct answer was 103459 = 337 * 307.

Clearly they wrote the paper using the output of their annealer...


It is clearly a typo: 231037 = 499 * 463.


Layman question:

Assuming RSA one day actually gets broken, and a quantum computer or a new algorithm can factor large numbers fast: would there be any alternatives available for public/private key encryption?

Or would the solution be to use much bigger numbers, also calculated with a quantum computer? That doesn't sound too promising.



Oh thanks.

So the answer seems: maybe.


My impression from talking to post-quantum cryptographers is that the answer is "probably" or "it would be pretty surprising, though not radically shocking, if it turns out that any public-key protocol is necessarily quantum-vulnerable".


An RSA break only breaks key distribution. If you can securely get someone an AES key, you are all set.

That is why asset managers are experimenting with quantum communications channels [1].

[1] https://www.bloomberg.com/news/articles/2019-01-14/the-super...


https://cr.yp.to/papers/pqrsa-20170419.pdf

You may need 1 TB of key storage (and similarly large signatures) for post-quantum RSA.


At what bit length is this approach faster than https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%... ?


That is apples to oranges; you're citing a lattice reduction algorithm. Maybe you meant the number field sieve https://en.wikipedia.org/wiki/General_number_field_sieve ?

In that case the answer is: never. We've tried to generously interpret earlier factoring annealing results (there is nothing new in the top-posted paper) and had to conclude that the overall method just doesn't scale well: https://arxiv.org/abs/1902.01448


It's 2020, and I'm still waiting to see an impressive proof of work out of a quantum computer that I can easily verify on my traditional computer (two different files with identical SHA-512, for example?).

Maybe a few years from now?


Finding a SHA-512 collision is not a problem that quantum computers can help with, even in theory.*

Anyway, if you want something that can be verified on a classical computer but cannot be done on one, see Google's recent quantum supremacy experiment. However, it is extremely contrived. I think the most likely non-contrived application will be simulating simple-ish quantum systems (to help with chemistry experiments and the like), but it's probably still going to be a while before we start seeing that, I think (IANA quantum scientist).

*Edit: to clarify, quantum computers can in theory speed up finding a collision, just not enough to help. Classically you need O(2^256) operations to find a SHA-512 collision (birthday paradox); the quantum BHT algorithm gets you down to O(2^170), which is better but still way too high. Most crypto assumes anything above 2^128 is secure by a comfortable margin.
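The arithmetic behind those exponents, for a generic d-bit hash (a sketch; the BHT exponent is d/3, floored here to match the figure above):

```python
# Generic collision bounds for a d-bit hash, as powers of two:
# classical birthday attack ~ 2^(d/2), quantum BHT ~ 2^(d/3).
d = 512
birthday = d // 2   # 256
bht = d // 3        # 170 (d/3 is really ~170.7)
print(f"SHA-512: classical ~2^{birthday}, quantum BHT ~2^{bht}")
```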


"Extremely contrived" is a little misleading, since they did in fact run random circuits drawn from a general set. So it's rather the opposite of contrived... but I understand what you mean :)


Contrived means "Deliberately created rather than arising naturally or spontaneously" https://www.lexico.com/en/definition/contrived . I think the problems used to demonstrate quantum supremacy very literally meet that definition - they were largely created for the sole purpose of demonstrating supremacy; they are not problems we naturally want to know the answers to.


It's been demonstrated already, using a random circuit. Two random strings that collide under SHA are, practically speaking, just a toy example built from random, useless information. There's no meaning to the numbers except that when you apply an operation to them, they produce the same output.

Now, instead of the random string, you have a random circuit, and the output is the goal. The random circuit is as useful to you as the random string. But using a quantum computer you can compute its output, while with a traditional computer that is very hard.

https://www.nature.com/articles/s41586-019-1666-5 https://www.scottaaronson.com/blog/?p=4317


The keywords to google for when looking for such things are "quantum supremacy"


"Quantum supremacy" (dumb naming aside) is, as far as I understood from the news, an attempt to find a problem to fit the solution they already have (the quantum device itself): to prove that their device can do at least something that cannot be replicated at the same speed (or at all) on "normal" computers. And it was contested even in that limited scope. The OP asks for the next step, doing some useful computation: in the reverse order, start with the problem (preferably a "real" problem) and compute a solution to it.


Maybe the proof exists in a different reality? :)


I see a slight irony in how I perceive crypto conversations as very cryptic. It's pleasant to wander into a thread like this where you can't really tell up from down (as a relative layperson).


oh boy


[flagged]


This comment isn't relevant to the article.


Can't reply to OP, but the RSA company setting Dual_EC_DRBG as a "default" is not related to RSA the algorithm.


Perhaps, but now I know what the ‘Streisand effect’ is...



