That said, they did pick an 'easy' number to factor, just like most other quantum factorization results. This number could easily be factored using Fermat's factorization method (only the first step is required). It would have been very interesting to see a list of all the numbers they tried and the time required to factor each (since they claim it only takes 100ms to factor the 17-bit one, such an experiment should be able to run in a day...).
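For anyone who hasn't seen it, a minimal Python sketch of Fermat's method (the paper's actual test numbers aren't reproduced here, so the semiprime below is made up, with close factors, which is exactly the case where the first step already succeeds):

    import math

    def fermat_factor(n, max_steps=10**6):
        # Fermat: look for n = a^2 - b^2 = (a - b)(a + b), starting at a = ceil(sqrt(n))
        a = math.isqrt(n)
        if a * a < n:
            a += 1
        for step in range(1, max_steps + 1):
            b2 = a * a - n
            b = math.isqrt(b2)
            if b * b == b2:
                return a - b, a + b, step   # factors, plus how many steps it took
            a += 1
        return None

    # Made-up semiprime whose factors are close together: the first step suffices.
    print(fermat_factor(101 * 103))   # -> (101, 103, 1)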
For an extreme case of this class of 'cheating', compare it to the largest number factored on a quantum computer (1,099,551,473,989 = 1,048,589 * 1,048,601), which only actually takes 3 qubits to calculate.
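As a sanity check of why so few qubits suffice (this is my own observation, not a quote from that result): the two factors differ in only two bit positions, which is the kind of structure that collapses the problem to a handful of unknowns:

    p, q = 1_048_589, 1_048_601
    assert p * q == 1_099_551_473_989
    print(bin(p))      # 0b100000000000000001101
    print(bin(q))      # 0b100000000000000011001
    print(bin(p ^ q))  # 0b10100 -> only two differing bits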
They also estimate that factoring a 2048-bit number would require just under a million logical qubits using QUBO, but they didn't present a physical implementation. Worse yet, this is still not a general algorithm and is predicated on picking an easy number to factor in the first place!
To me it looks like they took the factorization problem, which has a lot of well-known (relatively) low-complexity algorithms, and implemented it by mapping it onto circuit-SAT (the multiplication table corresponds to a multiplier circuit which they try to solve in reverse). Circuit-SAT is a proven NP-complete problem. There is currently no known quantum algorithm that solves NP-complete problems in sub-exponential time.
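Roughly, for anyone who hasn't seen that reduction: write the factors in binary and go column by column through the long-multiplication table. In loose notation (my paraphrase, not the paper's exact formulation), each column k of N = p*q gives a constraint

    \sum_{i+j=k} p_i q_j \;+\; \sum_{m \ge 1} c_{k-m,\,k}
      \;=\; N_k \;+\; \sum_{m \ge 1} 2^{m}\, c_{k,\,k+m}

where p_i, q_j, N_k are the bits and c_{a,b} is the carry generated in column a and consumed in column b. Every bilinear term p_i q_j is an AND gate, which is exactly why this is a multiplier circuit being run in reverse.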
Note that in the complexity hierarchy NP-complete problems are "harder" than the factoring problem. These are generally bad problems to attack, where not even a quantum computer helps (and I know of no asymmetric encryption algorithm that is NP-complete when trying to break it, but maybe the quantum-resistant encryption algorithms are better in that regard).
And that's even before we start talking about the limitations of quantum annealers and the kind of speedups they can gain over classical computers. For a discussion of that, maybe one should start reading Scott Aaronson's blog.
Essentially, all the quantum annealing stuff that D-Wave does follows the pattern of mapping a problem onto a spin glass and then finding the ground state with annealing.
The only really interesting theoretical aspect is finding good maps from hard problems to spin glasses.
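The mechanical part of the map is standard: with a QUBO in hand, substituting x_i = (1 - s_i)/2 for spins s_i in {-1, +1} turns any quadratic 0/1 objective into an Ising (spin-glass) Hamiltonian,

    \sum_{i<j} Q_{ij} x_i x_j + \sum_i Q_{ii} x_i
      \;\longrightarrow\;
    \sum_{i<j} J_{ij} s_i s_j + \sum_i h_i s_i + \mathrm{const},
    \qquad x_i = \tfrac{1 - s_i}{2}

The hard part is constructing a QUBO whose ground state actually encodes the answer without blowing up the number of variables or the precision required of the couplings.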
This is a nice reference:
No such cryptosystems are known, and in fact such a system would also have implications in complexity theory.
Run it at 5 GHz and that's 10 ns or so per division; less than 1 µs for 64 of them. We are talking 5 orders of magnitude.
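Even unoptimized Python (nowhere near the 10 ns/division figure) makes the point. The 17-bit semiprime below is made up, since the paper's exact target isn't quoted here:

    import math, time

    def trial_division(n):
        # At most ~360 candidate divisors up to sqrt(n) for a 17-bit n.
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d, n // d
        return n, 1

    n = 113 * 911   # a made-up 17-bit semiprime
    t0 = time.perf_counter()
    factors = trial_division(n)
    dt_us = (time.perf_counter() - t0) * 1e6
    print(factors, f"{dt_us:.1f} microseconds")   # tens of microseconds, vs 100 ms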
To go from the 17-bit to the 2048-bit number they need to use a million qubits instead of 1600-something. That's a tiny scale-up of under 1000x.
To factor a 1024-bit number you already need a huge amount of resources, and the point isn't that you need "a million CPUs" for 2048-bit numbers; it's that you need roughly a million times more resources for a 2048-bit number than you needed for a 1024-bit one.
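The relevant classical scaling here is the (heuristic) general number field sieve cost,

    L_N\!\left[\tfrac{1}{3}\right]
      = \exp\!\Big( \big(c + o(1)\big)\,(\ln N)^{1/3} (\ln\ln N)^{2/3} \Big),
    \qquad c = \left(\tfrac{64}{9}\right)^{1/3} \approx 1.92

The exact ratio between 1024-bit and 2048-bit N depends on the o(1) term and the constants, but it works out to many orders of magnitude rather than a constant factor.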
It's not like academic results are unimportant until they are actually faster than classical results.
What makes you think that the time, energy and space costs of keeping n qubits coherent and executing a circuit on them are linear in n? No one has ever done it.
However, annealing here is used as an approximation of a brute-force search. If you demand a sufficiently exact (i.e. non-approximate) result, runtime is still going to grow exponentially with problem size, so for 2048-bit numbers the D-Wave machine will maybe not finish by the time the universe ends :)
It evaluates in 4 microseconds on a 12-year-old Intel chip running a single thread in J.
What? I was implementing CFRAC on my Atari 800, trying to factor a 100-bit number on its 6502. I had found enough congruences, saved to disk, but never wrote the matrix reduction code to spit out the answer.
17 bits. Pffft
P.s. not kidding about cfrac on atari.
Since 2012, I've wondered whether you could break the symmetry and squeeze out a little more information by, for example, creating equivalences between tables in different radices, or exploiting statistics so that any choice of a factor's radix unit limits the possible values of the intermediate products adjacent to it, or using the fact that a change in any of the intermediate products produces a cascade of changes along its intersecting row and column, and other things like that.
No; they do not actually factor any 1024- or 2048-bit numbers. The largest is 17 bits. Although they point out that representing the problem can be done in quadratic size of the input, the data points in the paper don't give any reason to believe that the time to find the factors won't just be exponential in the input.
Also, no, this is not a record for quantum computing: this is a D-Wave quantum annealing machine, which needs to be evaluated by a different standard.
This seems to be more interesting if you look at it from the perspective "How can I treat prime factorization as an optimization problem?" rather than "Does it look like quantum annealing machines can factor RSA in the near term?"
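In that spirit, here is a toy classical take on the framing (my own sketch, not the paper's QUBO): minimize E = (N - p*q)^2 over the bits of two candidate odd factors with simulated annealing. It scales terribly, but it shows concretely what "factoring as energy minimization" means:

    import math, random

    def decode(state, bits):
        # state holds the free bits of p and q; bit 0 of each is fixed to 1 (odd factors)
        p = 1 + sum(b << (i + 1) for i, b in enumerate(state[:bits - 1]))
        q = 1 + sum(b << (i + 1) for i, b in enumerate(state[bits - 1:]))
        return p, q

    def anneal_factor(N, bits, steps=100_000, seed=0):
        random.seed(seed)
        state = [random.randint(0, 1) for _ in range(2 * (bits - 1))]
        energy = lambda s: (N - decode(s, bits)[0] * decode(s, bits)[1]) ** 2
        E, T0 = energy(state), float(N * N)
        for t in range(steps):
            T = T0 * (1.0 - t / steps) + 1e-9        # linear cooling schedule
            i = random.randrange(len(state))
            state[i] ^= 1                            # propose a single bit flip
            E_new = energy(state)
            if E_new <= E or random.random() < math.exp((E - E_new) / T):
                E = E_new                            # accept the move
            else:
                state[i] ^= 1                        # reject: undo the flip
            if E == 0:
                return decode(state, bits)
        return None

    # Annealing is randomized and can get stuck, so try a few seeds.
    for seed in range(10):
        result = anneal_factor(59 * 61, bits=6, seed=seed)
        if result:
            print(result)   # e.g. (59, 61)
            break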
You can instead do "big" numbers like in this article with machines that can't run Shor's algorithm, but there is no good reason to imagine these would ever be competitive with a conventional computer like your laptop or phone.
Shor's algorithm (and similar algorithms) is definitively faster, if you can build a quantum computer to run it. That's the hard part, and as you observe there has been no clear progress in almost a decade.
An interesting and, I think, promising approach to factoring is to use annealing over such a multiplication table, trying to resolve the intermediate radix-unit products and carries. It's a good way to map the problem onto a solution landscape that can actually be explored.
It's true that the number of variables they end up with is still exponential in the number of input bits, but then again, that's not really a problem for true quantum computers.
It definitely seems factorization is closer to being publicly broken.
There is no discussion whatsoever about this failure, except for "The low noise D-Wave 2000Q factored correctly all integers N up to L_N= 17".
Clearly they wrote the paper using the output of their annealer...
Assuming one day RSA actually gets broken and a quantum computer or a new algorithm can factor products of large primes fast: would there be any alternatives available for public/private key encryption?
Or would the solution be to just use much bigger primes, also generated with a quantum computer? That doesn't sound too promising.
So the answer seems: maybe.
That is why asset managers are experimenting with quantum communication channels.
You may need to have 1 TB of key storage (and signatures) for post-quantum RSA.
In that case the answer is: never. We've tried to generously interpret earlier factoring annealing results (there is nothing new in the top-posted paper) and had to conclude that the overall method just doesn't scale well: https://arxiv.org/abs/1902.01448
Maybe a few years from now?
Anyways, if you want something that can be verified on a classical computer but cannot be done on a classical computer, see Google's recent quantum supremacy experiment. However, it is extremely contrived. I think the most likely non-contrived application will be simulating simple-ish quantum systems (to help with chemistry experiments and the like), but it's probably still going to be a while before we start seeing that, I think (IANA quantum scientist).
*Edit: to clarify, quantum computers can in theory speed up the process of finding a collision, just not enough to help. Normally you would need O(2^256) operations to find a SHA-512 collision (birthday paradox); quantum search (BHT) gets you down to O(2^170), which is better, but still way too high. Most crypto assumes anything greater than 2^128 is secure by a comfortable margin.
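(For reference, those numbers are just the generic collision bounds evaluated at n = 512 output bits:)

    \text{classical (birthday bound): } O\big(2^{n/2}\big) = O\big(2^{256}\big),
    \qquad
    \text{quantum (Brassard-Hoyer-Tapp): } O\big(2^{n/3}\big) \approx O\big(2^{170.7}\big)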
Now, instead of the random string, you have a random circuit, and its output is the goal. The random circuit is as useful to you as the random string, but using a quantum computer you can compute its state as it runs; with a traditional computer that is very hard.