My understanding is that there is significant work in theoretical quantum computing done by you and many other people. The results you present seem impressive. Congratulations.
On the other hand - everything I read, hear, and try to interpret about D-Wave seems like nonsense. They show approximate computations and compare them to classical algorithms that provide exact results. But classical approximation algorithms work much better than their quantum computers. I do not see how scaling this approach could lead to anything that you describe in your paper.
In this regard - is my understanding of D-Wave correct, or does it provide some value beyond what classical computers offer?
Can D-Wave's or IBM's machines be used to do the computations described in your paper?
Is their approach worth anything?
Of course I'm not asking about now - my question is whether there is anything connected to exact results (like factoring) and exponential speed-up in the D-Wave approach.
I could buy this, but they have not yet solved any problem that was not already solvable with probabilistic classical algorithms.
I would also give them credit if they hadn't solved any new problem yet but had shown a real quantum computer at work - e.g. one that could factor a 50-digit number. Or if they were on the way to doing this somehow and had made understandable progress in the area, even though they haven't fully succeeded yet.
What they do, to my understanding, is:
* approach a problem that is hard in general,
* solve only a simple and already solvable case,
* when asked whether it may not actually be quantum, say that even if it were not quantum, that would be fine because the problem is hard (while it is not),
* when asked about the problem being easy, say that maybe the problem is easy, but this is quantum.
Do I understand this correctly? I would really appreciate more explanation
At no point in human history has this sort of "detailed theoretical wanking" turned into progress in science; the hardware comes first.
Our goal in this paper was to better understand the resources required to factor with a quantum computer. These estimates can then be fed into decisions about the rate post-quantum cryptosystems need to be deployed. We don't make predictions about the expected capabilities of hardware over time.
We do frame our results in terms of specific hardware assumptions (e.g. particular gate speeds and error rates), but most of the improvements we make are about how to do arithmetic, and so they generalize beyond these assumptions.
You do both, but the feasibility part is in the form of hardware assumptions that don't comment on the current or future state of hardware.
The time to factor one giant number is interesting, but so too is how this method applies to other, especially more common, problems.
If you were to, very inadvisably, use RSA with 128-bit keys (maybe that's what they were asking?) then yes, it would take minutes. But you can already break such keys in minutes using classical computers.
EDIT: the report suggests D-Wave could be at 5640 qubits by 2020, doubling approximately every two years; 12 more doublings = 24 years from now.
We are several theoretical and engineering breakthroughs away from a scalable quantum computer with sufficiently many error-corrected, logical qubits to do something useful with Shor's algorithm. Current error rates need to decrease by a factor of 10 - 100. The current state-of-the-art physical qubit count needs to increase by a factor of at least 10,000.
If we assume the number of usable physical qubits doubles every year while the decoherence rate halves every year, it's plausible a quantum computer could be designed to break 2048-bit RSA in slightly under 20 years. To be generous, we'll also assume the implementation and engineering work doesn't meaningfully increase that estimate once the design is finished.
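A back-of-the-envelope sketch of that timeline. The starting figure of ~100 usable physical qubits is my illustrative assumption, not from the paper; the ~20 million physical-qubit target is the paper's estimate for breaking 2048-bit RSA:

```python
import math

# Illustrative assumptions, not authoritative figures:
current_qubits = 100          # rough order of magnitude for today's devices
required_qubits = 20_000_000  # physical-qubit estimate for 2048-bit RSA

# If the usable qubit count doubles every year, the number of
# doublings (and hence years) needed is log2(required / current).
doublings = math.log2(required_qubits / current_qubits)
print(f"{doublings:.1f} doublings -> ~{math.ceil(doublings)} years")
# -> 17.6 doublings -> ~18 years
```

That lands at "slightly under 20 years" only because every year's doubling actually happens; miss a few years and the estimate slips accordingly.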
This is an optimistic forecast, to put it mildly. There are credible arguments from skeptics in the academic community that it's not actually possible to build a quantum computer capable of breaking RSA in practice. Likewise, everything I've mentioned is only in regard to the known unknowns we need to resolve. It's very probable there are a variety of unknown unknowns to contend with as well.
This is also all aside from the possibility (emphasized by the first report I cited) of a looming winter in quantum computing research. In order to actually reach a point where RSA can be broken, the field needs to start paying out the checks it's been writing. This means actually achieving quantum supremacy and developing legitimately useful quantum computers - scalable or otherwise - for industry applications.
Finally, D-Wave's progress isn't relevant here. They're building a quantum annealer, not a general-purpose quantum computer. All the foregoing is based on the idea of building a general-purpose quantum computer to implement Shor's algorithm. Annealing methods aren't applicable.
I thought general-purpose qubit counts were smaller.
Interesting to see the doubling. I've been asking for a qubit-count "Moore's law" projection for a while, but people in the business keep pushing back, saying it's a bad measure. Even bad measures back in the MIPS and GIGAFLOPS days made some sense, though.
Is it really nonsensical to measure how "fast" we're achieving stable qubits? It feels like it should be a thing.
The reason you get pushback is that Moore's Law exists within a historical context in which it was actually plausible for exponential increases to occur each year, year after year. The field of quantum computing is so nascent that such a context is utterly alien to it. We simply don't have the economies of scale, engineering capabilities, or even the theoretical groundwork required to achieve and sustain those kinds of improvements. The other reason is that transistors and qubits are not directly comparable, and you shouldn't try to infer the growth trajectory of one from the other's.
So to answer your question directly - it's not nonsensical at all to measure and forecast the rate of physical qubit capacity increase. The report I cited in another comment does exactly that, which is how you can derive that 20 year estimate I made. But we don't have any evidence those kinds of annual doublings will be achievable anytime soon, so most of it comes down to educated guessing.
Computers scaled because the process shrank. Most qubits are already one or a few subatomic particles. There's zero feature shrinkage possible at those scales.
Moore's Law did not shrink support machinery much at all. It shrank the lowermost information-processing piece. That lowermost piece in this case cannot be shrunk in the same manner.
The fact the bottom cannot shrink is causing Moore's law to end as it is. There is no way around these quantum limits without a major change in physics, one that is unlikely to ever happen.
None of the pieces you list can scale the same way area did for Moore's Law to work for transistors.
Moore made his prediction in 1965. Transistor area in 1971 was a 10,000 nm process. Modern process is a ~10 nm process. This 1000 fold reduction in linear size (which is a 1 million reduction in area, 1 billion in volume) cannot within any reason happen to your items.
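A quick check of that process-shrink arithmetic:

```python
# Verify the shrinkage factors cited above (process nodes in nanometers).
old_process_nm = 10_000   # ~1971 process node
new_process_nm = 10       # ~modern process node

linear = old_process_nm / new_process_nm
print(linear, linear**2, linear**3)
# -> 1000-fold linear, 1-million-fold area, 1-billion-fold volume
```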
Money and power and count dropped in Moore's law only and precisely because feature size dropped, and this could only happen because the initial features were macro scale, not quantum scale. This cannot happen to qubits - they're already at the end of the quantum line.
We will likely someday get smaller quantum computers. They won't follow a Moore's Law from where they are now, however, for the same reasons nothing other than transistor based items followed a Moore's Law - the physics precludes it.
So the quantum equivalent of Moore's law is adding a single qubit every 18 months, which roughly doubles its speed.
However, symmetric cryptography like AES or ChaCha20 (assuming that the best-case quantum attack is a brute-force search) only gets a square-root speed-up and thus doubling the key bits actually does alleviate the quantum threat.
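A rough sketch of that square-root arithmetic, assuming Grover's algorithm (brute-force search) really is the best quantum attack on the cipher:

```python
# Illustrative security-level arithmetic only. Classical brute force on an
# n-bit key costs ~2^n operations; Grover's search costs ~2^(n/2).
def effective_security_bits(key_bits: int) -> tuple[int, int]:
    classical = key_bits       # ~2^n classical search
    quantum = key_bits // 2    # ~2^(n/2) Grover search
    return classical, quantum

print(effective_security_bits(128))  # (128, 64): ~64-bit quantum security
print(effective_security_bits(256))  # (256, 128): doubling the key length
                                     # restores the ~128-bit security level
```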
My very vague impression of quantum computing is that making a quantum computer with more qubits is quite hard.
They also say later on in the paper that, given certain constraints, it should be possible to use multiple quantum computers with fewer qubits per computer - at the cost of a slight increase in the total number of qubits.
TL;DR: yes, just moving from 2048 to 4096-bit keys will not suffice. But, if you make a composite key out of many 4096-bit keys (such that the key file is now ~1Tb), then you can achieve a reasonable level of security. And yes, I do think the paper is (at least in parts) somewhat tongue-in-cheek!
This lowered 20M-qubit requirement is just a ploy to keep people using 2048 bits, not 4K as required by GNU. It is my understanding that 2048 qubits are already within reach of well-funded state agencies.
As for asymmetric crypto, the ones we have now are susceptible to quantum attacks. There are a bunch of proposals for quantum-proof algorithms, but nothing officially standardized yet. There is a NIST competition going on right now that is trying to find a new suite of asymmetric cryptographic algorithms. You can find an awesome talk on the topic here.
If quantum computers are possible, we should switch to quantum-proof algorithms now rather than later, because then we'd reduce the amount of traffic that can be decrypted at a later point in time. It's quite dangerous right now, considering that basically everything is protected by quantum-non-resistant ciphers.
Take your secret. Write it on a piece of grid paper. Fill EVERY FIELD on another piece of grid paper with random data. Transform every character on the first piece of paper based on the corresponding characters on the second piece of paper and write the result on a third piece of grid paper. Then burn the first piece of paper.
It's not possible to read the 3rd piece of paper without the 2nd now. You also obscure the length by filling the second piece of paper with data.
A one-time pad requires:
1. A secret key as long as the plaintext.
2. A consistent source of true randomness and a way of sampling it such that your secret key is truly random.
3. To never reuse a key.
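The grid-paper procedure above is exactly a one-time pad. A minimal sketch, with XOR standing in for the per-character transform:

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and never reused - the three requirements listed above.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"attack at dawn"
pad = os.urandom(len(message))          # the second piece of grid paper
ciphertext = otp_encrypt(message, pad)  # the third piece of paper
assert otp_decrypt(ciphertext, pad) == message
```

Without the pad, the ciphertext is information-theoretically unreadable; with a reused pad, XORing two ciphertexts leaks the XOR of the plaintexts, which is why requirement 3 is absolute.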
Imagine the ramifications of retrofitting servers to use one-time pads for TLS. Moreover, essentially everything we take for granted in cryptography relies on constructions which use pseudorandom permutations and generators. Even if we resolved all these problems and forged ahead in a brave new world using stream cipher-like constructions based on one-time pads, we'd still have to rethink all of public-key cryptography.
This impracticality is one of the major reasons we moved on from information theoretic security to complexity theoretic security by the mid 20th century.
The version I saw (requires 2 quantum devices): http://www.henryyuen.net/fall2018/scribe2.pdf
New paper that claims to do it with only one quantum device: https://arxiv.org/pdf/1804.00640.pdf
Code is available for most of these proposals because NIST is currently running a PQCrypto Standardization CFP. However I strongly recommend against deploying your own post-quantum cryptosystem for a few reasons:
1. As I've cited elsewhere in this thread, this paper notwithstanding, most leading researchers in the field are frankly bearish about the prospects of 2048-bit RSA being broken in the next 20 years or so. This can obviously change, which leads me to the next few points of consideration.
2. Most of the post-quantum cryptosystems are not well-studied, relatively speaking. They are fundamentally unproven compared to classical systems - we just haven't had enough academic scrutiny yet. It's still premature to say which post-quantum cryptosystems will remain secure under academic scrutiny over the next few years. Many were dropped from consideration after Round 1 of the NIST standardization review.
3. Most (perhaps all) of the implementations are immature and not well-supported. The majority of them are proofs of concept for NIST, not production-ready code. Authors themselves will caution you against using them. We don't have a libnacl for post-quantum public-key cryptography right now, which means that you'd be substantially rolling your own interfaces to underlying primitive implementations. It's hard enough maintaining secure cryptography in production when everything has been done to keep you from footgunning yourself - you won't have such guardrails for post-quantum cryptosystems.
4. Unfortunately, all post-quantum cryptosystems are grievously inefficient in either time or spatial performance compared to classical cryptosystems. As a general rule of thumb, lattice and error-correcting code based cryptography tends to be on the faster side with very large key requirements, and isogeny-based cryptography tends to be on the slower side with lower key size requirements. But all are noticeably slower than classical systems across both dimensions.
You should wait until these cryptosystems have been proven out by academic and industrial research. Google began implementing lattice-based cryptography for TLS in Google Chrome in 2016. Adam Langley has a nice writeup which also includes a few performance concerns. He's also written a blog post to talk about the next round of implementations they'll start experimenting with.
This is still an area of active research. For instance, there are learning-with-errors-based cryptosystems: https://en.wikipedia.org/wiki/Learning_with_errors#Use_in_cr...
But such systems are based on the assumption that "this type of problem is probably hard even with quantum computers". I believe we don't have anything more "quantum-proof" than that.
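To make the learning-with-errors idea concrete, here is a toy Regev-style encryption sketch (parameters far too small to be secure, purely illustrative): the public key is a noisy linear system b = A·s + e (mod q), and recovering the secret s from (A, b) is the problem assumed to be hard even for quantum computers.

```python
import random

# Toy parameters - completely insecure, for illustration only.
n, m, q = 8, 20, 97

def keygen(rng):
    s = [rng.randrange(q) for _ in range(n)]           # secret vector
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.choice([-1, 0, 1]) for _ in range(m)]     # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return s, (A, b)   # (secret key, public key)

def encrypt(pub, bit, rng):
    A, b = pub
    r = [rng.randrange(2) for _ in range(m)]           # random 0/1 mask
    c1 = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    c2 = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    # The accumulated noise is small, so d sits near 0 for bit 0
    # and near q//2 for bit 1.
    c1, c2 = ct
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if abs(d - q // 2) <= q // 4 else 0

rng = random.Random(0)
s, pub = keygen(rng)
for bit in (0, 1, 1, 0):
    assert decrypt(s, encrypt(pub, bit, rng)) == bit
```

With these parameters the total noise in a ciphertext is at most m = 20, safely below q//4 = 24, so decryption always succeeds; real schemes pick parameters so the noise stays bounded with overwhelming probability.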
And of course, there are cryptosystems with unconditional security. Such as OTP.