
Classical Cryptography Will Survive Quantum Computers - dnetesn
http://nautil.us/blog/-how-classical-cryptography-will-survive-quantum-computers
======
wybiral
Quantum computers will be good at a specific range of problems where
interference can be taken advantage of. It drives me crazy how many people
think they just go into a superposition and calculate all outcomes at the same
time. It doesn't work that way.

And we still have to scale them with proper error correction. Who knows what
the timeline for this looks like. Anyone assuming a "Moore's Law" trajectory
isn't taking into account that scaling here isn't as simple as multiplying the
number of transistors in parallel.

To be useful they have to achieve entanglement between new qubits, and error
correction and isolation have to scale along with that.

~~~
wlesieutre
SMBC has a great comic on how quantum computers work [https://www.smbc-
comics.com/comic/the-talk-3](https://www.smbc-comics.com/comic/the-talk-3)

~~~
wybiral
That's a good comic, thanks!

"Wait a minute... That means a qubit corresponds to a unit vector in two-
dimensional Hilbert space!"

And quantum gates/operations on qubits are just unitary linear transformations
on those vector spaces...
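The two observations above fit in a few lines of NumPy (my own sketch, not from the thread): a qubit is a length-2 unit vector, and a gate such as the Hadamard is a unitary matrix acting on it.

```python
import numpy as np

# A qubit is a unit vector in C^2; a gate is a unitary matrix acting on it.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

qubit = np.array([1.0, 0.0])                    # |0>
out = H @ qubit                                 # equal superposition of |0> and |1>
print(out)

# Unitarity (H^dagger H = I) means the norm, i.e. total probability, is preserved.
print(np.allclose(H.conj().T @ H, np.eye(2)))
print(np.isclose(np.linalg.norm(out), 1.0))
```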

~~~
asdlllkasdasd
An unintuitive but important fact is that n qubits do not correspond to n unit
vectors in two-dimensional Hilbert space (but rather to one unit vector in
2^n-dimensional Hilbert space).
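A quick NumPy sketch of this point (mine, not from the comment): tensoring qubits multiplies dimensions, so n qubits give a single unit vector with 2^n amplitudes, not n separate length-2 vectors.

```python
import numpy as np

zero = np.array([1.0, 0.0])                  # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)     # (|0> + |1>) / sqrt(2)

# Build a 3-qubit state with the Kronecker (tensor) product.
state = zero
for _ in range(2):
    state = np.kron(state, plus)

print(state.shape)                               # (8,): 2^3 amplitudes
print(np.isclose(np.linalg.norm(state), 1.0))    # still one unit vector
```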

~~~
tome
And that's how entanglement arises, presumably.

~~~
urgoroger
I think you have the right idea but the wrong terminology. An n-qubit system
requires an amplitude (loosely, a probability) for each of the 2^n possible
basis states. For example, a 2-qubit system is represented by a probability
(technically an amplitude) associated with each of the states 00, 01, 10, and
11. It's not hard to see how this grows exponentially.

Here's an example with classical randomness (which has a close quantum
analogue). If you have two closed boxes with coins in them and shake both
around so that each coin lands 50/50 on either side, you could represent the
state of the system as [1/4, 1/4, 1/4, 1/4]; that is, each possible state of
the two-coin system (heads-heads, heads-tails, tails-heads, and tails-tails)
has a 1/4 chance of being observed when you open the boxes.

Note that entanglement refers to the idea that two bits can be correlated.
That is, the state of one is correlated with the state of another in some
sense. In the example I just gave, the coins are not entangled, since the
observation of one of the coins does not affect the probability of what the
other coin could be. In mathematics we say this is possible if the state of
the 2-coin system [1/4 1/4 1/4 1/4] can be written as a tensor product of the
constituent 1-coin systems [1/2 1/2] and [1/2 1/2], which it can (entanglement
refers to a state where this factorization isn't possible).

That said, we can achieve entanglement with the coins too, by somehow getting
a state vector like [1/2, 0, 0, 1/2] (50% chance that both coins are heads and
50% chance both are tails), which cannot be written as a tensor product of two
1-coin states. Moreover, if you open one box and see it's a heads coin, you
know the other box must also have a heads coin. That's what is meant by
'correlation'. To produce such a state of the coin boxes, I guess you would
need a third party or something to put the two coins in the boxes and deliver
them to you.

My conclusion is that entanglement doesn't have much to do with the
exponential size of the representation of n-qubit states, and I hope I gave a
good example of what entanglement really means. While (randomized) classical
analogues to quantum effects are not perfect, I think they demonstrate these
effects more understandably.
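The factorization test described above fits in a few lines (my own illustration, not from the comment): for a two-coin state written as a length-4 vector, being a tensor product of two 1-coin states is equivalent to its 2x2 reshape having rank 1.

```python
import numpy as np

def is_product(state4):
    # A 2-coin (or 2-qubit) state factors as a tensor product of two
    # 1-coin states iff its 2x2 reshape has rank 1.
    return np.linalg.matrix_rank(np.array(state4).reshape(2, 2)) == 1

print(is_product([0.25, 0.25, 0.25, 0.25]))  # True: [1/2, 1/2] (x) [1/2, 1/2]
print(is_product([0.5, 0.0, 0.0, 0.5]))      # False: the correlated state
```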

~~~
asdlllkasdasd
Parent is correct — entanglement is exactly due to the fact that there exist
more vectors in Hilbert space than tensor products (~ tuples) of unit vectors.

------
openasocket
I'm a little surprised we didn't have decent quantum-resistant crypto even 20
years ago. All our public key crypto is based on the hardness of integer
factoring and the discrete logarithm problem, neither of which are known to be
NP-complete or NP-hard. Quantum computers can efficiently solve those two
problems, but they haven't been shown to be able to handle any NP-complete
problems. There have been previous attempts to make cryptosystems based on NP-
complete problems like the knapsack problem, but they never became as popular
as RSA. Anyone have any guesses why we didn't see any popularly used crypto
based on the hardness of, say, 3SAT or the knapsack problem?

~~~
hannob
People tried and it didn't work very well, see e.g.:
[https://en.wikipedia.org/wiki/Knapsack_cryptosystems](https://en.wikipedia.org/wiki/Knapsack_cryptosystems)

It seems it's just really hard to create a new public-key cryptosystem, so
people stuck with what worked as long as the quantum threat didn't sound too
scary. Now it sounds scary enough that people are looking for alternatives.

~~~
tptacek
To add to this:

It's still sort of just scary in the abstract. There's no evidence that a
quantum computer that could attack even weak incarnations of DH or RSA is
within the realm of feasibility, and if things break in engineering's favor
over the next decade we still probably won't be there. It's possible that we
might never get there!

The big problem is that there's a chance (a low chance) that we might get
there within the (long) lifespan of our existing public key algorithms.
There's a mindset in cryptography engineering that we design for secrets that
must be kept for many decades. So: the "plausible" threat we face here is
decades-hence retroactive decryption.

It's useful to keep this in mind when doing _security_ engineering, because PQ
isn't "free": we have to trade things off to get it. It's possible that the
modern PQ schemes we're pursuing will have grave flaws that put them within
reach of _classical_ cryptanalysis within just a few years. Even more
importantly, it's not only possible but _likely_ that there are implementation
errors --- like using schoolbook RSA, or failing to validate a curve point ---
that we won't fully understand for a decade after we adopt a new PQ scheme.

And, just to make things more fun, we're not looking at 1 PQ scheme but at
3-4, some of them entirely unrelated to each other.

Against all this you should also keep in mind the other forces that push PQ to
the front of the field of cryptography's consciousness. Most serious
cryptography is done academically. PQ has created a gold rush of paper topics.
Naturally, it's going to seem super important if you pay attention to IACR
ePrints.

If what you're trying to protect are conventional financial transactions, or
the integrity of applications running today that will have been rebooted at
least once before 2050, there's a good chance that PQ is bad engineering for
you.

------
hackermailman
The Post-Quantum Crypto Summer School lectures cover this in more detail if
anybody is interested
[https://2017.pqcrypto.org/school/schedule.html](https://2017.pqcrypto.org/school/schedule.html)
specifically the Quantum Algorithms lectures and notes by Ronald de Wolf.

------
dsacco
For those who would like to get a better idea of the cryptography described in
this article, here is an overview of the usually encountered computational
hardness problems in lattice-based cryptography:

1. Shortest Vector Problem (SVP) - Given a lattice with a basis _B_, find
the shortest vector in the lattice.

2. Closest Vector Problem (CVP) - Given a lattice with a basis _B_ and some
target vector _t_, find the lattice point _v_ which is closest to _t_.

3. Shortest Independent Vector Problem (SIVP) - Given a lattice basis _B_ in
Z^(_n_ x _n_), find _n_ linearly independent lattice vectors [_s_1, ...,
_s_n], where _s_i is in the lattice of basis _B_ for all _i_.

4. Learning With Errors (LWE) - Given integral starting parameters _n_, _m_
and _q_, with a probability distribution _X_ on Z_q_, distinguish between
an input pair (_A_, _v_) and a vector _e_ in (Z_q_)^_m_. _A_ is chosen
uniformly from (Z_q_)^(_m_ x _n_) and _v_ is (typically) chosen uniformly
from (Z_q_)^_m_.

In other words, try to recover the secret key _s_ in (Z_q_)^_n_ by
examining a sequence of approximate linear equations. The error introduced is
what makes this computationally hard; otherwise it could be done in polynomial
time using Gaussian elimination.

5. Learning Parity With Noise (LPN) - Given a secret length _l_, a noise
rate 0 < _r_ < 0.5 and the number of samples _q_, find the random _l_-bit
secret string _s_ in (Z2)^_l_ using _q_ samples. This is the search problem
version; there is also a decisional variant.
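As a toy illustration of the LWE problem in item 4 (my own sketch; the parameters here are absurdly small and carry no security), the added errors are what rule out plain Gaussian elimination and reduce an attacker to searching over candidate secrets:

```python
import itertools

import numpy as np

# Toy LWE-style instance; real parameters are vastly larger.
q = 17
A = np.array([[3, 14], [7, 2], [11, 9], [5, 8], [1, 13]])
s = np.array([6, 10])                    # the secret
e = np.array([1, -1, 0, 1, -1])          # small errors
b = (A @ s + e) % q                      # published samples (A, b)

def residual(cand):
    # Distance of A*cand from b, accounting for mod-q wraparound.
    r = (b - A @ np.array(cand)) % q
    return int(np.minimum(r, q - r).sum())

# Without e, b = A s (mod q) pins down s by linear algebra alone; with e,
# the attacker falls back to searching for the candidate of least residual.
best = min(itertools.product(range(q), repeat=2), key=residual)
print(best, residual(best))
```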

There are good surveys that cover lattice-based cryptography in general or
particular computational problems in greater depth; for example, try [1], [2],
or [3].

________________________________________

1. Peikert, Chris: A Decade of Lattice-Based Cryptography -
[https://web.eecs.umich.edu/~cpeikert/pubs/lattice-survey.pdf](https://web.eecs.umich.edu/~cpeikert/pubs/lattice-survey.pdf)

2. Micciancio, Daniele and Regev, Oded: Lattice-Based Cryptography (chapter in
_Post-Quantum Cryptography_)

3. Regev, Oded: The Learning With Errors Problem -
[https://cims.nyu.edu/~regev/papers/lwesurvey.pdf](https://cims.nyu.edu/~regev/papers/lwesurvey.pdf)

------
raw23
Correct me if I'm wrong, but the article doesn't describe how 'classical'
cryptography will survive quantum computers. It merely describes how quantum
computers will break RSA and then lightly covers a potential quantum-proof
public/private-key cryptography scheme.

Misleading title?

~~~
dsacco
This is a good point. I found the article's content disappointing, to be
honest. I think it hit a bit of an uncanny valley - it was too shallow to be
comprehensive for a technical math/computer science audience, but too deep
(and emphasizing the wrong things) to cover the ground in a way that would be
appropriate for a non-technical audience.

If I were to write an article like this, I would probably choose a more
explicit audience from the outset, then cover either a depth-first or breadth-
first approach to the subject. A breadth-first approach would be good for a
non-technical audience: here are the general types of cryptosystems, here are
the ones threatened by quantum computers, here are the ones that are not, here
are the current proposals for post-quantum resistant cryptosystems.

On the other hand, were I writing for a technical audience I would assume an
understanding of why quantum computers threaten classical cryptography (and
why e.g. symmetric encryption is mostly safe), then take a deeper look at each
of the post-quantum proposals.

------
mitchtbaum
Which elliptic curves are quantum resistant?

[https://crypto.stackexchange.com/questions/35482/which-
ellip...](https://crypto.stackexchange.com/questions/35482/which-elliptic-
curves-are-quantum-resistant/35486)

~~~
tptacek
None of them, in the conventional sense. The ECDLP itself is breakable with
Shor's Algorithm more easily, in experimental simulation, than RSA.

Curves still have a role in PQ systems through _isogenies_ , which are
mappings between entire curves. But those schemes are very different than the
elliptic curves we use today; you can think of them as systems that exchange
whole curves rather than curve points.

------
sscarduzio
Why can’t we just keep on using RSA with a huge key?

~~~
UncleMeat
Classical computers take a polynomial amount of time (in key size) to perform
RSA but a super-polynomial amount of time (in key size) to break it. Quantum
computers can break RSA in polynomial time.

For classical systems, the cost to break RSA grows much faster than the cost
to perform it, so we can always outpace more powerful adversaries with a less
powerful computer. This is not true for quantum machines. I cannot produce a
key size that will be small enough that I can work with it but large enough
that it is secure against attack.
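The scaling argument above can be made concrete with back-of-the-envelope cost curves (my own sketch; constants are omitted, so only the growth rates matter, not the absolute numbers): the best known classical attack (GNFS) grows sub-exponentially in the key size, while Shor's algorithm grows only polynomially, so doubling the key multiplies the classical attack cost enormously but the quantum cost by only about 8x.

```python
import math

def classical_break_cost(bits):
    # Heuristic GNFS running time:
    # exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), constants dropped.
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def quantum_break_cost(bits):
    # Shor's algorithm is polynomial, roughly O(bits^3) gate operations.
    return bits ** 3

for bits in (1024, 2048, 4096):
    print(f"{bits}-bit key: classical ~{classical_break_cost(bits):.1e}, "
          f"quantum ~{quantum_break_cost(bits):.1e}")
```

The point is the gap between the two curves: growing the key keeps outrunning classical attackers, but never outruns a polynomial-time quantum attacker.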

------
sigmaprimus
Interesting that SHA-256 is mentioned as approved by the government; didn't
the NSA design it? Also, I seem to recall another article on HN this year
about the first SHA-1 collision being found. Maybe it won't take a QC to
break after all. My mistake, I just reread it; it was AES-256 that was
mentioned, sorry.

~~~
dsacco
SHA-256 (SHA-2) is incomparable to SHA-1. The (first) successful SHA-1
collision attack executed by Google et al earlier this year implies absolutely
nothing about the computational hardness of SHA-2.

There is no evidence whatsoever that quantum computers will break SHA-256; I'd
be willing to personally stake essentially all of my assets on the bet that it
won't be broken within our lifetimes. I also cannot think of a credible
cryptographer off the top of my head who has voiced the opinion that quantum
computers pose a threat to the SHA-2 family (or SHA-3, for that matter).

At best I could see a complexity reduction, but nothing in polynomial time.
Cryptographic primitives have such attacks published against them (and survive
them) all the time.
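The "complexity reduction" in question is usually Grover's algorithm (my own note, not from the comment), which gives at best a quadratic speedup for unstructured search, leaving a 256-bit preimage search still far out of reach:

```python
import math

# Grover's algorithm quadratically speeds up brute-force search, so a
# 256-bit preimage search drops from ~2^256 to ~2^128 oracle queries.
classical_queries = 2 ** 256
grover_queries = math.isqrt(classical_queries)   # sqrt(2^256) = 2^128

print(grover_queries == 2 ** 128)                # still astronomically infeasible
```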

~~~
sigmaprimus
Ok, first off I didn't say it would be broken by a QC, but that is beside the
point. As a person who has been around long enough to remember when people
stated our computers would never need more than 640K, I would never stake all
my assets on what will come in the future, but maybe you're not long for this
world... happy new year.

------
douglaswlance
The article neglects to mention the strange behavior of quantum computers that
allows them to seek ALL possible keys simultaneously.

~~~
ifdefdebug
"If you take just one piece of information from this blog: Quantum computers
would not solve hard search problems instantaneously by simply trying all the
possible solutions at once." - Scott Aaronson

[https://www.scottaaronson.com/blog/](https://www.scottaaronson.com/blog/)

