

Matching cryptographic primitive strengths - moonboots
https://www.imperialviolet.org/2014/05/25/strengthmatching.html

======
pbsd
The argument that 128 bits of security are enough is quite reasonable [3].
There are, however, a couple of technicalities with using 128-bit-security
primitives and/or keys:

- Multi-target attacks. In the case of symmetric ciphers, one has to take
into account that the attacker may have many messages encrypted under many
keys, and that success is achieved once at least one of them is broken. [1]
exemplifies this effect with AES-128: with 2^86 time, a machine of size 2^32
has near certainty of breaking at least one out of 2^10 keys. Larger key sizes
effectively render such tradeoffs moot. Note that multi-target attacks do not
affect the speed of elliptic curve discrete logarithm computation: while it is
possible to speed up a discrete logarithm after a previous one (by a factor of
~sqrt(2)), you still need to go through a full 2^128 computation first. So
pairing (say) a 256-bit elliptic curve with AES-256 is not entirely
unreasonable.
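A back-of-the-envelope sketch of that tradeoff (the function is just an illustration of the figures quoted above, under the rough model that time x machine size x number of target keys ≈ 2^keybits):

```python
def multi_target_time_bits(key_bits, machine_bits, targets_bits):
    """Rough multi-target brute-force model, after [1]: a parallel
    machine of size 2^machine_bits attacking 2^targets_bits keys at once
    expects to break at least one key in about
    2^(key_bits - machine_bits - targets_bits) time steps."""
    return key_bits - machine_bits - targets_bits

# The figures quoted above: AES-128, a size-2^32 machine, 2^10 keys.
print(multi_target_time_bits(128, 32, 10))  # -> 86

# The same attack against AES-256 still needs ~2^214 time steps.
print(multi_target_time_bits(256, 32, 10))  # -> 214
```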

- Quantum computers. The last paragraph mentions pairing AES-256, SHA-384,
and 128-bit McEliece as a balanced choice. Quantum computers clearly affected
this choice: the AES key size was chosen to thwart Grover search, SHA-384 was
chosen to thwart the Brassard-Hoyer-Tapp collision-finding algorithm
(although in practice it is no better than classical collision search, due to
its enormous memory requirements), and McEliece for its well-known
post-quantumness. A little-known fact is that Grover also speeds up breaking
McEliece considerably [2].
Thus, in a quantum world the McEliece key size would have to be increased to
get quantum balance (McBits's parametrization achieves 128 bits of classical
security).
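The bit arithmetic behind that balancing act can be sketched as follows (a toy model: the n/2 and n/3 exponents are the standard Grover and BHT query counts, ignoring all constant factors and the memory caveat above):

```python
def grover_bits(key_bits):
    """Grover searches a space of 2^n keys in ~2^(n/2) quantum
    operations, roughly halving effective symmetric key strength."""
    return key_bits // 2

def bht_bits(hash_bits):
    """Brassard-Hoyer-Tapp finds hash collisions in ~2^(n/3) quantum
    operations (ignoring its huge memory requirements)."""
    return hash_bits // 3

print(grover_bits(256))  # -> 128: AES-256 keeps 128-bit quantum security
print(bht_bits(384))     # -> 128: SHA-384 keeps 128-bit quantum collision resistance
```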

- Security margin and operational costs. This is covered well already by
Adam, but is worth emphasizing. With AES, the cost of going 256-bit instead
of 128 is 4 more rounds (14 instead of 10). In the case of SHA-256, upgrading
to SHA-512 actually speeds things up on typical 64-bit CPUs. Except in the
most extreme of cases, this is a very acceptable cost for a comfortable 128
extra bits of security in case AES or SHA-256 gets wounded. In the case of
RSA or Diffie-Hellman,
going from 128 to 256 bits of security gives us an approximate 25x slowdown,
not to mention the increase in key sizes: 128 bits of security margin in this
case is hardly acceptable. The case of elliptic curves is more murky: the cost
of an extra 128 bits of security margin is around 4x, which may or may not be
acceptable depending on the application.
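As a toy illustration (not a benchmark): assuming the usual ~3072-bit and ~15360-bit modulus equivalences for 128- and 256-bit RSA/DH security, and a crude quadratic cost model for the underlying arithmetic, the ballpark slowdowns quoted above fall out. Both the key-size equivalences and the quadratic scaling are assumptions of this sketch:

```python
def rsa_slowdown(bits_from=3072, bits_to=15360):
    """Assumed model: a fixed-exponent RSA/DH operation costs roughly
    O(k^2) in the modulus size k, so quintupling the modulus gives ~25x."""
    return (bits_to / bits_from) ** 2

def ec_slowdown(bits_from=256, bits_to=512):
    """The same quadratic model applied to elliptic-curve field sizes:
    doubling the curve size gives ~4x."""
    return (bits_to / bits_from) ** 2

print(rsa_slowdown())  # -> 25.0, matching the ~25x quoted above
print(ec_slowdown())   # -> 4.0, matching the ~4x quoted above
```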

[1]
[http://cr.yp.to/papers.html#bruteforce](http://cr.yp.to/papers.html#bruteforce)

[2]
[http://cr.yp.to/papers.html#grovercode](http://cr.yp.to/papers.html#grovercode)

[3] I made a similar argument not too long ago, on the specific case of
SHA-256's collision-resistance:
[https://news.ycombinator.com/item?id=7747545](https://news.ycombinator.com/item?id=7747545)

~~~
agl
Thanks for that. I've updated the post to mention multi-target attacks and the
quantum attack against McEliece (which I wasn't aware of, and is saddening
because it appears to make McEliece a much less attractive PQ system).

I picked SHA-384 as an example of a case where one might want to spend
resources on an "unbalanced" primitive because of the history of hash
functions failing to meet their design strengths, rather than for quantum
resistance.

Although, if you are choosing between SHA-256 and SHA-512/384 on modern CPUs,
SHA-256 (in an interleaved tree mode) actually wins for speed:
[http://eprint.iacr.org/2012/476.pdf](http://eprint.iacr.org/2012/476.pdf).

------
gdavisson
The thermodynamic limit cited (Landauer's cost of erasure) to show that 128
bits is enough (barring algorithmic advances) doesn't strictly apply. Charles
Bennett [1] pointed out that any irreversible computation (i.e. one that
involves erasure) can be made reversible by saving all the intermediate
results (rather than erasing them), printing the result, then running the
computation backward and letting it eat up all of the saved intermediate
results. While this approach to computation isn't directly applicable here, it
does show that you can't count on the thermodynamic cost to keep you safe.
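For scale, a minimal sketch of the Landauer cost in question (room temperature and one bit erasure per counted state are assumptions of this sketch):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules(bit_erasures, temp_kelvin=300.0):
    """Minimum energy to irreversibly erase the given number of bits at
    temperature T: k*T*ln(2) per bit (Landauer's principle)."""
    return bit_erasures * K_B * temp_kelvin * log(2)

# Counting through all 2^128 states, one bit erasure per step, at room
# temperature costs on the order of 10^18 J -- unless, per Bennett, the
# computation is run reversibly and the erasures never happen.
print(landauer_joules(2**128))
```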

[1] C. H. Bennett, "Logical Reversibility of Computation" _IBM Journal of
Research and Development_ 17:525-532 (November, 1973),
[http://www.cs.princeton.edu/courses/archive/fall04/cos576/papers/bennett73.html](http://www.cs.princeton.edu/courses/archive/fall04/cos576/papers/bennett73.html)

~~~
pbsd
There is another more fundamental limit, credited to Margolus and Levitin [1],
which puts the maximum speed of a (quantum) gate at 2E / h operations per
second for a system of energy E and Planck constant h. Given the energy of the
sun quoted in the article, this puts the upper limit of bit flips per second
at 2^193. Seth Lloyd's 'Ultimate physical limits to computation' [2] goes into
further detail on this subject.
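A quick sanity check of that figure (the 5.5e24 J input is an assumption: roughly a year of solar energy reaching Earth, one reading of the article's quoted figure that reproduces the 2^193 number):

```python
from math import log2

H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def margolus_levitin_ops_per_sec(energy_joules):
    """Margolus-Levitin bound: a system of average energy E performs at
    most 2E/h elementary (quantum) operations per second."""
    return 2 * energy_joules / H_PLANCK

# Assumed energy budget: ~5.5e24 J.
print(log2(margolus_levitin_ops_per_sec(5.5e24)))  # ~193
```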

However, the notion of zero energy dissipation (scalable quantum computers
will likely need error correction, which implies bit erasure) on any real
system is ludicrous, and I think the Landauer bound models reality better than
Margolus-Levitin.

[1] [http://arxiv.org/abs/quant-ph/9710043](http://arxiv.org/abs/quant-ph/9710043)

[2] [http://arxiv.org/abs/quant-ph/9908043](http://arxiv.org/abs/quant-ph/9908043)

~~~
SAI_Peregrinus
Either way, the limits are so far beyond the computational capability of
humans in the foreseeable future (the next few hundred years) that they're
both massive overkill. When you really NEED that much security, you have to
worry about the loyalty of your nation's military and how good all those
armed guards will be at keeping you from being waterboarded.

------
nightcracker
Another point on the "2 bits per increment": using a Gray code, you need to
flip only one bit per increment to step through all possible 128-bit states.
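A minimal demonstration (standard binary-reflected Gray code; the 8-bit space is just for a quick check):

```python
def gray(n):
    """Binary-reflected Gray code: consecutive codes differ in one bit."""
    return n ^ (n >> 1)

# Check over an 8-bit space: every increment flips exactly one bit.
for n in range(255):
    diff = gray(n) ^ gray(n + 1)
    assert diff != 0 and diff & (diff - 1) == 0  # power of two => one bit set

# The same property holds for 128-bit counters, so visiting all 2^128
# states costs a single bit flip per increment.
```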

~~~
danbruc
Or unary coding...bringing register width into new dimensions.

