
Ask HN: How will we handle security when computation gets even faster? - activatedgeek
The current encryption/hashing algorithms are based on the premise that it is virtually impossible to generate all possible permutations of the character set to reverse a hash or decrypt a byte stream in meaningful time.

How are we planning to address this in the future when computational power grows way beyond what we have today? Just curious what is being done to address this.
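The premise can be illustrated with a toy brute-force preimage search (purely illustrative; the 3-letter "password" and tiny character set are assumptions that make the search trivially small compared to any real key space):

```python
import hashlib
from itertools import product

# Toy illustration: reversing a hash by trying every string over a
# character set. Feasible here only because the search space is
# 26**3 = 17,576 candidates; real key spaces are astronomically larger.
charset = "abcdefghijklmnopqrstuvwxyz"

def brute_force(target_hex, length=3):
    for candidate in product(charset, repeat=length):
        word = "".join(candidate).encode()
        if hashlib.sha256(word).hexdigest() == target_hex:
            return word.decode()
    return None

target = hashlib.sha256(b"cat").hexdigest()
print(brute_force(target))  # cat
```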
======
BertMacklin
You're talking about a brute-force attack, right? That's not just something our
encryption algorithms rely upon; resisting it _is_ what encryption means. I
don't know much about this, but I suppose if computation gets way faster we can
just scale up the encrypted values proportionally (larger key size/output text).

For example, we can generate larger primes for use in the RSA algorithm on the
faster computers, and it would take proportionally more time to factor the
resulting numbers.
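A back-of-the-envelope sketch of the scaling argument (a toy calculation, not real crypto; the specific key sizes are illustrative assumptions):

```python
# Toy illustration: brute-forcing an n-bit key requires up to 2**n
# guesses, so each added bit doubles the attacker's work.
def brute_force_guesses(key_bits):
    return 2 ** key_bits

# If computers get ~1000x faster, adding just 10 bits to the key
# (2**10 = 1024) restores the original security margin.
print(brute_force_guesses(128 + 10) // brute_force_guesses(128))  # 1024
```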

The real problem arises when the complexity of the required computation
(prime factorization, in the case of RSA) changes class. For example, Shor's
algorithm can factor large integers in polynomial time on a quantum computer:
[https://en.wikipedia.org/wiki/Shor%27s_algorithm](https://en.wikipedia.org/wiki/Shor%27s_algorithm)
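A rough sketch of the classical half of Shor's algorithm, for intuition. The quantum computer's only job is to find the order r of a base a mod N; here the order is found by brute force, which is exactly the step that is exponential classically (the tiny N = 15 is an illustrative assumption):

```python
from math import gcd

# Classical post-processing of Shor's algorithm: given the order r of
# a mod N (the smallest r with a**r % N == 1), recover factors of N.
def order(a, N):
    r, x = 1, a % N
    while x != 1:  # exponential classically; polynomial on a quantum computer
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    assert gcd(a, N) == 1  # otherwise gcd(a, N) already gives a factor
    r = order(a, N)
    if r % 2:
        return None  # need an even order; retry with a different base a
    y = pow(a, r // 2, N)
    p = gcd(y - 1, N)  # a**r - 1 = (a**(r/2) - 1)(a**(r/2) + 1) = 0 mod N
    if 1 < p < N:
        return p, N // p
    return None

print(shor_factor(15, 7))  # (3, 5)
```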

~~~
activatedgeek
I'm sorry to be slightly short of knowledge here, but could you elaborate on
why encryption algorithms don't rely on the fact that brute force is
physically not meaningful?

And indeed, it goes without saying that if we follow the same computational
paradigm, we might just as well scale up the values that we currently use.

My question was more about what happens if the computational paradigm changes.
As in the example you mentioned, the problem becomes solvable in polynomial
time. What could be the basis of the next generation of encryption/hashing?

~~~
BertMacklin
> could you elaborate on why encryption algorithms don't rely on the fact that
> brute force is physically not meaningful

I didn't mean that the infeasibility of brute-forcing an encrypted text is not
useful; I was rather suggesting that it is precisely the definition of
encryption. I was trying to differentiate between:

1. Using brute force to recover the original text from the encrypted text
without attacking the encryption method itself. Resistance to this is an
inherent property of encryption, and it does not pose a problem, since you can
scale up the encrypted values to keep brute-forcing infeasible on the faster
computers.*

2. Breaking the algorithm instead.

> As in the example you mentioned, the problem becomes solvable in polynomial
> time

I think the issue in the example is not that the computing method changed, but
that the particular hard problem our encryption relies on (prime factorization)
has a polynomial-time solution on those computers. The point is that this shift
in computing does not break all encryption algorithms; there are other schemes
based on problems with no known efficient solution (yet):
[https://en.wikipedia.org/wiki/Lattice-based_cryptography](https://en.wikipedia.org/wiki/Lattice-based_cryptography).
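A toy sketch of the idea behind lattice-based schemes, encrypting a single bit with Learning With Errors (LWE). Everything here is a hypothetical illustration (the parameters q, n, m are far too small for real security); the point is that security rests on recovering s from (A, A·s + e), a problem with no known efficient quantum algorithm, rather than on factoring:

```python
import random

# Toy LWE bit encryption (insecure, illustrative parameters).
q, n, m = 257, 8, 16

def keygen(rng):
    s = [rng.randrange(q) for _ in range(n)]                      # secret key
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]  # public matrix
    e = [rng.choice([-1, 0, 1]) for _ in range(m)]                # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)  # recovering s from (A, b) is the hard problem

def encrypt(pub, bit, rng):
    A, b = pub
    r = [rng.randrange(2) for _ in range(m)]  # random 0/1 mix of the samples
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    # d = bit*(q//2) + small noise, so round to the nearer of 0 and q//2.
    return 1 if q // 4 < d < 3 * q // 4 else 0

rng = random.Random(0)
s, pub = keygen(rng)
print([decrypt(s, encrypt(pub, bit, rng)) for bit in (0, 1, 1, 0)])  # [0, 1, 1, 0]
```

The accumulated noise is at most m = 16 per ciphertext, comfortably below the q//4 = 64 rounding threshold, so decryption always recovers the bit at these parameters.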

* All of this is assuming that everyone has access to the same class of computers. Obviously if someone gets their hands on a drastically faster computer while everyone else is still using smaller keys/prime numbers, then you have a problem.

