Hacker News

Doesn't that Schneier quote only apply to symmetric algorithms (e.g. AES)? I thought asymmetric algorithms (e.g. public-key systems like RSA, as in the original question) have completely different characteristics and rules.

"NIST key management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit symmetric keys."


Although you're right that a 256-bit RSA key is weaker than a 256-bit AES key, this is not an intrinsic property of symmetric vs asymmetric encryption. It has to do with how you search the key space. To break 256-bit RSA encryption, it is sufficient (not known whether it is necessary) to factor the 256-bit modulus n, whereas to break 256-bit AES, you have to essentially run AES with the whole keyspace of 2^256 keys.
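To make the "factor the modulus" point concrete, here is a toy sketch (with made-up tiny primes, nothing like real key sizes) showing that factoring n immediately yields the private exponent:

```python
# Toy illustration (not real crypto): breaking a tiny RSA key by
# factoring its modulus. All numbers are hypothetical examples.
from math import isqrt

def factor(n):
    """Trial-divide n; returns (p, q) with p * q == n."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

# A tiny "RSA" key: modulus n = p * q, public exponent e.
p, q = 61, 53
n, e = p * q, 17                 # public key: (n, e)

# The attacker only sees (n, e); factoring n breaks the key.
p2, q2 = factor(n)
phi = (p2 - 1) * (q2 - 1)
d = pow(e, -1, phi)              # recovered private exponent (Python 3.8+)

m = 42                           # some message
c = pow(m, e, n)                 # encrypt with the public key
assert pow(c, d, n) == m         # decrypt with the recovered private key
```

At real sizes the trial division is hopeless, of course; the point is only that the attacker's problem reduces to factoring, not to searching all 2^256 keys.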

If you are trying to brute-force guess an RSA key, the fact that the factors which produce the modulus are prime has the potential to greatly reduce the search space. For small keys it is practical to precompute this limited search space, so of the 2^N possible keys you can be fairly sure you only need to check a much smaller subset of them. You still have to use brute force, but unlike with symmetric methods your search space can be much smaller than the entire keyspace. For larger keys this precomputation (and storage) becomes impractical with current tech. I have no idea off the top of my head where the cutoff currently falls relative to 256-bit keys, though.

I doubt this approach makes sense for any key size. Let's assume a 50-digit number to factor. According to http://primes.utm.edu/howmany.shtml, there are over 10^21 primes less than 10^24. Let's assume you manage to store 10^3 of them in a byte of memory. Then you would need 10^18 bytes, or a million terabytes of storage, to hold those primes.

That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes). Then you would need around a thousand terabytes (still assuming you manage to compress your 64-bit-or-so primes to fit 1000 of them in 8 bits).
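Those prime counts can be sanity-checked with the prime number theorem, pi(x) ≈ x / ln(x); a quick sketch (an approximation, not an exact count):

```python
# Estimate how many primes lie below x via the prime number theorem.
import math

def approx_prime_count(x: float) -> float:
    """pi(x) ~ x / ln(x); a rough lower-bound-ish estimate."""
    return x / math.log(x)

print(f"{approx_prime_count(1e24):.2e}")  # ~1.8e22 primes below 10**24
print(f"{approx_prime_count(1e20):.2e}")  # ~2.2e18 primes below 10**20
```

That agrees with "over 10^21 primes less than 10^24" and "around 10^18" primes near 10^20.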

I'll assume you build the machine with those drives and can do trial divisions in parallel: 1000 CPUs, a billion divisions per CPU per second. You would still need 10^6 seconds to do a complete search over all the primes. That is about twelve days.
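For what it's worth, the arithmetic works out like this (all figures are the assumptions stated above, not measurements):

```python
# Back-of-envelope timing for the hypothetical parallel
# trial-division machine described above.
primes_to_try = 10**18          # primes around 10**20
per_byte = 1000                 # assumed (very generous) compression
storage_tb = primes_to_try / per_byte / 1e12
print(storage_tb, "TB")         # ~1000 terabytes of prime storage

cpus = 1000
divisions_per_second = 10**9    # per CPU
seconds = primes_to_try / (cpus * divisions_per_second)
print(seconds, "seconds =", seconds / 86400, "days")  # 1e6 s ~ 11.6 days
```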

For comparison, https://sites.google.com/site/bbuhrow/home claims:

"I've seen C90's factored in less than 4 minutes on an 8 core box."

According to what I understand, C90 is tech speak for a 90-digit composite number (here, the product of two primes).

Then, you would need 10^18 bytes, or a million terabytes of storage to store those primes. That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes)

Actually, a million terabytes (an exabyte) is really quite feasible. Not on an individual scale, but I would be very surprised if governments (all of them, really) did not have this sort of thing set up.

Do we actually know that it's not an "intrinsic property of symmetric vs asymmetric encryption"? It seems like finding an asymmetric algorithm that's comparably efficient to our symmetric algorithms is a long-outstanding problem in crypto.

Perhaps there's a fundamental limitation here?

Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES. The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.

Also, our knowledge of complexity theory is not sufficient to show that cryptography is even possible. I suspect that improvements in our knowledge of complexity theory will greatly improve our cryptographic primitives, in both security and efficiency.

Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES.

What I hear you saying is "AES was designed with a practical implementation in mind, whereas asymmetric constructions were more 'discovered' from theoretical work that's often unwieldy when reduced to practice". Is this about right?

The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.

I dunno. http://en.wikipedia.org/wiki/Merkle%27s_Puzzles always seemed pretty down-to-Earth to me, but they're inefficient as heck too. :-)

While it is true that AES was designed with practicality in mind, that is not what the difference between a theoretical construction and a practical construction is about. The theory of cryptography is based on complexity-theoretic arguments (or in some cases, information-theoretic arguments) about cryptographic constructions, essentially showing that any algorithm that can be used to violate some security property of the system can be used to solve some hard problem. For example, in the case of the Goldwasser-Micali system, any chosen plaintext attack can be used to solve the quadratic residuosity problem efficiently. On the other hand, there is no such proof for AES; the evidence in favor of the security of AES is heuristic, based on a combination of statistical tests, resistance to known attacks on block ciphers, and other measures that have been developed over the past few decades.

This is not to say that AES should not be trusted. AES is a fine cipher, it is efficient, and unless someone can show us a practical attack the heuristic evidence is pretty strong.

Now, as for Merkle puzzles, that system is not considered secure by cryptographic standards. A cryptographic construction is not secure unless it requires the adversary to do work that is exponential in some parameter in the system (the security parameter), while parties that know some secret (such as a key) only do work that is polynomial in all parameters of the system. In the case of RSA, for example, parties that are aware of the secret key must do work that is cubic in the security parameter, while the adversary must do work that is exponential in the cube root of the security parameter. Whether such systems actually exist is still an open question, as it turns out; a positive answer to this question would imply that P does not equal NP. Cryptographers generally assume certain truths about complexity theory, beyond the P vs. NP problem, and cryptography research has actually opened new areas of complexity theory that are based on such assumptions (such as the notion of knowledge complexity, which emerged from the work on zero knowledge proof systems).

Thanks for explaining all this.

The reason I brought up Merkle puzzles is because they depend on a symmetric cipher or one-way hash function, giving them one foot in that first category.

Using elliptic-curve cryptosystems you may be able to reduce PK key sizes to near symmetric crypto lengths (within a factor of 2-4).
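For reference, these are the approximate equivalences from NIST SP 800-57 (the same document the quote upthread cites; ECC values are the minimum key sizes in its table):

```python
# Approximate NIST SP 800-57 key-size equivalences:
# bits of symmetric security vs. the asymmetric key sizes
# estimated to provide comparable strength.
equivalent_key_bits = {
    # symmetric : (RSA/DH modulus bits, minimum ECC key bits)
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 512),
}

for sym, (rsa, ecc) in equivalent_key_bits.items():
    print(f"{sym}-bit symmetric ~ {rsa}-bit RSA ~ {ecc}-bit ECC "
          f"(ECC is {ecc // sym}x the symmetric size)")
```

Note that by this table ECC comes in at roughly 2x the symmetric key length, while RSA blows up to 60x at the 256-bit level.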


But remember that before we build our Dyson sphere, we may get quantum computers, which makes our bruteforcing polynomial rather than exponential. Cryptosystems based on discrete log, EC discrete log or prime factorization (pretty much all existing crypto infrastructure) are vulnerable.

More on this http://en.wikipedia.org/wiki/Post-quantum_cryptography

which makes our bruteforcing polynomial rather than exponential

I think that's wrong. Grover's algorithm lets us speed up any brute-force search, but not exponentially. You just take the square root of the run time.


Shor's algorithm lets us factor numbers in polynomial time. This would break RSA, but not every algorithm in general. I think, but I'm not 100% sure, that elliptic curve methods have no known polynomial time quantum algorithms, for example.


A version of Shor's algorithm applies to elliptic curve crypto.


The person quoting Schneier said: "... will be infeasible until computers are built from something other than matter and occupy something other than space"

So are quantum computers not from matter and not in space? :)

They (and Schneier) were talking about symmetric encryption, which does not rely on the things quantum computing is going to affect.

But remember that before we build our Dyson sphere, we may get quantum computers...

Does quantum computing somehow get around the fundamental energy requirements?

Yes. The fundamental energy requirements come out of the destruction of information from a bit flip (or from a 2 bit -> 1 bit gate). Qubits don't undergo that transition - until the final measurement, they're in a superposition of both states, so the transformations are really just rotations and reflections of a state, which in principle are not subject to the same requirements. Also, the real power of quantum computing comes from the ability to 'test multiple answers' at once, at least in principle. In reality, we've only figured out how to do this for a small subclass of problems.

For symmetric cryptosystems search difficulty is halved (a 256 bit AES key under a quantum bruteforce becomes as 'hard' as a 128 bit key under conventional bruteforce).

For asymmetric systems, search is reduced to polynomial (if using the discrete log, EC discrete log or prime factorization).

Doesn't halving the search difficulty mean dropping 1 bit from the key length? So 256->255 rather than 256->128?

The numbers are correct, but it was written in an odd way. Grover's algorithm gives a square-root speedup, so the exponent gets halved.
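A quick sketch of that arithmetic (just illustrating the exponent halving, not simulating Grover's algorithm):

```python
# Effective-security arithmetic for Grover's algorithm: a search over
# 2**k keys takes on the order of sqrt(2**k) = 2**(k/2) quantum steps,
# so the effective key length is halved, not reduced by one bit.
import math

def grover_effective_bits(key_bits: int) -> float:
    classical_steps = 2.0 ** key_bits            # brute-force search space
    quantum_steps = math.sqrt(classical_steps)   # Grover's square-root speedup
    return math.log2(quantum_steps)              # back to "bits of work"

print(grover_effective_bits(256))   # 128.0: a 256-bit key behaves like 128-bit
```

Dropping one bit would be a mere 2x speedup; taking the square root of 2^256 is a 2^128x speedup.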

For some public key systems, the search becomes polynomial. Luckily, we know of ones for which that is not the case (as far as we know), and so if a practical quantum computer could be built we would just deploy new cryptosystems (pricey, but not end-of-the-world pricey).

Absolutely, because the attack vectors are different.

Notice that the problem in question is: 'is a brute-force attack against a 256-bit AES encrypted document feasible?'.

But only that: there is no cryptanalysis involved. So read it with that statement in mind.

Edit: And notice that any 256-bit key encryption algorithm is as safe as AES under brute-force.

With RSA, knowing either of the factors is enough to break the key. Thus, you can brute-force 256-bit RSA with the same number of tries as 128-bit AES.

Or, in other words, when you create a 256-bit RSA key, it's guaranteed that one of the factors has at most 128 bits. Thus, it's only as strong as a 128-bit AES key against brute force.
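A toy demonstration of why the smaller factor is bounded by half the modulus bits, using a hypothetical 16-bit modulus in place of a 256-bit one:

```python
# Why a 256-bit RSA modulus falls to a 2**128-scale search: n = p * q
# implies min(p, q) <= sqrt(n), so trial division only has to reach
# the halfway point in bits. Scaled-down toy example:
from math import isqrt

p, q = 251, 241                      # two 8-bit primes (made-up example)
n = p * q                            # a 16-bit modulus

assert n.bit_length() == 16
assert min(p, q) <= isqrt(n)         # smaller factor is below sqrt(n)
assert min(p, q).bit_length() <= 8   # at most half the modulus bits

# Brute force only needs to go up to sqrt(n):
small = next(d for d in range(2, isqrt(n) + 1) if n % d == 0)
print(small, n // small)             # recovers both factors
```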

But anyway, that's a moot point, because cryptanalysis exists, and there are better-than-brute-force attacks against both algorithms.

NIST citation: Page 64 of Recommendation for Key Management [1]

[1] http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_p...

