> I think Netflix just shows what's the "maximum" speed needed to deliver their service.
Exactly this. Assuming your connection is faster than 3 Mbps, the number they show is neither the bandwidth your ISP is providing you, nor the instantaneous rate at which bits are flowing to your home network. If you have a faster connection, they will send you large chunks of bits at a rate higher than 3 Mbps (up to 35 Mbps from what I've seen), and then send nothing for several seconds.
It seems that Netflix tops out at 5.8 Mbps and 1920x1080 no matter how capable your network connection is -- that's the highest number this test will produce.
Although you're right that a 256-bit RSA key is weaker than a 256-bit AES key, this is not an intrinsic property of symmetric vs asymmetric encryption. It has to do with how you search the key space. To break 256-bit RSA encryption, it is sufficient (not known whether it is necessary) to factor the 256-bit modulus n, whereas to break 256-bit AES, you have to essentially run AES with the whole keyspace of 2^256 keys.
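To make the difference concrete, here's a toy sketch (the numbers are far too small to be secure and are chosen purely so it runs instantly): factoring a small modulus by trial division takes on the order of sqrt(n) steps, while exhaustively searching a symmetric keyspace of the same bit length takes ~2^bits steps.

```python
# Toy illustration, NOT real crypto: breaking a tiny "RSA" modulus by
# factoring vs. breaking a same-size symmetric key by exhaustive search.

def factor(n):
    """Trial division: returns (p, q) with n = p * q.
    Cost grows roughly with sqrt(n), i.e. about 2^(bits/2) steps at
    worst -- and real factoring attacks (GNFS) are far faster still."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1

# 32-bit toy modulus: product of two 16-bit primes.
p, q = 65521, 65537
n = p * q
print(factor(n))  # recovers (65521, 65537) in ~2^16 trial divisions

# A 32-bit symmetric key, by contrast, forces ~2^32 guesses: there is
# no structure to exploit, so every key must be tried against the
# cipher itself.
```

The point is that the modulus has structure an attacker can exploit, so the effective search is vastly smaller than the nominal keyspace; a symmetric key has no such structure.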
If you are trying to brute-force an RSA key, the fact that the factors of the modulus must be prime (distinct primes, in standard RSA) has the potential to greatly reduce the search space. For small keys it is practical to precompute this limited search space, so of the 2^N possible keys you can be fairly sure you only need to check a much smaller subset of them. You still have to use brute force, but unlike with symmetric methods your search space can be much smaller than the entire keyspace. For larger keys this precomputation (and storage) becomes impractical with current tech. I have no idea off the top of my head where that cutoff currently falls relative to 256-bit keys, though.
I doubt this approach makes sense for any size prime. Let's assume a 50-digit number to factor. According to http://primes.utm.edu/howmany.shtml, there are over 10^21 primes less than 10^24. Let's assume you manage to store 10^3 of them in a byte of memory. Then, you would need 10^18 bytes, or a million terabytes of storage to store those primes.
That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes). Then you would need around a thousand terabytes (still assuming you manage to compress your roughly 64-bit primes to fit 1000 of them in 8 bits).
I'll assume you build the machine with those drives and can do trial divisions in parallel: 1000 CPUs, a billion divisions per CPU per second. You would still need 10^6 seconds for a complete search over all the primes. Rounding down, that is a week.
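For anyone who wants to check the arithmetic, here is the estimate above as a script (every figure is an assumption from this thread, not a measurement):

```python
# Back-of-envelope check of the prime-table estimate. All numbers are
# the thread's assumptions, not real-world measurements.

primes_below_1e24 = 10**21      # rough count from the prime counting function
primes_per_byte = 10**3         # the (very generous) compression assumption
storage_bytes = primes_below_1e24 // primes_per_byte
print(storage_bytes)            # 10^18 bytes = a million terabytes (an exabyte)

# Scaled-back plan: primes near 10^20, about 10^18 of them, plus a
# hypothetical machine doing parallel trial division.
primes = 10**18
cpus = 10**3
divisions_per_cpu_per_sec = 10**9
seconds = primes / (cpus * divisions_per_cpu_per_sec)
print(seconds, "seconds ≈", seconds / 86400, "days")  # 10^6 s ≈ 11.6 days
```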
> Then, you would need 10^18 bytes, or a million terabytes of storage to store those primes. That isn't practical, so let's scale back to primes around 10^20 (around 10^18 different primes)
Actually, a million terabytes (an exabyte) is really quite feasible. Not on an individual scale, but I would be very surprised if governments (all of them, really) did not have this sort of thing set up.
Do we actually know that it's not an "intrinsic property of symmetric vs asymmetric encryption"? It seems like finding an asymmetric algorithm that's comparably efficient to our symmetric algorithms is a long-outstanding problem in crypto.
Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES. The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.
Also, our knowledge of complexity theory is not sufficient to show that cryptography is even possible. I suspect that improvements in our knowledge of complexity theory will greatly improve our cryptographic primitives, in both security and efficiency.
> Keep in mind that theoretical constructions of symmetric ciphers are nowhere near as fast as practical constructions like AES.
What I hear you saying is "AES was designed with a practical implementation in mind, whereas asymmetric constructions were more 'discovered' from theoretical work that's often unwieldy when reduced to practice". Is that about right?
> The real issue is that no public-key algorithms are known other than those that are based on theoretical constructions.
While it is true that AES was designed with practicality in mind, that is not what the difference between a theoretical construction and a practical construction is about. The theory of cryptography is based on complexity-theoretic arguments (or in some cases, information-theoretic arguments) about cryptographic constructions, essentially showing that any algorithm that can be used to violate some security property of the system can be used to solve some hard problem. For example, in the case of the Goldwasser-Micali system, any chosen plaintext attack can be used to solve the quadratic residuosity problem efficiently. On the other hand, there is no such proof for AES; the evidence in favor of the security of AES is heuristic, based on a combination of statistical tests, resistance to known attacks on block ciphers, and other measures that have been developed over the past few decades.
This is not to say that AES should not be trusted. AES is a fine cipher, it is efficient, and unless someone can show us a practical attack the heuristic evidence is pretty strong.
Now, as for Merkle puzzles, that system is not considered secure by cryptographic standards. A cryptographic construction is not secure unless it requires the adversary to do work that is exponential in some parameter of the system (the security parameter), while parties that know some secret (such as a key) only do work that is polynomial in all parameters of the system. In the case of RSA, for example, parties that are aware of the secret key must do work that is cubic in the security parameter, while the adversary must do work that is exponential in the cube root of the security parameter. Whether such systems actually exist is still an open question, as it turns out; a positive answer to this question would imply that P does not equal NP. Cryptographers generally assume certain truths about complexity theory, beyond the P vs. NP problem, and cryptography research has actually opened new areas of complexity theory that are based on such assumptions (such as the notion of knowledge complexity, which emerged from the work on zero knowledge proof systems).
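The polynomial-vs-exponential gap is easiest to see numerically. Here's a rough sketch using the shape described above (constants are deliberately dropped, so the absolute values are meaningless; only the growth rates matter):

```python
# Illustrative only: k is the security parameter. Legitimate RSA
# operations cost roughly k^3 bit operations; the best known factoring
# attacks cost roughly exponential in the cube root of k (constants
# and log factors dropped).
import math

def legit_work(k):
    return k**3

def attack_work(k):
    # ~ exp(c * k^(1/3)) with c dropped
    return math.exp(k ** (1 / 3))

# Doubling k multiplies the legitimate parties' work by a constant
# factor of 8, while the attacker's work grows by an ever larger
# factor -- that asymmetry is what makes the scheme usable.
for k in (512, 1024, 2048):
    print(k, legit_work(2 * k) / legit_work(k), attack_work(2 * k) / attack_work(k))
```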
Comparative advantage is actually even more subtle than that. Even if we're both better at photos than widgets, it's still beneficial for me to produce the one I'm better at relative to you (say photos), and for you to produce widgets. Then we can trade with each other and both gain from trade.
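A quick worked example with invented numbers makes this concrete:

```python
# Toy numbers (invented for illustration): I'm better at both goods,
# yet trade still pays. Values are output per hour.
me  = {"photos": 10, "widgets": 4}   # absolute advantage in both
you = {"photos": 2,  "widgets": 3}

# Opportunity cost of one photo, measured in widgets forgone:
my_cost   = me["widgets"] / me["photos"]     # 0.4 widgets per photo
your_cost = you["widgets"] / you["photos"]   # 1.5 widgets per photo

# I have the lower opportunity cost in photos, so I specialize in
# photos and you in widgets. Any trade price between 0.4 and 1.5
# widgets per photo leaves both of us better off than self-sufficiency.
print(my_cost, your_cost)
```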