> "Look at this computer," Vidyasagar says, gesturing at the mainframe. "Computers are getting more powerful, yes?"
> "Sure."
> "What is the most powerful computer that will be built? Ever. Not this year. Not this decade. What computer will be the most powerful? And how powerful will it be? And how big?"
> Hatt thinks on this for ten long seconds. He opens his mouth, but never actually forms a word. The scale of the question is beyond him.
> Vidyasagar says to him, "No matter what you say, you will look like a fool. Every statement about the future turns out to be foolish."
That's... a much less clever sentiment than it thinks it is.
> No matter what you say, you will look like a fool
That's because the question is nonsense, not because it's about the future. If the question is "What is the most powerful computer that will be built this decade?" it's answerable. No answer will be correct but there are certainly ways to answer that aren't foolish at all.
I could be wrong (my physics background isn't that strong either) but I think Dyson's "time without end" paper http://www.aleph.se/Trans/Global/Omega/dyson.txt shows that eventually the universe will cool down enough to make all keys brute-force-attackable despite Bremermann's limit.
Given a finite amount of attacker computation you're willing to defend against, you can get a real advantage from key stretching (though not from mere salting). If you want a password strength of 256 bits, you can memorize a password of 226 bits and require work equivalent to 2³⁰ key-hashing operations to derive the actual encryption key or crypted password. This is normally called a KDF; reasonable ones are bcrypt, scrypt, and Argon2, in ascending order of goodness.
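Here's roughly what that looks like in code, as a minimal sketch using Python's standard-library scrypt (the passphrase, salt handling, and cost parameters are all placeholders, and the work factor here is interactive-login sized, nowhere near the 2³⁰ figure above):

```python
import hashlib, os

# Hypothetical sketch: stretch a memorized passphrase into a 256-bit key.
# n=2**14, r=8, p=1 are modest interactive parameters (about 16 MiB of memory);
# raising n is how you buy the extra work discussed above.
passphrase = b"correct horse battery staple times seven or so"  # stand-in secret
salt = os.urandom(16)   # the salt prevents precomputation but adds no stretching by itself

key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())        # 256-bit derived key
```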
If you make the work factor unreasonably large, you won't be able to use the password in practice, because you have to do that work every time you use it. For example, if you try to memorize 170 bits of password and use a 2¹⁷⁰ work factor in your KDF to reach the 340-bit security level recommended here, you have to do 2¹⁷⁰ work on your laptop every time you log in. Assuming a trillion operations per second (a safe upper bound for current laptops), each login will take about 47 nonillion years, about a sextillion times longer than the history of the universe so far and about 50 times longer than the expected lifetime of the last galaxies (see https://en.m.wikipedia.org/wiki/Timeline_of_the_far_future). It may be inconvenient to wait that long.
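If you want to check that arithmetic yourself (same trillion-ops-per-second assumption as above):

```python
# Back-of-the-envelope check of the 2^170 figure above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600                  # ~3.16e7
ops_per_second = 1e12                                  # generous for a current laptop
years = 2**170 / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.2e} years per login")                  # ~4.7e31, i.e. about 47 nonillion
print(f"{years / 1.38e10:.1e} x age of the universe")  # ~3.4e21, about a sextillion
```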
For the same reason that a KDF is a safe way to derive keys for decrypting data at rest, in a client-server system, you can generally do this work on the client safely, so it doesn't pose a denial-of-service risk.
You can't, that's true, and in particular you can't get over 184 bits out of it. But that's also true of, for example, PBKDF1, which I think is the algorithm the term "KDF" was invented to describe. And bcrypt is often included in lists of KDFs, for example by OWASP and Wikipedia. Arbitrary-length digests are certainly highly desirable for a KDF, but I don't think they are uncontroversially part of the definition of the term.
A key (hah!) property of key derivation functions is that they allow you to customize the key that you get out, mainly because you may need a specific length (e.g. 512 bits) for whatever encryption algorithm you're using. bcrypt lacks this functionality: you only ever get 192 bits of hash.
There's an older version of this argument in Schneier's Applied Cryptography (1996). He also concludes that a 256-bit key is secure "until computers are made from something other than matter and consume something other than energy", IIRC.
However, despite things like Ed25519 using 256-bit curve points for 128-bit security (you lose a factor of 2 off your exponent to generic discrete-log attacks), this particular instantiation fails much harder if a quantum computer running Shor's algorithm ever becomes reality.
Meanwhile, 123456 still tops the password charts wherever it is allowed.
We can argue and argue about theoretical computers, and none of it matters when clipboard espionage vs. laziness is so effective. Also, on a similar subject, this is how automakers think when they're trying to "hide" their automobiles.
This is probably a decent estimate, but there are a couple of attack routes it fails to account for.
First, it uses the current average temperature of the universe. Lowering the temperature can be done by just waiting a while before turning the machine on. I assume that powering a sufficiently powerful fridge is not an option, given the origin of the theoretical limit, but I can't quite articulate why it wouldn't work.
Second, it assumes that an unsuccessful attempt must flip at least some bits in a semi-permanent manner. This is obviously true of all current computers, but doesn't have to be true for all possible apparatuses. A specialized, hyper-efficient password-cracking system should be expected to get below this limit. Will we ever build one? Who knows.
Arguably this latter 'loophole' is just pointing out that quantum computers or more efficient algorithms could do better, so maybe we should absorb it into the definition of 'brute force'.
If you wait for the temperature to drop while the universe keeps expanding, distant galaxies will recede past the point where they are moving away faster than the speed of light, and their matter will no longer be available to contribute.
Does the fact that it is expanding now really mean that it will expand forever? Are there not other physics at play that could stop that at some point? Yeah, I don't know much about any of this, just wondering out loud.
This is generally unknown, of course. However it currently appears that the expansion of the universe is accelerating, i.e. it is expanding faster and faster.
Obviously it can't be ruled out that at some point this would stop and/or reverse. But there's no reason to think so, and if we're considering arbitrary future changes then we may as well consider that the universe might suddenly start heating up again in the future, or that more mass will start appearing out of nowhere. Or god appears and hands out free decryption keys to everyone.
Gathering matter into one place heats it up and prevents it from cooling (more or less: the outside can cool, but thanks to the square-cube law that's basically negligible for universe-sized masses).
Is this proof that the universe cannot understand itself? It seems weird that there can be a set of information in the universe that can be hidden from the rest of the universe.
Not entirely. Cryptographic primitives are based on hardness assumptions. For example, we assume the discrete log cannot be computed in probabilistic polynomial time when we leverage things like Diffie-Hellman. No one has yet proven whether this assumption (and many others like it) is true or false, but so many people far smarter than you or I have tried without making much headway that it's safe enough to rely on.
If someone were to prove one of these assumptions is true, then I suppose the answer to your question is yes, but I wouldn't hold my breath waiting for such a proof :)
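To make the hardness-assumption point concrete, here's a toy Diffie-Hellman exchange; the prime is deliberately tiny and utterly insecure, purely to show where the discrete-log assumption does the work (recovering a or b from the values that go over the wire):

```python
import secrets

# Toy Diffie-Hellman over a tiny prime field, for illustration only.
# Real deployments use 2048-bit+ primes or elliptic curves.
p = 4294967291          # largest prime below 2**32
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)        # Alice publishes A
B = pow(g, b, p)        # Bob publishes B

# Both sides compute the same shared secret; an eavesdropper who sees only
# (p, g, A, B) would have to solve a discrete log to recover it.
assert pow(B, a, p) == pow(A, b, p)
```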
> Is this proof that the universe cannot understand itself?
What does “understanding” mean when talking about an inanimate thing?
> It seems weird that there can be a set of information in the universe that can be hidden from the rest of the universe.
Why should the rest of the universe “know” anything about other parts of the universe?
Ascribing “understanding” and “knowledge” to the universe sounds questionable from the start, it doesn't seem weird to me that the universe doesn't have these properties.
I know you're making a pun, but I will say one big benefit of deadlifting is that a lot of things that would cause back pain no longer do. You don't have to lift a whole lot either, just enough to start building back muscle, and all of a sudden I can do things that would normally produce endless lower back pain.
That's the closest link I could think of, but "deadlifting" is at best a very specific and small step toward the implicit goal of "become immune to all physical harm". It doesn't make sense, so I figure there has to be something I'm missing.
I can't say I understood and evaluated all the physics here (I skimmed parts) but I was pretty surprised by how small the estimate was. I would've assumed that, were we to have one or two thousand years more cryptographic history, we'd end up using ginormous keys (maybe on the order of 1 MiB?). But this suggests that 512 or 1024 bits might be all we need.
This is because exponential growth is counter-intuitive. A 256-bit key is not 2x more secure than a 128-bit key; it is 2¹²⁸ times (340282366920938463463374607431768211456x) more secure.
Assuming fixed computing power and a perfect algorithm. Historically, waiting for more computing power meant exponential increases in compute per dollar, and often better attacks.
That assumes that the best you can do is brute force, but real encryption algorithms (even AES) are weaker than that, so the attacker can infer some bits of the key. More so for asymmetric encryption.
Correct me if I'm wrong, but if P is not equal to NP, then we ought to be able to derive as much key material as we need from a key which is non-brute-forceable (at least for symmetric encryption). E.g., if we believe 1024 bits of entropy cannot be brute forced, but our algorithm requires a 4096-bit key to provide at least 512 bits of security against cryptanalysis (plus a margin of safety), then we can derive our larger key from our smaller key without sacrificing any security.
But there's an implicit assumption here that all keys are equally strong, so this doesn't apply to asymmetric encryption. At least not as straightforwardly. And it's possible that P is in fact equal to NP. And there's a bunch of other assumptions here too, like that we really do have a secure source of entropy and really can share keys securely.
Anyway, if we take all these assumptions as read, this suggests that symmetric key lengths will saturate at a certain point (and not much wider than they are today). Big if true.
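A minimal sketch of that derivation idea, assuming the smaller key already has enough entropy (SHAKE256 is just a convenient extendable-output function here; the lengths are the ones from the example above):

```python
import hashlib, secrets

# Expand a 1024-bit key (assumed un-brute-forceable) into the 4096 bits the
# hypothetical cipher wants. The extra bits add no entropy; security rests on
# the original 1024 bits plus the XOF behaving like a random oracle.
small_key = secrets.token_bytes(128)                 # 1024 bits of real entropy
big_key = hashlib.shake_256(small_key).digest(512)   # 4096-bit derived key
assert len(big_key) * 8 == 4096
```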
Speaking of physically immune schemes, I remember some protocol which relied on a gigantic amount of data sitting behind a link that was, on purpose, physically very low bandwidth (not rate-limited in software: the whole point was that the link itself was physically low bandwidth).
So the data was impossible to exfiltrate remotely: it simply wasn't physically feasible to do so (it would be way too slow).
I forgot the name and what the data was used to protect/derive: maybe some authentication scheme?
Does anyone know what I could be talking about? I'm pretty sure I saw it posted here on HN in the past.
>I forgot the name and what the data was used to protect/derive: maybe some authentication scheme?
It's a variant of salted passwords, where the "salt" is a huge file that sits on a server somewhere. Validating any given password only requires reading a few bytes at positions that are deterministically chosen but effectively random. That isn't a problem when checking passwords, but if you're an attacker trying to exfiltrate the file (so you can run offline brute-force attacks against it), the huge file becomes prohibitively expensive to move.
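I don't know the actual protocol either, but a rough sketch of the idea as you describe it might look like this (file name, probe count, and record layout are all made up):

```python
import hashlib, hmac, os

# The verifier keeps a huge random "pepper pool"; checking a password touches only
# a few bytes of it, but an offline attacker would have to exfiltrate the whole file.
POOL_PATH = "pepper_pool.bin"
PROBES = 16                                  # positions sampled per verification

if not os.path.exists(POOL_PATH):            # tiny stand-in pool so the demo runs;
    with open(POOL_PATH, "wb") as f:         # the real thing would be terabytes
        f.write(os.urandom(1 << 20))
POOL_SIZE = os.path.getsize(POOL_PATH)

def derive(password: bytes, salt: bytes) -> bytes:
    samples = b""
    with open(POOL_PATH, "rb") as pool:
        for i in range(PROBES):
            # deterministic but unpredictable offset for probe i
            h = hmac.new(salt, password + i.to_bytes(4, "big"), hashlib.sha256).digest()
            pool.seek(int.from_bytes(h, "big") % (POOL_SIZE - 32))
            samples += pool.read(32)
    return hashlib.sha256(salt + password + samples).digest()

print(derive(b"hunter2", b"per-user salt").hex())
```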
The closest I can think of is ULF radio, which offers extreme range and penetration at the expense of extremely low bandwidth. My understanding is that it is used to communicate with underwater subs, but likely in the form of simple ASCII or Morse-code keywords.
It doesn't account for quantum computing? Cracking passwords seems like one of those things that should get an exponential speedup with quantum computing.
I suspect this can be bypassed with knowledge about the size of the target system.
Intuitively, there are a finite number of passwords that can be stored on Earth, so a large enough system should be able to enumerate them, whilst still fitting inside the observable universe?
By "target system" you mean "system from which the password originated", right? But unless you think true randomness is impossible, and also that all possible sources of pseudo-random input on Earth come from terrestrial sources and not, say, incoming cosmic radiation, then knowing the size of the "Earth" system is no constraint at all. A heuristic for focusing your search, maybe, if you think the password is likely to be something easily memorable for a human, but that's nothing to do with the size of the system, just commonly-transmitted information there.
The phrase "stored on Earth" is a red herring. You don't need to store all possible passwords for those passwords to be possible to generate here. And really, a consequence of the article is that if enumerating even a couple hundred bits is prohibitive, then enumerating all possible information that could be generated by and stored in an earth-size system, as you seem to be suggesting, is no better.
> By "target system" you mean "system from which the password originated", right?
Yeah that’s correct.
> The phrase "stored on Earth" is a red herring. You don't need to store all possible passwords for those passwords to be possible to generate here.
A password used to protect a system must be persistently stored inside that system.
The number of passwords that can be generated on earth is greater than the number that can be persistently stored on earth.
For example, an iPhone must locally store a user’s unlock PIN code. However, it could theoretically generate a 20TB password for an external site in chunks without ever storing the full password locally.
Energy is a binding constraint for password generation, but size is the binding constraint for password storage, and the latter likely kicks in a lot earlier.
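For what it's worth, the "generate in chunks" part is easy to sketch: derive an arbitrarily long password stream from a small seed and hash it incrementally, so nothing close to the full thing is ever held locally (chunk size and count below are arbitrary):

```python
import hashlib

seed = b"small locally-stored secret"   # the only thing that persists on the device
CHUNK = 1 << 20                         # 1 MiB per chunk
TOTAL_CHUNKS = 64                       # scale toward 20 TB in principle

h = hashlib.sha256()
for i in range(TOTAL_CHUNKS):
    # regenerate each chunk of the huge "password" on the fly from the seed
    block = hashlib.shake_256(seed + i.to_bytes(8, "big")).digest(CHUNK)
    h.update(block)
print(h.hexdigest())                    # only this digest ever needs to leave the device
```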
The password being persistently stored is not really a requirement here. That depends on the cryptosystem involved, among other things. What if I encrypt something and throw away/forget the key?
Anyway, the constraint you're proposing here is, only passwords that can be encoded in all possible configurations of matter making up the earth? And you do have to contend with all possible configurations, if all you know is the size of the system, or even the mass and composition. As tedunangst put it, that's a lot of bits. I think we'll hit the 300-400 bit computational limit first.
You're still essentially bound by having to consider all the passwords that could be generated. Let's say that passwords are limited to 2048 bits, but you can only store 2^128 passwords. The problem is that you don't know which of the 2^2048 possible passwords have been stored, so you have to go through them all anyway.
You only have to store a single 340-bit password (or something equivalent) in order to secure something with a 340-bit password. You can do this by, for example, writing down a 103-digit number on paper, which you can do on a business card with a pencil. Your argument seems to depend on the defender needing to store all possible 340-bit passwords, which they don't.
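Quick sanity check on that figure, assuming a plain decimal encoding on the business card:

```python
import math, secrets

print(math.ceil(340 * math.log10(2)))    # 103 decimal digits cover 340 bits
print(len(str(secrets.randbits(340))))   # typically 102 or 103 digits
```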
>An excerpt from a religious text with a trailing space:
>"I'd just like to interject for a moment. What you’re referring to as Linux, is in fact, GNU/Linux, "
Followed by the deadpan sage advice:
>Don’t use actual excerpts from pre-existing works as your password.
IOW, don't try this yourself unless you make up your own religion. Established scriptures of every other kind were completely compromised long ago ;)
Bcrypt does not add entropy; it only adds "difficulty", and the problem with "difficulty" is that it breaks down over time. Improvements in technology and processes routinely undermine the difficulty estimates and corresponding work factors used to tune KDFs like bcrypt.
KDFs are good at protecting "okay but not great" passwords used to gain online access, but they add no protection to extremely secure, unique passwords, and they don't add enough protection to extremely weak, common, or reused passwords. They are there to frustrate attacks, not make them physically impossible. Many credentials are time-sensitive, and many attacks are not targeted; KDFs are good in these common situations. You still need to pick a password that will take long enough to crack that an attacker moves on instead.
However, some data needs to be protected practically forever, and some attacks are definitely targeted at specific people or systems. In these cases, KDFs don't do very much. Taking a 256-bit key just from the raw bits of 32 random ASCII letters and numbers will already get you 190 bits of entropy and frustrate all practical attacks for the next several decades at the very least. Feeding that through a KDF first won't add any practical security. Even so, KDFs can be used for a different reason, enabling passphrases, which are long strings with low per-character entropy but high overall entropy. At least, assuming that the KDF preserves that entropy well.
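Checking those numbers, plus the passphrase comparison (the 7776-word list size is the usual Diceware figure, not something from this thread):

```python
import math

per_char = math.log2(62)      # A-Z, a-z, 0-9: ~5.95 bits per character
print(32 * per_char)          # ~190.5 bits, matching the "190 bits" above

per_word = math.log2(7776)    # ~12.9 bits per word from a Diceware-sized list
print(7 * per_word)           # ~90.5 bits for a 7-word passphrase
```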
Regardless of bcrypt, you should always pick security keys with sufficient entropy, where "sufficient" is measured relative to the importance of the thing being protected, how long it remains important and accessible by that key, and what attacks are viable now and foreseeable in that time.
I thought we had already established that the best way to beat that one is to have two passwords. The first is Hunter2 and unlocks the keyboard for the second one: a variation of the Moonlight Sonata played off-key.