In fact I think this would be more informative as a description of rainbow tables, so here goes:
Okay, so Bob in principle can write down a lookup table, but there's a problem -- that lookup table is going to be as long as the dictionary! What if he has to take it somewhere, or if he wants to make a copy for someone? That's serious work each time. We'd like to take the 100,000 words in the dictionary and convert them to only, say, 1,000 words.
Fortunately for Bob, and unfortunately for Alice, one-way functions come with a really simple compression method, where you can shrink the lookup table as much as you want. This makes extracting the actual word slower, but you don't have to store a huge book anywhere.
Here is how it works. We work with 'chains' of one-way computations. See, our one-way function maps ORNATE → CEASE. Now we 'chain' the function by looking up CEASE and we find that it maps CEASE → SNIDE. We can "chain" it again to find SNIDE → JOYRIDE.
After a hundred of these chained together, we find that ORNATE ⇒ HOMOPHONE. In our lookup table we store the entry 'homophone: chain starts with ornate.' And I claim that now the lookup table can be about a hundred times smaller. Why? Because CEASE and SNIDE now do not need to start chains of their own! They are covered by the chain which starts with ORNATE. These chain tables are called "rainbow tables."
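The chain-building step can be sketched in a few lines. Everything here is an assumption for illustration: a tiny 12-word dictionary and a toy one-way function built from SHA-1, so it will not reproduce the exact ORNATE → CEASE → ... words from the text.

```python
import hashlib

# A toy 12-word "dictionary" standing in for the 100,000-word one.
WORDS = ["ornate", "cease", "snide", "joyride", "vegan", "oration",
         "persecutor", "pay", "creative", "homophone", "pickle", "quartz"]

def one_way(word: str) -> str:
    """Toy one-way function: SHA-1 the word and use the digest to pick
    another dictionary word. Fast forward, no obvious way back."""
    digest = hashlib.sha1(word.encode()).digest()
    return WORDS[int.from_bytes(digest, "big") % len(WORDS)]

def build_chain(start: str, length: int) -> str:
    """Apply the one-way function `length` times and return the end word."""
    word = start
    for _ in range(length):
        word = one_way(word)
    return word

# The rainbow table stores only (chain end -> chain start); all the
# intermediate words are implicit and never stored.
CHAIN_LENGTH = 100
table = {build_chain(w, CHAIN_LENGTH): w for w in ("ornate", "pickle")}
```

The point of the dict shape is exactly the compression claim: one stored entry per chain, not one per word.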
It might help to see how we use the rainbow table to look up some word near the end of this chain. Let's say that Alice has the word VEGAN and calculates that VEGAN → ORATION, so she sends us ORATION. Bob can't easily reverse this, but he can do the one-way function to ORATION and find some other word, ORATION → PERSECUTOR. But Bob can't find that in our rainbow table! So he does the one-way function again, PERSECUTOR → PAY. He can't find that either! So he does the one-way function again, PAY → CREATIVE. Nope, still no help! But then, CREATIVE → HOMOPHONE, and we find in our rainbow table that yes, we know how to make HOMOPHONE, because the rainbow table, remember, stored the fact that HOMOPHONE ⇐ ORNATE.
Now we use the one-way function again, starting with ORNATE → CEASE → SNIDE → ... → VEGAN → ORATION. Hey look, we found ourselves back at ORATION, but now we know a word (VEGAN) which maps to ORATION! It's a long process of about 100 queries of the one-way function, but it's not so long -- because remember, the one-way function was fast.
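The whole lookup procedure above can be sketched like this, using an assumed toy dictionary and SHA-1-based one-way function for illustration (a real implementation would also keep walking forward after a false alarm rather than giving up):

```python
import hashlib

WORDS = ["ornate", "cease", "snide", "joyride", "vegan", "oration",
         "persecutor", "pay", "creative", "homophone", "pickle", "quartz"]

def one_way(word: str) -> str:
    """Toy one-way function: SHA-1 picks the next dictionary word."""
    digest = hashlib.sha1(word.encode()).digest()
    return WORDS[int.from_bytes(digest, "big") % len(WORDS)]

def build_chain(start: str, length: int) -> str:
    word = start
    for _ in range(length):
        word = one_way(word)
    return word

CHAIN_LENGTH = 100
table = {build_chain("ornate", CHAIN_LENGTH): "ornate"}  # end -> start

def lookup(target: str):
    """Recover some word that the one-way function maps to `target`."""
    probe = target
    for _ in range(CHAIN_LENGTH):
        if probe in table:                 # reached a known chain end,
            word = table[probe]            # so replay that chain from its start
            for _ in range(CHAIN_LENGTH):
                if one_way(word) == target:
                    return word            # found a preimage of `target`
                word = one_way(word)
            return None                    # false alarm
        probe = one_way(probe)             # otherwise keep hashing forward
    return None

target = one_way(one_way("ornate"))        # plays the role of Alice's ORATION
preimage = lookup(target)
```

Note `lookup` returns a word whose hash is `target`, which (as discussed further down the thread) need not be the word Alice started from.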
There is a one-way function called SHA-1 that is sometimes used to protect passwords. Today you can download a big file -- 86 GB -- which is just a rainbow table for passwords hashed with it. This rainbow table covers all passwords from 1 to 7 characters -- any combination of upper- or lower-case letters, numbers, spaces, special symbols, anything. That's about 95^7 ≈ 70 trillion passwords! But it doesn't take ~70 trillion bytes, it only takes ~86 billion bytes, because it's been compressed by a factor of roughly 1,000 this way.
There's also a lookup table for 1-10 character passwords made of lowercase letters and numbers only, so about 36^10 ≈ 3.6 quadrillion passwords. The file size is 588 GB; it's been compressed by a factor of about 5,000 to make it possible to store on a modern hard drive. So if your password is all lowercase letters and numbers, it had better be a very long password, at least 12 characters long.
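A quick check of the arithmetic (the file sizes are from the text; the rest is just Python doing the multiplication):

```python
# Keyspace sizes for the two rainbow tables described above.
full_ascii_7 = 95 ** 7      # length exactly 7; lengths 1-6 add only ~1% more
lower_digit_10 = 36 ** 10   # lowercase letters + digits, length 10

print(f"{full_ascii_7:,}")      # 69,833,729,609,375  (~70 trillion)
print(f"{lower_digit_10:,}")    # ~3.66 quadrillion

# How dense the 86 GB table is: passwords covered per stored byte.
passwords_per_byte = full_ascii_7 / 86e9   # roughly 800 to 1
```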
drostie: this is the most accessible plain-English explanation of rainbow tables I've ever seen, and it's short to boot. From now on, whenever anyone asks me how stolen lists of hashed passwords get hacked, I will point them to your comment. Thank you.
Of course a lot of hashed passwords have been hacked not because of rainbow tables, but by brute force because the site used a single round of a fast hash function.
They think they are good with a 4-byte salt and one round of SHA-1, since that is effectively immune to rainbow-table attacks, but it's not immune to "I have a massively powerful processor in my computer called a 'video card'."
It's good. I wrote my own rainbow table code several years ago, and I had to read and re-read white papers and wikipedia articles for several days until it finally clicked. This would have helped a lot.
>Is the chain-length of 100 arbitrary, or is there an optimal length of these chains?
100 is the compression ratio. If you make chains of length 100 then you can compress the dictionary 100:1. If you make the chains of length 1000 then you can compress 1000:1 but you have to do 1000 operations for each lookup instead of 100. It's a trade off.
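That trade-off in numbers, using the 100,000-word dictionary from the text and idealized chains (real chains merge, so real tables do a bit worse than this):

```python
# Longer chains -> fewer stored entries, but more hashing per lookup.
DICTIONARY_SIZE = 100_000

for chain_length in (100, 1000):
    table_entries = DICTIONARY_SIZE // chain_length  # idealized: no merges
    lookup_ops = chain_length  # hashes to walk forward; a replay can add
                               # up to another chain_length on top of this
    print(chain_length, table_entries, lookup_ops)
```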
> How many times would you run ORATION through the one-way function and compare it to your rainbow table before you decide it's not in the table?
You presumably know the length of the chains your table used, so if you go through the entire length without finding it, you know it really isn't in the table (e.g. Alice provided a random word rather than an actual output of the one-way function, and it turns out no dictionary word leads to the one she provided).
These are good questions! There are a lot of subtleties here. One of them is that for real one-way functions you generally get a big number and you need some sort of reduction back to the password domain. Let's call it '¬' because most fonts have that character these days. Chaining actually looks like
password → number ¬ new_password → new_number ¬ ...
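A minimal sketch of one such chain link, with an assumed charset, password length, and reduction. The position argument fed into the reduction is the extra "rainbow" detail that real tables use to limit the chain merges discussed below:

```python
import hashlib

# Assumed password domain for illustration.
CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
LENGTH = 6

def hash_step(password: str) -> int:
    """The '→' step: one-way function, password -> big number."""
    return int.from_bytes(hashlib.sha1(password.encode()).digest(), "big")

def reduce_step(number: int, position: int) -> str:
    """The '¬' step: big number -> password. Mixing in the chain
    position makes each link use a slightly different reduction,
    which is what keeps merged chains from staying merged."""
    number += position
    chars = []
    for _ in range(LENGTH):
        number, i = divmod(number, len(CHARSET))
        chars.append(CHARSET[i])
    return "".join(chars)

# One chain link: password → number ¬ new_password
pw = "s3cret"
new_pw = reduce_step(hash_step(pw), 0)
```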
Second, both the one-way function '→' and the reduction '¬' are not one-to-one mappings. This process will give you a preimage—i.e. given H it can give some password P such that P → H and you are happy—but that password will not necessarily be the original password! Usually any preimage will do, thankfully, but it's still a problem because this means that two different chains can 'merge' into one. This ties very closely to the question you're asking.
As you have surmised, 100% coverage of the input domain is also really hard to achieve because you can't control which words the one-way function (and therefore the reduction) is going to choose! Many rainbow tables just settle for some inaccuracy -- so if you cover 99.9% of the input domain, you don't worry about trying to get all of the other 0.1%.
The reason these two completely different-sounding things are connected: if you find one of those 0.1% and feed it to your hash function and then the reduction, there's actually a 99.9% chance that you end up on a chain you already have! So you get these super-short chains which immediately merge into other chains!
If you were trying to compress by a factor of 1000, then you might choose a chain length of 2000, try for 99.95% coverage, and then store these missing 0.05% as a separate lookup table -- then between the two tables you have 100% coverage. Or, you might choose a chain length of 1000 with, say, a 99% coverage, and just accept a 1% failure rate on hashes. Either one is acceptable depending on how you store the chains and how much work you want to do to ensure coverage.
It may not always be possible to get perfect compression. Suppose for some reason you have the property that the hash for 10% of the words is "be" and the hash for a further 50% of the words is one of the words that hashes to "be." Using rainbow tables for this is not going to go well because all your chains are going to end up at "be" inside of a small number of iterations. But a hash with this property will also fail at its intended purpose because Alice could claim the hash for a word she doesn't know is 'be' and have a 10% chance of being right.
Good hash functions are specifically designed to make hash collisions like that as rare as possible, so in practice the compression ratio will be strongly correlated to the chain length, but you can still have instances where one hash value is slightly more common than another. The result is just that if you use a chain length of 100, the compression ratio in practice will be slightly worse than 100:1.
As to how to choose the heads of the chains, I suspect the answer for getting the absolute optimal choices will either be NP complete or will involve some application of linear algebra. If all else fails you could just keep adding random chains rooted in a value that isn't in any existing chain until you have full coverage. I don't know what people do in practice.
Using a salt would require the attacker to include an entry for each possible salt for each password candidate. If the salt is only one bit, this makes the attack take twice as long (or requires tables twice as large). If the salt is sufficiently long, it makes the attack infeasible.
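In code, the effect is just a multiplier on the precomputation. The 32-bit salt here is an assumption for illustration:

```python
import hashlib

# With a salt, the table must cover every (salt, password) pair,
# so a k-bit salt multiplies the table-building work by 2**k.
def hash_password(password: str, salt: bytes) -> str:
    return hashlib.sha1(salt + password.encode()).hexdigest()

SALT_BITS = 32                   # assumed salt size
tables_needed = 2 ** SALT_BITS   # one (logical) table per salt value
print(tables_needed)             # 4294967296
```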
Thanks for this question! The answer is complicated. :D
Let me tell you first why it does not help! In order to build a rainbow table, we must be able to build a lookup table (since they're just compressed lookup tables). But a lookup table is basically a brute force attack on a certain category of passwords! So, if your password would fall to a modern rainbow table, it can be brute-forced by modern computational clusters. Salting does not change that. Intrinsically the password is no more secure than it was before. (To make it more secure, use bcrypt, scrypt, or pbkdf2. Seriously, even PHP has pbkdf2 nowadays, you have no excuse.)
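For reference, a minimal PBKDF2 sketch using Python's standard library (the iteration count below is an assumption; tune it to your hardware):

```python
import hashlib
import os

# A salted, deliberately slow key-derivation function: the fix the
# paragraph above recommends over plain hashing.
password = b"correct horse battery staple"
salt = os.urandom(16)        # random per-user salt
iterations = 200_000         # assumed; raise until hashing takes ~0.1 s

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
# Store (salt, iterations, key); to verify a login, recompute and compare.
```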
Now let me mitigate that naysaying a bit. If you salt every password with a random 12-character base64 string, you do indeed prevent building a lookup table for the database. And that's actually not bad. Security is best measured not in bits or in operations, but in dollars. If the value of the information is much smaller than the cost to crack it, you'll defeat a rational attacker.
Rainbow tables can be built in a highly parallel way, with attackers pooling their resources and attacking multiple systems. So it will generally cost more to brute-force a password than it will to break it with a rainbow table: the rainbow table lookup can be done with one CPU; but brute forcing may require a botnet or lots of GPUs.
Okay, there is one excuse: you might want to be able to do password administration from phpMyAdmin using only built-in MySQL functions. If you are stuck in this particular case, the REPEAT() and SHA1() functions can have a similar effect:
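The original MySQL snippet isn't shown, so the expression in the comment below is a guess at what was meant; the Python models its effect so it can be checked:

```python
import hashlib

# Hypothetical MySQL: SELECT SHA1(REPEAT(CONCAT(salt, password), 1000));
# Repeating the salted password makes the hashed string long, which
# slows each guess a little. This is far weaker than a real KDF like
# PBKDF2 -- it is only the "stuck with MySQL built-ins" workaround.
def mysql_style_hash(password: str, salt: str, times: int = 1000) -> str:
    return hashlib.sha1(((salt + password) * times).encode()).hexdigest()

digest = mysql_style_hash("hunter2", "a1b2")
```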
Is cracking passwords even worth anyone's time anymore?
Most password salting differs from one website to the next. And as long as the salt pushes the length of the string being hashed past 16 characters, it will be uncrackable.
It would be much quicker to social-engineer your way into a system and just plant a script that harvests the unencrypted passwords.
»[...] that lookup table is going to be as long as the dictionary! What if he has to take it somewhere [..]« ... now he has to carry around the rainbow table and the dictionary. This didn't work out as expected. ^^