
I don't know how to answer the other parts of your comment, but your "double or even triple length" estimate might be off:

From https://en.wikipedia.org/wiki/Entropy_(information_theory)

> English text has between 0.6 and 1.3 bits of entropy for each character of message.

For comparison, if you used a random string of alphanumeric characters, it would have lg(26 + 26 + 10) ≈ 5.95 bits per character.

So if your password is drawn from an English corpus and the low end of the estimate is correct, it's only about as strong as a random alphanumeric password roughly 10 times shorter (or 4-5 times shorter at the high end).
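
For the curious, a quick sanity check of those ratios (a minimal sketch in Python, using the Wikipedia figures quoted above):

  import math

  # Bits per character of a uniformly random alphanumeric string
  # (26 lowercase + 26 uppercase + 10 digits = 62 symbols).
  random_bits_per_char = math.log2(26 + 26 + 10)  # ~5.95

  # Shannon-style estimates for English text, per the Wikipedia quote above.
  english_low, english_high = 0.6, 1.3

  print(f"random alphanumeric: {random_bits_per_char:.2f} bits/char")
  print(f"length ratio at the low estimate:  {random_bits_per_char / english_low:.1f}x")   # ~9.9x
  print(f"length ratio at the high estimate: {random_bits_per_char / english_high:.1f}x")  # ~4.6x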

But of course we don't want a grammatical English password. The question is how much entropy our meat-based random generator actually loses to language bias, compared to random word selection from an English dictionary (whose analysis I don't disagree with, as long as it's machine generated).
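
The machine-generated case is easy to pin down, since each word contributes log2(dictionary size) bits. A rough sketch (the wordlist here is a placeholder, and the 7776-entry size is just chosen to match a diceware-style list):

  import math
  import secrets

  # Stand-in for a real dictionary; a diceware-style list of 7776 words gives
  # log2(7776) ~ 12.9 bits per word when each word is drawn uniformly.
  words = [f"word{i}" for i in range(7776)]

  bits_per_word = math.log2(len(words))
  passphrase = "-".join(secrets.choice(words) for _ in range(4))

  print(f"{bits_per_word:.1f} bits/word, {4 * bits_per_word:.1f} bits for 4 words")
  print(passphrase)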




I wonder how much extra entropy you can add by introducing an extra language or two?

For instance, having a password like 'unterwasserboot-sparkle-mocidade-yogurt'.

It seems like multilingual folks would be at a distinct advantage here ... at least until you forget which of the words in your password was in which language, and you end up with 'submarine-faisca-jugend-yogurt' instead :)
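
If you assume each word is drawn uniformly from the combined pool of several wordlists, the gain is easy to quantify, and it turns out to be modest (a sketch assuming diceware-sized lists of 7776 words per language):

  import math

  # Rough estimate: each language contributes ~7776 words and every word is
  # drawn uniformly from the combined pool.
  per_language = 7776
  for n_languages in (1, 2, 3, 4):
      bits = math.log2(n_languages * per_language)
      print(f"{n_languages} language(s): {bits:.2f} bits per word")
  # Doubling the pool adds exactly 1 bit per word, so each extra language adds
  # only a couple of bits -- the bigger win is if it helps you remember a
  # longer passphrase.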


Beyond grammar, adding intentional misspellings, symbols, or a bit of Finnegans Wake can increase the entropy drastically too.
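
As a rough idea of what a single random tweak is worth (a sketch, assuming the symbol and its position are genuinely random rather than habitual; the 32-symbol set and 30-character passphrase length are arbitrary):

  import math

  # Bits added by inserting one symbol, chosen uniformly from 32 printable
  # symbols, at a uniformly chosen position in a 30-character passphrase
  # (31 possible positions).
  symbol_choices, positions = 32, 31
  print(f"~{math.log2(symbol_choices * positions):.1f} extra bits")  # ~9.9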



