Critical PRNG Bug in NetBSD Kernel (netbsd.org)
122 points by tshtf on Mar 22, 2013 | 37 comments

    Due to a misplaced parenthesis, if insufficient GOOD 
    bits were available to satisfy a request, the 
    keying/rekeying code requested either 32 or 64 ANY bits, 
    rather than the balance of bits required to key the 
    stream generator.
I think this paragraph is a nice reminder of how hard crypto can be.

A misplaced parenthesis can corrupt output data from an ordinary program too. But with crypto, severe problems have a much easier time staying silent through QA, interop testing, and even widespread usage.

I think it's because the concept of "cryptographically secure" is essentially trying to prove a negative. That's hard enough in general, but especially hard against an intelligent adversary about whom you may know nothing. You're trying to prove that no present or future attacker will be able to obtain any information that allows him to unravel your secrets.

Crypto is about building sky castles full of really really long secrets floating on foundations of really small ones, and then tossing them all up in the air to yourself as you run down the street backwards with rabid weasels chasing you.

CSPRNGs in particular, because the output of a fatally flawed CSPRNG and the output of a secure one can look very similar.

This is also why CSPRNGs are a great place to hide backdoors.

Backdoor in the sense of leaking state?

And is there a case in the wild (except the PS3 hack)?

Best Paper, USENIX Security '12: "Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices" by Nadia Heninger, UC San Diego; Zakir Durumeric, University of Michigan; Eric Wustrow and J. Alex Halderman, University of Michigan. https://www.usenix.org/conference/usenixsecurity12/mining-yo... "We find that 0.75% of TLS certificates share keys due to insufficient entropy during key generation, and we suspect that another 1.70% come from the same faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness."

"Ron was wrong, Whit is right" by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. http://eprint.iacr.org/2012/064

> except the PS3 hack?

To be clear, the PS3 problem was not a problem of randomness quality or a PRNG backdoor. It was illegal nonce reuse in the zero-knowledge proof-of-key-possession protocol embedded in DSA signatures.

Did you mean to use the word "illegal"?

Sorry, 'illegal' in the sense of being contrary to the requirements of the crypto protocol, in the same way that you can have the concept of an 'illegal operation' in a CPU ISA.

Illegal as in "violating their own crypto policy".

The debian certificate instance is a good example too.


I enjoyed the Thanks To section:

    Thor Lancelot Simon for causing, finding and fixing the bug

The advisory says that reading from /dev/random is fine, but reading from /dev/urandom is affected. Shouldn't cryptographic applications be using /dev/random to begin with? I was under the impression that /dev/urandom is only for when low-quality randomness is acceptable.

/dev/urandom is supposed to provide cryptographic-quality randomness. /dev/random does provide "better" randomness, in a sense, but may block; it is typically used for generating long-term cryptographic keys (like GPG keys).

The only time that ought to make a difference in a modern properly-implemented CSPRNG is right after system startup when very little unpredictability has made it into the pool.

Consequently, system boot scripts are just about the worst possible place to generate new keys if they didn't already exist.

From the manpage of /dev/random "If a seed file is saved across reboots as recommended below (all major Linux distributions have done this since 2000 at least), the output is cryptographically secure against attackers without local root access as soon as it is reloaded in the boot sequence, and perfectly adequate for network encryption session keys. Since reads from /dev/random may block, users will usually want to open it in nonblocking mode (or perform a read with timeout), and provide some sort of user notification if the desired entropy is not immediately available."

Only if the seed file was generated by a secure kernel during a clean shutdown and kept secret the whole time. There's a lot to go wrong there, especially if you're an embedded system running on flash.

I also seem to recall host keys being generated on first boot. Perhaps some installers are smart enough to prime that seed file.

Then what are the requirements that make this necessary? Given that security in layers tends to be the ideal, surely the dependence on generating keys at boot time could be considered part of the bug?

The basic requirement for a CSPRNG is that no one will ever be able to guess the values of a prior or future set of 200 bits with greater than a 1 in 2^200 chance of success (short of pwning your kernel). And it's not just a super-smart attacker you worry about: sometimes all the systems on the internet accidentally collaborate to unmask each other's weak keys (see Heninger et al., linked elsewhere in the thread).

Boot up naturally involves starting network services and daemons which need their keys, right? It's a reasonable thing to want to do. The hard thing here is if we say "you can't have any output from the system CSPRNG until we're totally sure it's fully preheated", we end up with a few inevitable situations.

Blocking read: "BUG: System hangs noticeably on boot" "If I change /dev/random to /dev/urandom my system boots 5 seconds faster! Woot!" and "So how long do I have to wait then? Why does boot take longer on some networks than others?"

Nonblocking: If the kernel returns 0 bytes of data from read(), some apps will just continue on processing with their uninitialized or zeroed buffer.

It's not so much that gathering entropy is hard, it's that making an accurate estimate of the entropy you gathered is hard (it's basically writing embedded code which attempts to estimate the capabilities of some unknown future black swan). Getting a platform full of developers to write code that correctly handles an error condition that never appears on their own developer workstation seems basically impossible.

> Due to a misplaced parenthesis...

I know this isn't the (main) take away message, but it does make me feel a little better about some errors I've made in the past to know that even heavily vetted code can have these types of errors.

I don't know if this code would qualify as "heavily vetted." Thor Lancelot Simon wrote it himself and imported it himself into the tree for NetBSD v6 and onwards. Probably hasn't seen that many eyes for review.

I just assumed kernel code would have at least someone else looking over it before it's merged in.

Maybe it does, but they won't be looking for misplaced parens.

Because in kernel mode it's impossible to make (and subsequently gloss over) simple mistakes? This hasn't been my experience.

Anyone have a link to the actual patch?


    Fix a security issue: when we are reseeding a PRNG seeded early in boot
    before we had ever had any entropy, if something else has consumed the
    entropy that triggered the immediate reseed, we can reseed with as little
    as sizeof(int) bytes of entropy.

The misplaced right-paren is in this line:

    rnd_extract_data(key + r, sizeof(key - r), RND_EXTRACT_ANY);

Does vim have a plugin to detect unbalanced parens/brackets/braces etc?

The parens were not unbalanced, that would very likely not even compile. They were misplaced, a much more subtle mistake.

Doesn't need a plugin: http://stackoverflow.com/questions/232274/how-do-i-get-vim-t...

There's also some info and some plugins listed here: http://vim.wikia.com/wiki/VimTip630

Although a misplaced parenthesis could still be balanced. For example,

    if ((!x < y))   /* compares (!x), i.e. 0 or 1, against y */

    if (!(x < y))   /* negates the whole comparison, i.e. x >= y */

Just in case anyone is curious, the error was having `sizeof(key - r)` instead of `sizeof(key) - r`

Rookie mistake!

I wonder, what sort of technology would have avoided this? The `key - r` bit seems a bit suspicious, type-wise.

-Wsizeof-pointer-memaccess (Clang and GCC 4.8) is the only sizeof-related warning I know of, which warns about some cases where both pointer and size are immediately passed to a handful of builtin functions (in GCC, some among mem…, str…, s…printf). C's typing is much too weak for -Wsizeof-pointer-memaccess to be generally useful.

Without getting into typing, a simple

  warning: arithmetic inside sizeof
would catch this. I don't think it would generate much noise; I can't see any useful uses of pointer arithmetic immediately inside a sizeof. It's a bit specific, though; are there more cases that would benefit as well?

Yeah, why would you care about the size of a temporary arithmetic expression?

Well... sizeof of an expression is mostly useful for not repeating yourself in cases when you need the size of a type you already have some way of referencing, as in

  a = malloc(sizeof(*a));
It's also useful for various hacks (stuff like compile-time assert macros), since it doesn't evaluate the expression, only its type.

Sure, I can think of plenty of ways it could arise via metaprogramming. I don't know if I would want such a warning enabled in my own template-heavy C++ code.

But perhaps it could be useful for those following the NetBSD style. :-)

"Due to a misplaced parenthesis..."

Is this RNG written in Lisp!?

Sorry, couldn't resist ; ) (big fan of Clojure and elisp here btw)

              1         2         3         4         5         6         7
     Sorry, couldn't resist ; ) (big fan of Clojure and elisp here btw)
