Hacker News

Using /dev/urandom as a password source is a very bad idea.



Using /dev/urandom as a password source is fine. It's a CSPRNG. It theoretically degrades if you exhaust entropy, but there's no current attack I know of based on that property. Also, RNG attacks are usually "online", meaning an attacker gets to continually interact with the RNG. This is a one-off offline use. In this scenario, you could probably survive with rand().
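For the one-off case described above, a password can be drawn straight from /dev/urandom. A minimal shell sketch (the 16-character length and alphanumeric filter are arbitrary illustrative choices, not a recommendation):

```shell
# Read bytes from the kernel CSPRNG, keep only alphanumerics,
# and take the first 16 characters as the password.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```

Filtering to alphanumerics throws away some entropy per byte, so read as many bytes as needed rather than a fixed count; the pipeline above does that implicitly by streaming until 16 characters survive.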


Agreed. /dev/urandom should be mixed with other sources of entropy (system statistics, epoch, low-level counters, cryptographic PRNGs like Yarrow) and then "combined" using a cryptographic hash. Such a principle is used in e.g. Fortuna.

See:

https://secure.wikimedia.org/wikipedia/en/wiki/Fortuna_%28PR...
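A rough illustration of that combining step, assuming a Linux system with coreutils (sha256sum stands in for Fortuna's internal hash here, and the chosen sources are examples only):

```shell
# Concatenate several entropy sources and "combine" them with a
# cryptographic hash, loosely in the spirit of Fortuna's pool mixing.
{
  cat /proc/stat 2>/dev/null   # system statistics
  date +%s%N                   # epoch plus a nanosecond counter
  head -c 32 /dev/urandom      # kernel CSPRNG output
} | sha256sum | awk '{print $1}'
```

The hash acts as an extractor: even if some inputs are predictable, the output is no worse than the strongest source fed in.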


Or just use /dev/random :)


Reads from /dev/random will block when the entropy pool is empty. You can see the number of bits of entropy available on a Linux system via:

$ cat /proc/sys/kernel/random/entropy_avail

If you need more or better randomness, check out the Entropy Key:

http://www.entropykey.co.uk/


Blocking reads from /dev/random when entropy is low is the correct behaviour, but it is system-dependent: Darwin (Mac OS X) has the two sources behave identically.

The Darwin man page justifies this behaviour saying:

     /dev/urandom is a compatibility nod to Linux. On Linux, /dev/urandom will produce lower quality output if the
     entropy pool drains, while /dev/random will prefer to block and wait for additional entropy to be collected.  With
     Yarrow, this choice and distinction is not necessary, and the two devices behave identically. You may use either.

and then contradicts itself later by saying:

    Yarrow is a fairly resilient algorithm, and is believed to be resistant to non-root.  The quality of its output is
    however dependent on regular addition of appropriate entropy.


Care to explain why?


A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys.

http://en.wikipedia.org/wiki//dev/random
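The non-blocking behaviour is easy to check on Linux (a small sketch; the 64 KiB read size is arbitrary):

```shell
# /dev/urandom never blocks, however much you ask for.
head -c 65536 /dev/urandom | wc -c

# The same read from /dev/random can block until the kernel has
# collected enough entropy, so guard it with a timeout:
timeout 2 head -c 65536 /dev/random | wc -c
```

The first read completes immediately with the full count; the second may be cut short by the timeout on an entropy-starved machine.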


On virtual machines, /dev/urandom contains very little, if any, entropy.

Basically /dev/random takes entropy from the system and feeds it to you.

/dev/urandom is a pseudorandom number generator that reseeds from entropy as it gets it. I.e., if it has no entropy, your random numbers are anything but random.


This is a drastic oversimplification. Both urandom and random (on Linux; there's no difference between the two on BSD) are seeded from hard entropy sources. Both urandom and random extract entropy by updating pools with SHA1. The difference is that random has an estimator and will demand more hard entropy when it has serviced too many requests. But it's not as if urandom goes from producing "101010100101000101010100111001" to "111011011110111101111111110111" when entropy is depleted.

In any case, this is entirely irrelevant to the discussion at hand. You can absolutely use /dev/urandom to make a one-shot crypto key. You shouldn't wire /dev/urandom up into an online cryptosystem (don't use it to produce DH parameters, for instance), but even then, urandom isn't going to be how your system really gets broken.

In your case, experimenting with encrypting whole files with RSA instead of using RSA to exchange keys is what's really going to break your system. This is almost a decent example of how people obsess over the wrong things in cryptosystem design, and why perhaps generalist programmers should stay far, far away from this stuff.
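For contrast, the usual hybrid construction is straightforward to sketch with the openssl CLI (file names and the 2048-bit/AES-256 parameters are illustrative; assumes OpenSSL 1.1.1+ for the -pbkdf2 option):

```shell
# Encrypt the bulk data with a random AES key, and wrap only that
# small key with RSA, instead of feeding the whole file to RSA.
echo 'example plaintext' > file.txt

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem
openssl pkey -in priv.pem -pubout -out pub.pem

openssl rand -out aes.key 32                                  # 256-bit session key
openssl enc -aes-256-cbc -pbkdf2 -in file.txt -out file.enc \
        -pass file:aes.key                                    # bulk encryption
openssl pkeyutl -encrypt -pubin -inkey pub.pem \
        -in aes.key -out aes.key.enc                          # RSA wraps only the key
```

Decryption reverses the steps: pkeyutl -decrypt with the private key recovers aes.key, then openssl enc -d recovers the file.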


"and why perhaps generalist programmers should stay far, far away from this stuff."

Could I adjust that to say "generalist programmers should stay at least enough in touch with this stuff to know how badly they'll screw it up on their own"?

I've had _many_ heated discussions with inexperienced devs who don't understand just how much you need to know (and how much you need to know that you don't know) before you can start ignoring the simple advice "SSL for data on the move, GPG for data at rest".


Virtual machines receive very little entropy from their environment, which is a real problem when entropy is required for the generation of cryptographic keys.

There have been many attacks based upon vulnerabilities which exist due to misunderstandings of entropy and of the need for a secure random number generator, for example the Mozilla SSL vulnerability and the Debian SSH key vulnerability.

I would agree with you that /dev/urandom can be used for one-shot passwords; however, getting into the habit of treating a non-secure random number generator as a source of secure entropy is a bad idea and should be discouraged.

I'd also like to point out that "the standard openssl RSA encryption function", last time I checked, worked to spec, and does in fact encrypt a symmetric key used for AES (by default) with RSA, including proper cryptographic padding of the key using PKCS#1.

I'm not exactly sure why you thought otherwise.

I do agree with your final assertion, though. Unless you know what you're doing, it's very easy to make a mistake.



