BoringSSL just uses urandom—it's the right answer. (Although we'll probably do it via getrandom rather than /dev/urandom in the future.) There are no return values that you can forget to check: if anything goes wrong, it crashes the address space.
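For the curious, the pattern looks roughly like this. It's a minimal sketch, not BoringSSL's actual code, and `fill_random` is a made-up name; the point is only that callers never get an error path to mishandle:

```c
/* Sketch of the "nothing to check" approach: read from /dev/urandom and
 * abort the whole process if that ever fails, so callers never see an
 * error. Not BoringSSL's actual implementation. */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static void fill_random(unsigned char *buf, size_t len) {
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        abort();                        /* can't open the device: crash */
    while (len > 0) {
        ssize_t n = read(fd, buf, len);
        if (n < 0 && errno == EINTR)
            continue;                   /* interrupted: just retry */
        if (n <= 0)
            abort();                    /* anything else goes wrong: crash the address space */
        buf += (size_t)n;
        len -= (size_t)n;
    }
    close(fd);
}
```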
There is a meme that pops up from time to time on HN, that "/dev/urandom is okay for stretching out random data but not sufficient for random seeds", which I've never understood. tptacek seems to be consistent in suggesting, "Just use /dev/urandom. Period." And it looks like people who also have a lot on the line concur.
If this is the case, I'm wondering if we'll see a day when gpg might migrate from /dev/random to /dev/urandom?
As djb says:
Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
(1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
(2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from ssl, pgp, etc.).
For a cryptographer this doesn't even pass the laugh test.
I'm not sure why we haven't seen the underlying CS-PRNG switched wholesale to Fortuna in Linux. For example, they are still using a SHA-1 based output hash, which seems overdue for retirement.
The higher level take-away is that trying to avoid the modest code complexity of the CS-PRNG by going directly to /dev/random is more likely to lead to an exploitable catastrophic failure than sticking with /dev/urandom.
However, on systems such as Solaris, the cryptography group (whom I know personally) specifically recommends the use of /dev/random in some cases. Those cases are largely limited to specific cryptography requirements such as a hard requirement for high entropy sources.
Regardless, all of this is largely moot at this point, as all *NIX-like systems (including Solaris) seem to be standardising on getrandom() (in Linux first) and getentropy() (in OpenBSD first), which sidestep many of the more subtle potential issues.
In particular: the Solaris documentation suggests that urandom is fine for nonces, and /dev/random for long-term keys. But that's the opposite of the normal threat model for randomness! You care most about high-quality continuous-availability randomness for the nonces, where biases can be devastating to security, and attackers get repeated continual bites at the apple.
If someone can explain the logic here, I'll update the page that AGL linked to in this article to account for that.
I think the Solaris people are wrong. I'm not sure, but I'd be willing to bet a small amount of money on it.
I have, and again, most of the reasoning is Solaris-specific.
The primary difference between urandom and random is currently higher quality re-keying for /dev/random, but there are a few internal implementation differences as well.
Fundamentally, it's about constraints placed on the implementation that currently make /dev/random more "secure". In addition, for Solaris /dev/random (and getrandom(GRND_RANDOM)), when the administrator explicitly configures a specific validation mode, the bits you get come from a validated DRBG (Deterministic Random Bit Generator). /dev/urandom is not affected by that validation mode.
They're really not in this case, but in fairness, you'd have no way of knowing about this without source code access and a few decades of knowledge about Solaris crypto.
You can find a summary from two years ago written by one of the primary Solaris cryptography engineers and architects here:
...they intend to update it soon for the next Solaris release.
From what I can tell, the situation on Solaris is very similar to that on Linux:
* there are two separate pools that service random and urandom,
* but both use very similar generators (in particular, both use CSPRNG DRBG constructions);
* some additional care is taken in random's case to ensure initialization (a good thing)
* but that care doesn't matter once the system is fully booted,
* and random is more aggressive about reseeding,
* but that only matters for post-compromise forward-secrecy, the need for which in this case implies you are completely boned anyways.
I'm standing by what I said before: use urandom, to the exclusion of all other generators, very much including on Solaris.
Except /dev/random, which can use a different generator when the administrator configures a specific validation mode. Again, specific cryptographic requirements.
> that only matters for post-compromise forward-secrecy
Not all cryptographic requirements are purely technical or even "ideal"; some reflect specific policy and/or customer requirements based on their specific cryptographic needs.
To you, this advice is based on "authority"; to me, it's based on the long-term friendship and working relationship I've established with them over the years, and my respect for them as engineers/developers/crypto experts. These are people who would run circles around most "geeks", have been doing crypto for a decade or more, and have both the pedigree and track record to show for it.
So in the end, you're of course free to believe what you'd like, and while I agree that your advice is generally reasonable, I continue to assert that it is wrong in specific cases for Solaris. As I already pointed out, and you seem to have ignored, the results you get from /dev/urandom are not the same as /dev/random in certain configurations.
As I said before, the Solaris crypto team intends to update the blog post I linked before with more details at a later date. Perhaps that will have the elusive information you're looking for.
I'm sticking with this argument because it is a common one. There's a widespread belief that there are low-quality and high-quality random numbers (or, if you must, "random numbers suitable for one kind of cryptographic application" and "those suitable for another"). I'm pretty sure this is an urban myth.
To me, in this case, the smoking gun is the Solaris team blog post that suggests urandom is appropriate for ephemeral and short-term secrets and nonces, and that random is appropriate for long-term secrets.
Unless they're trying to communicate that random is worse than urandom, and so it's safer to use it in offline scenarios, but not in demanding online scenarios, they have the threat model exactly backwards.
There are indeed such requirements; look up the requirements for FIPS validation. As I mentioned before, some requirements may not be purely technical in nature. You're free to feel they're not necessary, but some organisations clearly do.
They're not; I think it's just how you're personally interpreting the text. We're just going to have to agree to disagree.
tptacek has responded with specificity, actually detailing how the Solaris team is, in fact, wrong in their guidance.
From my perspective we have a two-way conversation in which one person is arguing about what their friends say, and the other is trying to talk detailed cryptographic requirements.
It might help if your Solaris friends could identify what those FIPS requirements are, and how /dev/random fulfills them but /dev/urandom does not. That would be interesting.
After all, the Linux team, which presumably also consists of pretty smart people, also believed they were right regarding /dev/random versus /dev/urandom, and it turned out they weren't either.
No, tptacek has responded with a view about how they are wrong in their guidance, but has done so without 1) access to the actual implementation or 2) the decades of experience of implementing and architecting it.
tptacek is certainly free to express opinions; but respectfully, I will trust the individuals that have implemented, architected, who are considered experts in their field, and who have maintained it for the last decade or longer over someone who has not.
I pointed out very specifically why my assertions about Solaris are correct. When you configure the system in a specific way, the implementation for urandom vs. random can produce different results. There are also other subtle differences in the implementations.
Any further details that can be shared will likely be placed in that blog post I linked to when it is updated for a forthcoming Solaris release.
I'd also like to know what their specific arguments are on Solaris and for what use cases.
I like how you put that. It's the best way to frame it, because infrastructure reliability is way too important to sacrifice over questionable security gains. Puts a QED on the whole discussion.
 That is, before enough external entropy has been gathered.
In practice, that's rare, but some consumers have strict, hard requirements that can't be ignored.
Retrieving output bytes doesn't change anything.
The view "take three bytes output, lose three bytes entropy" is simply not correct.
All of crypto relies on being able to generate an arbitrary amount of good random bytes from a single 256-or-whatever-byte seed. Otherwise it wouldn't be safe to encrypt a long message with a short key.
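To make that concrete, here's a toy sketch of the principle. It uses libsodium's ChaCha20 keystream purely as an illustration (it is not how the kernel's generator is built): one 32-byte seed expands into as much unpredictable output as you care to ask for.

```c
/* Toy illustration: expand a single 32-byte seed into 64 KiB of
 * keystream with ChaCha20. Requires libsodium. */
#include <sodium.h>
#include <stdio.h>

int main(void) {
    if (sodium_init() < 0)
        return 1;

    unsigned char seed[crypto_stream_chacha20_KEYBYTES];        /* 32 bytes */
    unsigned char nonce[crypto_stream_chacha20_NONCEBYTES] = {0};
    randombytes_buf(seed, sizeof seed);                          /* one-time seeding */

    unsigned char out[1 << 16];                                  /* 64 KiB from a 32-byte seed */
    crypto_stream_chacha20(out, sizeof out, nonce, seed);

    printf("expanded %zu bytes from a %zu-byte seed\n", sizeof out, sizeof seed);
    return 0;
}
```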
Anyway, a number of write-ups (esp Huhn's) and people like Ptacek have been clearing up the issue well. Meme should weaken over time. Might help to straight-up delete the bad claims from whatever sources people are seeing them, though, where possible. Man page comes to mind if it hasn't been updated.
I guess Wikipedia would be the highest-profile source for most people.
The English-language one mirrors the Linux man page; the German-language Wikipedia goes further and outright advises against /dev/urandom for "high requirements". This used to be even worse, though.
Like say when sshd generates host keys!
With getrandom() on Linux it will block if you don't set GRND_NONBLOCK and the pool isn't initialized (http://man7.org/linux/man-pages/man2/getrandom.2.html), which is the desired behaviour.
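A rough sketch of what that looks like in practice, assuming Linux and a libc that exposes getrandom(); this is just an illustration of the man-page semantics, not any particular project's code:

```c
/* Early-boot semantics of getrandom(): without GRND_NONBLOCK it blocks
 * until the pool is initialized; with it, it fails with EAGAIN instead. */
#include <errno.h>
#include <stdio.h>
#include <sys/random.h>

int main(void) {
    unsigned char key[32];

    if (getrandom(key, sizeof key, GRND_NONBLOCK) < 0 && errno == EAGAIN) {
        /* Pool not yet initialized; something like a host-key generator
         * should simply wait for it. */
        fprintf(stderr, "entropy pool not ready, blocking until it is\n");
        if (getrandom(key, sizeof key, 0) < 0)   /* blocks until initialized */
            return 1;
    }
    return 0;
}
```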
(EDIT: I suppose one might try to replace /dev/urandom with some pipe-like thing running in userspace, but that seems error prone and rather contrary to /dev just being "devices".)
 Without just doing it at the kernel level, which the Linux kernel developers seemingly still stubbornly refuse to do.
If you leave things in to appease the box-checkers, you end up with, well, OpenSSL.
Whoa, whoa, let's not be so cruel to other crypto developers. You can appease the box-checkers quite a bit without being an utter, insecure mess like OpenSSL. There are libraries that do, while making the box-checking stuff optional.
It would be true had you said: If the only thing you do is leave things in to appease the box-checkers...
Revocation doesn't really work yet. Is there deployed stapling-required yet?
See AGL's comment below.
For those wondering, that's because asm! is not stabilised and the stabilisation path for it is unclear (there's also been a proposal for an alternative asm!). Rust otherwise relies on LLVM, and Brian Anderson reported having
> been told by LLVM folks that getting LLVM to do constant time code generation is essentially hopeless, and it should just be written in asm.
Nadeko aims to do exactly that, but because of the above only works on nightly.
An RFC was created, then retracted for discussions on internals.
And now I wonder why there isn't already such a project; surely the number of functions that need to be constant-time is relatively low? Constant-time equality (of byte sequences), constant-time conditional (`a if b else c`), constant-time comparison (<=), and maybe constant-time byte copy?
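For instance, constant-time equality is only a few lines; the hard part, as noted above, is keeping the compiler from undoing it. A sketch in C rather than Rust (the function name is mine; projects like Nadeko reach for asm precisely because an optimizer is free to re-introduce the early exit this code avoids):

```c
#include <stddef.h>

/* Returns 1 if a and b (each len bytes) are equal, 0 otherwise,
 * touching every byte regardless of where they first differ, so the
 * running time does not leak the position of the first mismatch. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t len) {
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];        /* no data-dependent branches in the loop */
    return diff == 0;
}
```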
It's a possible future goal of the project to end up API compatible to OpenSSL to do just what the GP was saying. I think the constant time functions are the biggest reason it can't happen yet, though there's probably more I'm just not aware of.
A part of me regrets that so much time and skill is being sunk into a swamp like OpenSSL though. Surely by now it'd have been easier for Google to produce a much more modern C++ based SSL toolkit that doesn't have this ridiculous litany of problems to resolve? An OpenSSL API emulation could then have been layered on top. I realise a lot of C/C++ apps rely on the OpenSSL API, but I hope one day the industry finds a way to rip off the sticking plaster.
Not really. OpenSSL exposes "internal" data structures in its API, leaking e.g. X.509 data structures through. The only way to expose an OpenSSL API emulation is to be OpenSSL.
That's why the libressl project started libtls (née ReSSL) as a clean-slate abstracted API.
> An OpenSSL API emulation could then have been layered on top.
Refactoring code like this is actually a good way of doing things!
A major problem with OpenSSL is the API. It's hard to use correctly and encourages bad code. Effort would be much better spent transitioning applications away from that API than trying to emulate it.
Maidsafe seems to be implementing it somehow, for example: http://maidsafe.net/sodiumoxide/master/maidsafe_sodiumoxide/... (see the comments about PartialEq)
Both projects seem to have a similar goal, that is, throwing the crap out of OpenSSL for security reasons.
Doesn't Google's C code style require using signed ints for lengths?
As it should; it's inappropriate to use unsigned types for lengths, because their overflow and underflow behaviours are defined, and defined to be something other than what you want. When you have UBSan around, adding defined behaviour you don't need just adds places for bugs to hide.
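A contrived illustration of the argument, with hypothetical function names unrelated to BoringSSL's actual code: an unsigned length underflows to a huge value and sails past the sanity check, while a signed length goes negative and can be rejected, and signed arithmetic that genuinely overflows is undefined behaviour that UBSan can trap.

```c
#include <stddef.h>

/* Unsigned length: pkt_len - 4 wraps to a huge value when pkt_len < 4,
 * so the guard below happily accepts it and the (commented-out) memcpy
 * would read far out of bounds. Wraparound is defined, so UBSan is quiet. */
void copy_payload_unsigned(const char *pkt, size_t pkt_len) {
    size_t payload_len = pkt_len - 4;
    if (payload_len > 0) {
        /* memcpy(dst, pkt + 4, payload_len); */
    }
    (void)pkt;
}

/* Signed length: the same mistake produces a negative value that a
 * simple range check catches; arithmetic that really overflows the
 * signed range is UB, which is exactly what UBSan exists to flag. */
void copy_payload_signed(const char *pkt, ptrdiff_t pkt_len) {
    ptrdiff_t payload_len = pkt_len - 4;
    if (payload_len < 0)
        return;
    /* memcpy(dst, pkt + 4, (size_t)payload_len); */
    (void)pkt;
}
```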
- improvements in how they handle locks, efforts to avoid locks, and use of thread-local storage.
- better protocol test suites (adopted from the Go SSL implementation)
- very good that they threw out support for TLS renegotiation; nobody understands it, and it is always a big source of bugs.
I am not sure that a good-quality, non-blocking native random number generator is available on every client platform (other than Linux). It's interesting that they also try to use BoringSSL in Chrome (maybe here it would have been better to use the old crypto number generator, where you only have to worry about good random seed values).
(BTW, do they have the good /dev/urandom in Android? Here they say Android has just /dev/urandom, no /dev/random at all, so this seems to be general policy at Google.
Interesting whether this insecurity of the phone OS is a feature or a design requirement ...
However, for use with Google's servers it is all pretty much secure, so this is certainly a design requirement.)
I know this because I've dealt with a company that distributes files in Solaris-compatible DES format from an unencrypted FTP server and helpfully supplies Young's libdes to interface with it.
Well, at least it isn't EBCDIC...
"You can't turn off the debugging malloc but you can turn off sockets"
"If the size of socklen_t changes while your program is running, OpenSSL will cope"
EDIT to add:
Epic observation: "The good news is: if the size of socklen_t changes while your program is running, then... (other guy) OpenSSL will cope." Lol.
They clearly haven't lost their sense of humour despite working with this terrible material. Lesser men would turn cynical over half the things they have to endure :)
Does it mean you can more easily exploit Google's backend, since the code is open source?
This doesn't detract from your point, since NSS is also open source; it's just a factoid.