That requires linking with libpthread, which a single-threaded program would not normally do. Otherwise, it's not a bad suggestion.
Still, on top of everything LibreSSL does to automatically detect forks, it should still expose a way to explicitly reseed the PRNG in an OpenSSL-compatible way, since OpenSSL has made guarantees that certain functions will re-seed the PRNG, and there may be some scenarios where even the best automatic fork detection fails (imagine a program calling the clone syscall directly for whatever reason, in which case pthread_atfork handlers won't be called). Since LibreSSL is billed as a drop-in replacement for OpenSSL, you should not be able to write a valid program that's safe under OpenSSL's guarantees but not when linked with LibreSSL.
pthread_atfork itself really should be moved into libc, however. (And POSIX should stop treating it as a redheaded stepchild: it's useful!)
Static mutex initialization can be emulated with the call_once() function, with some limitations of its own, of course.
> That requires linking with libpthread, which a single-threaded program would not normally do. Otherwise, it's not a bad suggestion.
Reading that thread it seems like direct calls to clone(2) can bypass at least glibc's pid cache (which would likely also break LibreSSL's approach).
Any idea if direct calls to clone(2) also bypass pthread_atfork?
Wow, great find! LibreSSL could avoid this by calling the getpid() syscall directly.
> Any idea if direct calls to clone(2) also bypass pthread_atfork?
They do, since atfork handlers are invoked by the userspace wrapper for fork().
I don't think user libraries should try to deal with users subverting the facilities on which they rely. There are defined interfaces to system functionality. Break or bypass these interfaces, and you're on your own. If you subvert the usual API semantics by calling clone(2) directly or bypassing the fork(2) wrappers, you should be cognizant of the implications.
Combine those two reasonable patterns with LibreSSL, and suddenly you have a vulnerability. This is even more likely when you take into consideration that LibreSSL is intended as a direct replacement for OpenSSL; callers are even less likely to examine the fine print of the documentation for undefined and unsupported behaviour.
Still, the LibreSSL work is commendable and should be appreciated. The real problem is a lack of good regression tests - and there may be a messy future of niggly issues because of that. I've already had to deal with some.
I'll give another tricky example. One of the earliest pieces of functionality LibreSSL ripped out was an in-library DNS cache. It was poorly documented and the assumption was that it was there as a crutch for shoddy OS-level DNS caching. But I think this cache also played another role; it helped certificate validation workflows function. Sometimes endpoints bind different certificates to different IP addresses for the same DNS name when using DNS-level load balancing. If you don't make the name resolve to the same IP address consistently, then what can happen is that the first connect() gets certificate "A" and some user-facing UI or validation process authenticates it, but then another connect() gets certificate "B" and the caller logic gets confused.
Of course we could blame the caller; or the folks mixing certificates for the same name, but it doesn't really help; users still experience these problems. Just one example of why it is very hard to remove code in fully backwards compatible ways, even if the change seems very innocuous.
Edit: to be clear, none of this is an argument against using pthread_atfork - I just want LibreSSL to provide an explicit way to reseed the PRNG like OpenSSL does.
And there's a new tarball out:
If the PRNG is good enough (no visible correlation in the statistics tests you can imagine thrown at it), and it's properly seeded with true randomness, then isn't everything peachy?
I am much more afraid of the seeding part of it than the actual algorithm. The algorithms are well studied by smart people, the actual implementation and seeding aren't always.
The mere fact that one could reseed the PRNG makes me nervous. That could be used in devious ways. But I am not a cryptographer, not even a mathematician, so don't take my word for it!
Am I wrong here? Why?
Even if we end up with a list of linux-specific gotchas (and I don't think we will), it is more a case of ten steps forward, one step back.
But even while one isn't available, why is LibreSSL trying to use a userland CSPRNG instead of always reading from /dev/urandom and aborting when that fails?
I don't know why the rest of the function even exists. It's the kind of cruft LibreSSL is trying to get rid of.
I am not entirely sure a PRNG should even exist in the library, and personally, I'd pass it on to /dev/urandom or /dev/random or the relevant syscall.
I agree with making it (154-156) a hard kill when a TLS library is not able to get entropy.
And, this is great! This is exactly the kind of thing we're able to find now that some of the code isn't a hedge-maze.
EDIT to add: Frankly, it's a disgrace that Linux doesn't already do this, instead choosing to push the burden of getting all the details right to userspace where you can be vulnerable to all sorts of interesting timing attacks, FD-based DoS, etc.
Theodore Ts'o isn't completely opposed to the idea, it seems.
This problem is due to LibreSSL's internal architecture being too closely matched to OpenBSD, not a fundamental problem with Linux.
Secondly: It can fail with EINVAL (bad pointer) or EIO (>256 bytes requested), but does not fail even in a condition where file descriptors are exhausted. I don't know its behaviour if it is called too early in the boot process to have been safely seeded, but I hope it either errors loudly or blocks.
Mind though that under the BSDs, there's no functional difference between /dev/random and /dev/urandom.
That's why you open /dev/urandom in advance of performing operations that require randomness. If that open fails, you don't go on to perform the operation that requires randomness.
You can absolutely rely on internal file descriptors not being closed. A program that closes file descriptors it does not own is as buggy as a program that calls free on regions of memory it does not own. A library cannot possibly be robust against this form of sabotage. The correct response to EBADF on a read of an internal file descriptor is to call abort.
The "close all file descriptors" operation is most common before exec. After exec, the process is a new program that can open /dev/urandom on its own (since, as I've mentioned previously, it's a broken environment in which /dev/urandom does not exist).
I've explained several times why you can't. The program that closes all file descriptors may be broken, but the big problem is that as long as the library has no safe way of reporting this to the caller without breaking the OpenSSL API, they are faced with either breaking a ton of applications or finding an alternative. And they've explained why this is not an alternative (in the copious comments in the source):
> The correct response to EBADF on a read of an internal file descriptor is to call abort.
They have no control over whether or not this will result in an insecurely written core file that can leak data, and this is a common problem. If the person building the library knows that the environment it will be used in does not have that problem, it's one define to disable the homegrown entropy.
> The "close all file descriptors" operation is most common before exec.
I've seen it in plenty of code that did not go on to exec, to e.g. drop privileges for portion of the code.
OpenSSL is crufty in part because it's full of workarounds for ancient, crufty code. LibreSSL shouldn't repeat that mistake. LibreSSL does have ways to report allocation failure errors to callers. It shouldn't even try to work around problems arising from applications corrupting the state of components that happen to share the same process. That task is hopeless and leads to code paths that are very difficult to analyze and test. You're more likely to create an exploitable bug by trying to cope with corruption than actually solve a problem --- and closing file descriptors other components own is definitely a form of corruption.
> [LibreSSL has] no control over whether or not [abort] will result in an insecurely written core file
The security of core files simply isn't LibreSSL's business. The mere presence of LibreSSL in a process does not indicate that a process contains sensitive information. LibreSSL has no right to replace system logic for abort diagnostics. If the developers believe that abort() shouldn't write core files for all programs or some programs or some programs in certain states, they should implement that behavior on their own systems. They shouldn't try to make that decision for other systems. LibreSSL's behavior here is not only harmful, but insufficient, as the library can't do anything about other calls to abort, or actual crashes, in the same process.
> I've seen it in plenty of code that did not go on to exec, to e.g. drop privileges for portion of the code.
Please name a program that acts this way.
open+read+close + all the mess associated with exhausting file descriptors > getentropy
fairly obvious, isn't it?
Surely this is an obvious first question that all commentators are stepping over?
Also, it doesn't solve the issue, as /dev/urandom will also be inaccessible temporarily if you run out of file handles.
Couple that with the fact that whether or not /dev/ is set up correctly is a distraction: you have no guarantee that you will be able to open and read any file, since you cannot guarantee that nothing on the server can easily be used to hit file descriptor limits for a suitable process, or open-file limits for the entire system.
So this problem is there regardless of whether or not you're willing to demand a correctly configured /dev.
The second issue is resource limits: low level components of LibreSSL cannot cope with entropy-generating functions failing. On OpenBSD, these functions cannot fail, but on Linux, they can. That's not a problem with Linux, but with LibreSSL's architecture. It's LibreSSL's responsibility to ensure that it allocates the resources it needs. Every call to LibreSSL's internal RNG is preceded by some kind of resource-allocating call that can fail. It's in this call that LibreSSL should obtain the resources needed to do its work. That it doesn't is simply a bug in LibreSSL, not a deficiency in Linux.
The issue is not that anyone expects it to do everything it promises if the system is misconfigured, but that if it should fail, it should take care to try to avoid failing in ways that could open massive security holes.
This is the issue here: The developers believe that as the existence of systems with unsafe core files is well established, their options are limited, as there is a risk of exposing enough state to less privileged users with a core dump to leave the system vulnerable. Someone building for a system they know has properly secured core files can disable the homegrown entropy code, and the code will fail hard if /dev/urandom and sysctl() are both unavailable, and the problem goes away.
But what do you suggest they do for the case where they do not know whether failing will expose sensitive data? They've chosen the option they see as the lesser of two evils: do the best they can - only as a fallback, mind you - and include a large comment documenting the issues.
If they had full freedom to design their own API this would not be an issue. They could e.g. have put in place a callback that should return entropy or fail in an application defined safe way, or many other options. But as long as part of the point is to be able to support the OpenSSL API, their hands are fairly tied.
For my part, I believe strongly that it is a deficiency if we have to go through all these kinds of hoops in order to safely obtain entropy, when the solution is so simple on the kernel end: Ensure we retain a syscall.
Resource management is not a "hoop". It is a fact of life.
Which is fine if you have a safe way of returning errors in all code paths, but as the comments point out, they believe they don't. Maybe they're wrong, but they seem to have spent some time thinking about it. They've also provided an easy define to change the behaviour to failing hard for people building it on systems where the caveats against failing hard do not apply (e.g. systems with secure core files).
If they can't fail early in a safe manner without potentially creating a security leak (as they potentially would if an unprivileged user could induce an unsafe core dump), and can't even be guaranteed that they're able to safely log the error (there's no guarantee they'd be able to write it anywhere), it's hard to see alternatives but to try to do something that is "good enough" as the alternative could be much worse.
Neither is a good solution.
Any application that does that is broken. Please stop trying to bring up this behavior as something a library needs to support. It isn't. If applications try to close all file descriptors, then go on to do real work, plenty of things other than LibreSSL will break.
Would you go around calling munmap on random addresses and expect your application to keep working? Would you write a library that tried to guard against this behavior?
> if you have a safe way of returning errors in all code paths, but as the comments point out, they believe they don't.
That's an internal LibreSSL problem. There's nothing stopping the LibreSSL team from implementing the correct plumbing for reporting errors to callers. AFAICT, there is no sequence of valid OpenSSL API calls such that the library needs entropy but at no point in that sequence can report failure.
The problems you highlight are not things libraries should try to work around. They're systemic issues. Libraries calling raise(SIGKILL) because their authors don't believe systems have sufficiently secured their core file generation is absurd and only makes the problem worse because it makes overall system operation less predictable. (Imagine a poor system administrator trying to figure out why his programs occasionally commit suicide with no log message or core file.)
These are not problems that require system-level fixes. They're problems that require changes from LibreSSL.
It doesn't matter that it is broken. It matters whether or not it is done and how to deal with it when it happens.
> If application try to close all file descriptors, then go on to do real work, plenty of things other than LibreSSL will break.
Plenty of things that applications doing this will already have had to deal with successfully. Effectively failing hard now would be a change of behaviour that makes LibreSSL incompatible with the OpenSSL API it is trying to implement, possibly causing security problems in the process.
Either they do this properly, or they need to break compatibility with OpenSSL sufficiently that people don't accidentally start exposing sensitive data because they mistakenly thought LibreSSL was a drop in replacement (which it won't be if it does things like calls abort() in this case).
> Would you go around calling munmap on random addresses and expect your application to keep working? Would you write a library that tried to guard against this behavior?
Strawman. munmap() on random addresses is not something I have seen. Looping over all file descriptors and closing is something I have seen in lots of code. Code that works fine unless someone introduces a breaking change like suddenly holding onto a file descriptor the library previously didn't.
And when the risk is exposing sensitive data to a potential attacker, I would most certainly weigh the risks of failing hard vs. attempting a fallback very carefully.
> There's nothing stopping the LibreSSL team from implementing the correct plumbing for reporting errors to callers.
There is: It would mean LibreSSL does not work as a drop-in replacement for OpenSSL. That may very well be a decision they have to make sooner or later, and may very well be the right thing to do, but there are big tradeoffs. They've chosen this avenue for now, with a large comment in the source making it clear that this is in part a statement about their belief that the best solution would be for Linux to keep a safe API to obtain entropy.
Note that this is code that will never even be executed when running on a current mainline kernel. It will break on systems where people have been overzealous about disabling sysctl(), or on systems moving to some future mainline kernel, whenever that may be released.
> The problems you highlight are not things libraries should try to work around. They're systemic issues. Libraries calling raise(SIGKILL) because their authors don't believe systems have sufficiently secured their core file generation is absurd
It's not absurd when we know for a fact that this often happens and is a common source of security breaches.
For systems where this is not an issue, disabling the fallback is a define away for your friendly distro packager.
> and only makes the problem worse because it makes overall system operation less predictable. (Imagine a poor system administrator trying to figure out why his programs occasionally commit suicide with no log message or core file.)
I'd rather be the system administrator trying to figure this out, than the system administrator that doesn't know that various data found in my core files have been leaked to an attacker.
It will also only happen if: /dev/urandom is inaccessible and you're running on a kernel without sysctl() and you've chosen this alternative over the built-in fallback entropy source.
> These are not problems that require system-level fixes. They're problems that require changes from LibreSSL.
They're problems that may not be possible for LibreSSL to fix without failing to meet one of its main near-term goals of being a drop-in replacement for OpenSSL.
This is also likely to be a problem not just for LibreSSL - to me it raises the question of how many applications blindly assume /dev/urandom will always be available and readable. It is a system-level problem when every application that wants entropy needs to carefully consider how to do it to avoid creating new security holes, when the solution is simply to retain a capability that is currently there (the sysctl() avenue) or to implement getentropy().
We're not likely to agree on this, ever. We're going in circles now, and just reiterating largely the same argument from different angles.
I won't comment any more on this, other than saying that for me, it's a matter of a basic principle: Assume everything will fail, and think about how to be the most secure possible in this scenario. To me, that makes the decisions of the LibreSSL developers seem like the only sane choice in a bad situation, assuming the constraint of sticking to the OpenSSL API. Long term I think they ought to clean up the API too, but short term I think we'd get far more benefit out of them making it possible to safely replace OpenSSL first. And that may require sub-optimal choices to deal with the worst case scenarios, but then so be it.
If I have a choice of accommodating broken applications that close all file descriptors (and you have still not named one) and having a system in which libraries can retain internal kernel handles, I'll take the latter. LibreSSL already breaks compatibility with OpenSSL in areas like FIPS compliance. Compatibility with broken applications is another "feature" that would be best to remove.
> Strawman. munmap() on random addresses is not something I have seen. Looping over all file descriptors and closing is something I have seen in lots of code. Code that works fine unless someone introduces a breaking change like suddenly holding onto a file descriptor the library previously didn't.
There is no fundamental difference between a library-internal resource that happens to be a memory mapping and one that happens to be a file descriptor. Are you claiming that no libraries in the future should be able to use internal file descriptors because there are a few broken applications out there that like to go on close(2)-sprees?
If that level of compatibility is important to you, do what Microsoft did and implement appcompat shims. Did you know the Windows heap code has modes designed specifically for applications that corrupt the heap?
If you're not prepared to go down that road, please recognize that broken behavior is not guaranteed to work forever. There was a time when use-after-free was a very common pattern: on a single-thread system, why not use a piece of memory after free but before the next malloc? That pattern had to be beaten out of software in order to allow progress to be made. As it was then with memory, now it is again with indiscriminate file descriptor closing.
> There is: [proper error plumbing] would mean LibreSSL does not work as a drop-in replacement for OpenSSL
This claim is simply untrue. The OpenSSL API has a rich error-reporting interface. No compatibility break is required for reporting entropy-gathering failures. The only needed changes are inside LibreSSL, and its developers appear to be refusing to make these changes.
> for me, it's a matter of a basic principle
Another basic principle is that you can't get very far if you assume everything can fail. You have to have a certain axiomatic base. The ability to have private resources in a library is a perfectly reasonable basic assumption.
I believe the heart of the issue is that /dev/urandom will give you a string even if it has very low entropy at the time.
You can find all sorts of articles for and against /dev/urandom and I don't really know enough to comment on its security, but I trust the team working on this fork more than I trust the OpenSSL foundation.
On OpenBSD, /dev/urandom does the right thing, unlike Linux. As per http://www.2uo.de/myths-about-urandom/ -
> FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
LibreSSL tries /dev/urandom first, then falls back on a deprecated sysctl() interface, then tries its own "last resort fallback".
So BSD /dev/urandom is more secure in that it never gives bad random numbers for some baseline badness. He was not factually wrong about that, although he is wrong in stating that that was the reason the OpenBSD developers are dismissing it.
For what it's worth, the "exploit" requires (1) the user deny access to both /dev/urandom and the sysctl interface, (2) multiple levels of forks (a child never has the same PID as its parent, but a grandchild can have the same PID as its grandparent), and (3) the grandparent must exit before the grandchild (PIDs uniquely identify all running processes). It's not something that will happen by accident, even to incredibly careless programmers.
But I do agree with the BSD guys that Linux should have another way to get entropy in this case (note that they have a similar file for OS X for similar reasons). And I hope it's not named CryptGenRandomBytes().
For example, what of routers that have no means of entropy input but interrupt timing? What of Android libraries that just use libcrypto? These systems are usually free to exploit by determined attackers!
LibreSSL/OpenSSL doesn't think "unlikely" and tries to cover as much as possible. The TLS library needs to work as well as possible regardless of the context.
Comments for getentropy_linux.c explain this http://www.openbsd.org/cgi-bin/cvsweb/src/lib/libcrypto/cryp...
We have very few options:
- Even syslog_r is unsafe to call at this low level, so there is no way to alert the user or program.
- Cannot call abort() because some systems have unsafe corefiles.
This logic seems specious. It's not the job of a library to solve that problem. If a system has crash dump collection configured insecurely, the problem is going to extend well past the SSL library.
> * This can fail if the process is inside a chroot or if file
* descriptors are exhausted.
The right solution is to pre-open the file descriptor. SSL_library_init can fail. Do it there.
To be fair to the LibreSSL devs, the Linux-specific /dev/urandom code is currently encapsulated rather nicely behind an interface that's compatible with the OpenBSD getentropy() syscall. Following your suggestion would create a layer violation and move LibreSSL closer toward the (much maligned) OpenSSL approach to cross platform compatibility. I don't think this is a great excuse for the current design, but it's an explanation.
The OpenSSL approach to portability is doomed: it can only deal with cosmetic differences between platforms. I appreciate the principle of using compatibility functions instead of #ifdef, but at some point, you need to incorporate the panoply of architectures into your design. It galls me to see the OpenBSD people claim that Linux is broken merely because it is different. That's incredible arrogance.
> This code path exists to bring light to the issue that Linux does not provide a failsafe API for entropy collection.
Trying to make a point about Linux doesn't seem like a very good reason to me.
If you were porting this to a GNU/Linux distro, you can read their list of options and raise(SIGKILL), resulting in silent termination, if that's what your platform decided to do when both entropy methods fail, or test for it earlier and fail. Since they are BSD developers, they leave it up to whoever is porting to decide.
Huh, FreeBSD has MAP_NOCORE which allows the program to map pages that will explicitly not be included in the core file. I never realized that this was FreeBSD-specific extension (added in 2007?).
I'm really surprised other platforms haven't adopted it, though I surmise there's a good technical reason or two. (EDIT: or maybe there's similar functionality via another API? I haven't been able to turn up anything).
If that's the case then the fix will not be as simple as I envisioned it. Still, the point stands that LibreSSL should allow you to initialize the PRNG once, before you chroot, so that you can use the PRNG safely once inside the chroot. This could be accomplished by keeping a file descriptor to /dev/urandom open.
I'm fine with stirring in new entropy after chrooting - I just don't want to see sketchy entropy being used, especially for the initial entropy source. If you could make LibreSSL open (and keep open) /dev/urandom before you chroot, LibreSSL could read additional entropy from the already open file descriptor, even after chrooting.
In any case, note that the chroot issue is a bit of a sideshow compared to the much more serious fork issue.
OpenSSL allowed developers to interfere with RNG freely, so LibreSSL must do that, too? [Even if times have changed?](http://permalink.gmane.org/gmane.os.openbsd.cvs/129485)
Well, you can't really go at improving and cleaning up the library if you have to keep up all the old bugs and the whole crusty API around.
It's unreasonable to expect LibreSSL to be both better than OpenSSL, yet have the exact same API and the exact same set of bugs and nuances as the original OpenSSL.
LibreSSL is meant to be a simple-enough replacement of OpenSSL for most modern software out there (http://ports.su/) — possibly with some minimal patching (http://permalink.gmane.org/gmane.os.openbsd.tech/37599) of some of the outside software — and not a one-to-one drop-in replacement for random edge cases that depend on random and deprecated OpenSSL cruft.
There are various tricks to get a limited number of bytes from /dev/urandom into the chroot jail (such as by writing them to a regular file and secure-erasing that file when finished) to get around that.
* Original process with PID 17519
* PID 17519 forks producing a new process with PID 26606
* PID 17519 produces some "random" bytes then exits
* PID 26606 forks producing a new process with the now unused PID 17519
* New PID 17519 produces some "random" bytes, which will be the same as the "random" bytes produced by original PID 17519, causing a raptor to attack the user.
If I get a block from /dev/urandom, then another one at some later time, what are the chances it's identical? Isn't that what you're saying here? (Or was the whole post intended to be comical, and not just the last line?)
[If only it had been raining it would have taken days longer for the raptor to attack, or something /random]
You could mix in the parent's PID too, but that would only delay the problem (you'd need more layers of fork before triggering the shared-state bug again).
Why can't LibreSSL just open /dev/urandom once, on first call to RAND_poll/RAND_bytes/some-other-init-function, etc. and then always read from it directly. If that first open fails then you return an error from RAND_poll/RAND_bytes.
You can't just add the parent pid because that information is lost when the process's parent exits (the ppid becomes 1).
1) The PRNG wrapper apparently depends on the pid to "detect" if there's been a fork, and so the PRNG seed will remain the same in grandparent and grandchild if you double-fork and manage to get the same pid. This may or may not be a problem - whether or not you can induce enough forks to manage to get the right pid will depend on the application. This is not dependent on the fallback in 2).
2) If both /dev/urandom and sysctl() are unavailable, it falls back on generating its own entropy using a convoluted loop and lots of sources. There's all kinds of ways that can be nasty, but that relies on enough factors that just getting the same pid would be insufficient in and of itself (but it may very well reduce the entropy).
I think the scenario here is that process X forks, creating process Y, then process X terminates, then process Y repeatedly forks until it creates a process Z with the same PID as X.
See http://linux.die.net/man/2/fork for more details
This is the way the fork syscall works on all Unices: the child starts execution right after the fork system call.
I honestly prefer open source and recognize the problem the author points out as a clearly significant problem - as well as the benefits of LibreSSL, but I'm just not convinced there are enough eyeballs looking at open source crypto.
Closed source proprietary crypto, you just don't know who wrote it, who audited it and who backdoored it and who knows of any flaws in it.
Open source crypto, it's there. Go read the source. Anyone can and it's open for audit.
There aren't enough eyeballs I agree but there are infinitely more trustworthy people looking at it than closed source.
But reflect on this: we're looking at it now. There are more eyeballs looking at open-source crypto than closed source crypto. Reflect on that for a moment, and on the RSA BSAFE/NSA 'enabling' and the like, and remember that being well-funded didn't stop Apple's source-available implementation from going directly to fail.
I wonder, for example, what's really under the hood of, say, Microsoft SChannel/CAPI/CNG? I'm a reverse-engineer (which means I don't need no stinking source code, given a large enough supply of chocolate) so I may look in detail, when I get a large enough patch of free time. I've heard it's not as bad as it could be… but I know on this subject, for example it ships Dual_EC_DRBG as alternative in CNG (but uses CTR_DRBG by default from Vista SP1 onwards, thank goodness). The old RtlGenRandom wasn't too great, I know that much.
In other words, with proprietary software, at least SOMEBODY evaluated it and placed their seal/name on it. With open source, you are relying on a hope that somebody out there somewhere does it. And in various cases, we've seen how that turned out.
You're not making a substantial argument, but if someone has a track record of writing completely safe software that doesn't have to be secured through third party products, like Microsoft, I will be willing to listen.