There has historically been some crowing from the LibreSSL crowd about how their work avoided CVEs later discovered in OpenSSL: https://undeadly.org/cgi?action=article&sid=20150319145126
The bug count per library cannot be used as an absolute metric, and not all bugs are security vulnerabilities (though many can be under specific circumstances).
BoringSSL has fewer bugs than LibreSSL, which has fewer bugs than OpenSSL. One of the reasons for this could be that the bug count is proportional to the complexity/SLOC: BoringSSL is smaller in terms of functionality (# of message digests, ciphers, ...) than LibreSSL, which is smaller than OpenSSL. OpenSSL sometimes commits new buggy code whereas BoringSSL does not, but OpenSSL has a higher commit frequency than BoringSSL, and BoringSSL might be aiming for a production-safe master branch where OpenSSL might not.
So it's not easy to derive a reliable metric from this, but with that said, you can't really go wrong with BoringSSL if it offers what you need.
then this may interest you:
^^ ASN.1 is really the bee's knees for fuzzing telecom protocols in UMTS/LTE/5G etc., and it doesn't get enough love in other domains. There's a steep learning curve, but once you get past the "standardese" in the specs, it opens doors to opportunities in many industries.
> BoringSSL has fewer bugs than LibreSSL, which has fewer bugs than OpenSSL. One of the reasons for this could be that the bug count is proportional to the complexity/SLOC
The OpenSSL codebase is notorious, but I think that's also because it has been in existence for so long. If I look around today I see two camps: cryptographers and software engineers. Letting software engineers do crypto is usually a bad idea, but it's often worse when cryptographers start coding. It's almost a variation of the old joke that "the two most dangerous things in tech are a software engineer with a soldering iron and a hardware engineer with a software patch" ... apart from complexity leading to bugs, I'd also say there is another downside that is stronger in OpenSSL: people end up using it wrongly, which makes it a proverbial foot-gun for implementers.
>  https://github.com/guidovranken/cryptofuzz
very cool thanks!!
I think to make such a claim you need to specify attack model constraints. With an unconstrained model all bugs in a library are in fact security vulnerabilities.
This does depend on a very narrow reading of what constitutes a bug; for example, I would not consider it a bug per se if a function does something you wouldn't expect (and which perhaps the developers didn't intend) but strictly matches the documentation.
The problem is that the application developer may depend upon documented features of your library in ways you didn't anticipate, and if that documentation is wrong (because of a bug), then you can't be sure it doesn't have security implications somewhere.
Also, OpenSSL supports all kinds of ancient esoteric platforms that are essentially unused, yet were kept in the code base for sentimental reasons.
The real metric they should be looking at is the number of features/platforms/LOC removed from the project. Less code = smaller attack surface.
I happened to be reading some OpenSSL code around the time of Heartbleed and statically linking some of it into a project. I found that a lot of the portability ifdefs were poorly done, even by the standards of the 90s. It wasn't portability itself that was the issue; it was the actual quality, and in many cases the wrong abstractions were in place.
One example that sticks out in my memory... There was a logger that had a Win32 ifdef. If you ran on Win32 it fed log messages directly to MessageBox(). Unless it detected that the current process was an NT service, in which case it used another logging mechanism without asking. That isn't really "Windows portability" or "Windows support". The whole thing wasn't appropriate for a library. It could have had a mechanism to hand log lines to the application as a C string and be done with it. Instead, it mixed library-layer and application-layer concerns, or was just ordinary library bloat.
The other huge example I recall was compliance with older configurations that were pre-C99. C99 is much more universal now vs 20 years ago, although there are a few things Microsoft still doesn't support. But again, a bunch of these things seemed to be handled with ifdefs at the call site, rather than put in a proper compatibility layer that only an older configuration gets.
It does seem that in the years since then, cruft has been removed not only in forks but also in the upstream project.
Also, as someone who maintains a code base targeting a dozen different pieces of hardware: ifdef madness is a strong indication that your program architecture is broken. Sometimes it's unavoidable, but you should think of ifdefs as another name for FIXME.
The developer didn't know what algorithm to pick so he just went with one at random, assuming it was ok since it was in the library. How many other security vulnerabilities are out there due to similar circumstances? It's a bit troubling.
If the purpose of the hash in that code was security-critical and compromised by malicious collisions, it would definitely be a problem. Otherwise it shouldn't be --- and jumping at things without understanding the nuance is precisely one of the problems with the "security industry" today.
$ set def [.openssl]
$ run openssl
WARNING: can't open config file: SSLROOT:
OpenSSL 1.0.0r 19 Mar 2015
To a first approximation, crypto is pure maths. The rest can be taken care of by the standard C library.
How does that work? How does anyone even approve it if it isn't going to be used?
Admittedly I'm out of the loop as far as contributing to such projects, maybe letting that stuff in is the norm?
How do you know it won't be used if it isn't put in in the first place?
The criticism here is that OpenSSL wasn't particularly choosy about which features of SSL (or crypto in general) it supported; it supported all of them, even those of more questionable utility.
> Heartbleed can't even be considered the worst OpenSSL vuln. Previous bugs have resulted in remote code execution. Anybody remember the Slapper worm? That worm exploited an OpenSSL bug (apparently titled the SSLv2 client master key buffer overflow) which revealed not only encrypted data or your private key, but also gave up a remote shell on the server, and then it propagated itself. Yeah, I'd say that's worse. But no headlines.
> I mention this just to reinforce that LibreSSL is not the result of "the worst bug ever". I may call you dirty names, but I'm not going to fork your project on the basis of a missing bounds check.
> why fork
> Instead, libressl is here because of a tragic comedy of other errors. Let's start with the obvious. Why were heartbeats, a feature only useful for the DTLS protocol over UDP, built into the TLS protocol that runs over TCP? And why was this entirely useless feature enabled by default? Then there's some nonsense with the buffer allocator and freelists and exploit mitigation countermeasures, and we keep on digging and we keep on not liking what we're seeing. Bob's talk has all the gory details.
You can see "Bob's talk" here: https://www.openbsd.org/papers/bsdcan14-libressl/ though I think there is a YouTube video somewhere.
Also I would like to point out that Google also forked OpenSSL. So it seems that the LibreSSL folks are not the only ones who thought a fork was necessary.
A lot of that ugliness seems to come from the fact that OpenSSL wants to support all environments (even DOS). I wonder why distributions haven't switched, since LibreSSL was made to be API/ABI compatible with OpenSSL and targets a POSIX OS. This would be much more justified than the ffmpeg / libav thing, imo.
LibreSSL is neither API compatible with newer OpenSSL versions, nor is it ABI compatible. In fact, they break ABI every six months. Furthermore LibreSSL upstream only targets OpenBSD, with the portable version existing as an afterthought.
The only linux distribution using LibreSSL is Void Linux (Alpine switched to OpenSSL some time ago). Even Void is considering switching to OpenSSL: https://github.com/void-linux/void-packages/issues/20935 .
The slides came out in 2014 so the API / ABI thing was probably true then but not anymore.
Maybe things would have been different if LibreSSL had been backed by a major Linux distribution as well as OpenBSD. Even then, Unix/Linux is not the only target of a lot of software, and I doubt many developers would have put in the time to support both.
[Edit] I just saw in another comment that LibreSSL is used on macOS and Windows for OpenSSH. Maybe developers will consider it if it becomes available on major platforms.
1. It's a ton of packaging work even if the API/ABI were compatible (Calling it compatible is a bit of a stretch IMO).
2. One of the things LibreSSL removes is the FIPS validated stuff. Distributions that harbor ambitions of being used in large US corporate and government installations want that.
3. By the time the portable LibreSSL build system came out, there were already significant improvements afoot within the OpenSSL project.
I'm sure there are other reasons, but those are the big ones I'm aware of.
Sounds like it could have happened if someone went to bat for it. Red Hat deciding to include it (even if they didn't replace OpenSSL with it immediately) and pushing to get it certified and the portability stuff more stable would have done this.
> 3. By the time the portable LibreSSL build system came out, there were already significant improvements afoot within the OpenSSL project.
That's probably the real reason. Although, given the issues mentioned in that bug/talk, and how much of them seems rooted in extreme portability, some of the problems (code complexity, not to mention the ROP helpers) probably survive unless OpenSSL decided to give up on some of that portability, which I doubt it did (not that I know).
Someone correct me if I'm wrong, but I seem to remember either BoringSSL or LibreSSL (or both?) saying that their fork removed support for DOS because not only did almost nobody use it, it didn't even work anyway.
"LibreSSL with Bob Beck" at https://www.youtube.com/watch?v=GnBbhXBDmwU
I think I actually got grey hairs from that...
Then I realised that OpenSSL is quite entrenched; I'm still running it to this day, in production.
OpenSSL implemented its own memory management system instead of using malloc. It would allocate one pool of memory and then manage allocations within it itself. This meant that static analysis, runtime analysis, and fuzzers were incapable of finding memory bugs, because all pointers into that pool were "valid". LibreSSL stripped out OpenSSL's memory system and replaced it with malloc() and free() as provided by libc. Suddenly, static analysis, Valgrind, and fuzzers found something like three dozen memory errors.
That is bad on so many levels.
OpenSSL supports big-endian x86_64. OpenSSL supports EBCDIC paths, certs, encodings, etc. OpenSSL supports DOS and Windows for Workgroups 3.11. It had the config defines NO_OLD_ASN1 and NO_ASN1_OLD, and the two did different things. Having more eyeballs on a project didn't help OpenSSL, because all of those eyeballs glazed over immediately and/or had forks stabbed into them.
In something like the first month, the LibreSSL project deleted something like 90,000 LOC and 60,000 lines of comments/whitespace. Not changed: deleted. Removed useless cruft like mentioned above, and dangerous cruft like SSL 3.0 support. The latest OpenSSL tarball is 9.4MB, the latest LibreSSL tarball is 3.6MB. It has fewer eyeballs, sure, but it's an order of magnitude easier to audit.
I believe the state of the art in generic malloc implementations has improved since this sort of thing was commonplace, though.
That doesn't sound like an accurate description of LibreSSL, what with it being a part of OpenBSD.
...so even more people are looking at it? not sure what problem you think is happening here.
Example: "Do you really want to build OpenSSL for 16-bit Windows? Well, we don't."
Looking it up on Wikipedia, it seems that LibreSSL is focused on OpenBSD and removed lots of legacy code. BoringSSL (Google) got renamed to Tink, but I couldn't find much more.
It's sad to see that duplication of effort, but it's also the strength of open source.
Tink isn't really a rename, but an API wrapper to prevent misuse that happens to use BoringSSL internally.
OpenSSL and its forks expose the primitives directly, but the usual advice applies: don't roll your own crypto and say "we're using RSA 2048" because you copied an example from Stack Overflow without padding.
How's that a FOSS thing? Do you think there would be fewer TLS implementations running around if they were proprietary?
This was the engineer who helped set up the new policy: https://awe.com
To be honest, maybe it's a good idea. It depends on how much support Huawei is willing to give OpenSSL.
> The OMC voted this week to update our security policy to include the option of us giving prenotification to companies with which we have a commercial relationship. (Edited to clarify: the vote was to allow notification to our Premium Support customers and this does not include lower support levels, sponsors, or GitHub sponsors.)