This approach makes it quite apparent where accessor functions are missing and where any loss of functionality comes from, and it can be rolled out over an intermediate period without being too obnoxious to work with in the meantime.
But good software engineering practices and OpenSSL seem to be mutually incompatible so I guess I shouldn't be all that surprised. Leave it to them to do the most obnoxious thing every time.
The ABI/API of the older versions had eventually become inadequate for what modern TLS/crypto applications need to do, and people had consciously worked around it by digging into structs they should never have been accessing anyway. Most problems I have fielded on this front turn out to be things you never should have been accessing to begin with, but people did out of necessity.
If anything this highlights a more general problem with linking strategy against core libraries. There is no way to add accessors without a high risk of breaking the universe, and this makes the OpenSSL problem worse because downstream packagers like Debian and RHEL are weighing doing nothing and breaking nothing against doing anything and breaking everything.
Hogwash. Adding a function doesn't affect existing programs at all. The tricky part is getting people to use the function instead of the field to get the same information. That's the point of introducing the macro around field access: either forcing people to define things in their build to keep things compiling (acknowledging their impending doom and refusing to do anything) or fix their code. If you write the macros in a clever enough way, they'd have to touch every access site to get things building again anyways, so they might as well use accessor functions when they go to port.
Then define a hard date in which things are going to break (near enough in the future to call people to action, far enough to actually complete a port if they're trying), and then break as close to that date as you can to both build trust and confidence in the community. (This of course includes feedback; if everyone is saying the port is taking twice as long as planned, adapt your break date.)
This is not actually difficult; it just requires software maintainers who care, and a healthy environment of engineers willing to do the work to clean up their software. It also requires that software landfills like the Debian and RHEL repos systematically purge software that refuses to update or has zero active upstream maintainers, as it's likely full of badware, or at the very least insecureware, anyway.
Where OpenSSL breaks down is the health needed to take these steps; although recovering, thanks in large part to the spotlight placed upon them and the few paltry dollars offered up by various tech companies to maintain the commons, they still haven't caught up to simple practices like these. (I'll partially blame this on fragmentation like LibreSSL and BoringSSL, as enabled by software licensing, and on companies like Apple (and to a lesser extent Amazon, with s2n) deciding to abandon public implementations and write their own based on them rather than returning effort to community solutions, but that's a topic for another day when I feel like donning asbestos armor and beating up trolls with cluebats.)
Not sure what you're trying to say. Sure there is. You add the accessors but don't simultaneously convert all structs to opaque types. Accessor functions work just fine on transparent structs, too.
Hear hear. Exactly what Carbon did to ease porting from InterfaceLib.
* #define OPAQUE_TOOLBOX_STRUCTS 1
* #define ACCESSOR_CALLS_ARE_FUNCTIONS 1
It will compile against (and work with) OpenSSL 1.0.0, 1.0.1, 1.0.2, and 1.1.0, and LibreSSL 2.0, 2.2, and 2.4, with only a small header file and some ifdef soup. I imagine adding support for LibreSSL 2.5 would be trivial, if anything needs changing at all.
The C pre-processor exists. Learn it. Use it. Love it.
The email states that writing your own wrappers or accessors is a bad idea (and I agree), but that's not what's going on here -- I'm using the preprocessor to decide which code path to follow based on the version we're building against.
It's as if it was only written for 1.1.0 (when building against 1.1.0), and only written for 1.0.2 (when building against 1.0.2), you get the idea.
The only maintenance burden then is figuring out how to do something in both versions, and adding future preprocessor branches if the API changes again. This is always the case when deciding whether a new library feature is available, anyway.
Are these two options specified anywhere in the email?
Why are libraries written in this way? Seems fairly insane to expose parts of another library as a part of your public interface. This feels like the shining example of a poor practice coming back to bite, hard.
For instance if you want to implement certificate pinning with libcurl you can provide a callback with "CURLOPT_SSL_CTX_FUNCTION" that will be called at the beginning of the SSL connection: https://curl.haxx.se/libcurl/c/CURLOPT_SSL_CTX_FUNCTION.html
This callback is provided with an opaque "void * ssl_ctx" which is probably an OpenSSL "SSL_CTX *" and then it's up to you to do the rest. That's assuming libcurl is built with OpenSSL or wolfSSL (what happens if your code gets linked with a libcurl that's compiled against wolfSSL and you use the pointer as an OpenSSL context? I don't know either).
Seems like a very unsafe way to handle that but it's not like I can come up with some easy way around it besides adding a whole lot of complexity to libcurl.
I really have a huge respect for the maintainers of OpenSSL, managing such a codebase is quite a daunting task. I consider myself a pretty decent C coder but every time I've had to deal with OpenSSL I always ended up with a deep feeling of uneasiness. Simply figuring out if you're supposed to free the pointers returned by some OpenSSL calls ends up in a deep dive into the source code. Or you get frustrated and end up copy/pasting some code you've found on stack overflow and hope the original poster knew what they were doing.
I'm personally very eagerly looking forward to a comprehensive crypto library written in pure Rust, but it will take time. In the meantime there's rust-openssl (and other language bindings), which manage to hide some of the insanity.
It's not necessarily intentional. One of the more "interesting" bugs we've seen in FreeBSD was when a program linked to libraries A and B, and library A linked to libcrypto (openssl) while library B linked to libmd (a lightweight hash function library).
Each part worked fine independently, but when it was all put together, the dynamic linker said "we need to find an object called SHA256_Init? Oh look, it's in this library we already loaded!" with predictably hilarious results. (We now use preprocessor macros to define _libmd_SHA256_Init etc. to avoid such linker confusion.)
What you think your public interfaces are and what the linker thinks your public interfaces are don't necessarily align.
If you are a library maintainer and export a stable ABI, it might also be good to provide a version for every symbol, if for no other reason than to avoid issues with the flat namespace.
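With GNU ld that's done with a version script; a minimal sketch, with assumed library and symbol names:

```
/* libwidget.map -- hypothetical GNU ld version script */
LIBWIDGET_1.0 {
    global:
        widget_new;
        widget_free;
        widget_get_refcount;
    local:
        *;      /* hide everything not listed above */
};
```

Linking with `cc -shared -Wl,--version-script=libwidget.map ...` stamps every exported symbol with `LIBWIDGET_1.0`, so the dynamic linker can distinguish them from same-named symbols in other libraries, and a later `LIBWIDGET_1.1` block can add symbols without ambiguity.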
You may still think it's insane, but it's much less insane than that quote might otherwise imply.
It will take some time to port all these projects over to 1.1, but everyone can easily see the benefits.
Supporting both will be a nightmare. Just drop 1.0 support and require 1.1.
The point is applications won't be able to easily do this. As the post points out, it's likely many distributions will adopt the 1.0 release series due to the length of support. So if you only use the 1.1 API, you may find the burden of installing another OpenSSL release is placed on your users, as their base system contains a version of OpenSSL with an incompatible API.
To be clear, I think the API change has good intentions, but it would have been nicer to have marked parts deprecated and introduced new interfaces more gradually. OpenSSH is a good example of this in terms of advertising breaking changes ahead of time.
-The API changes are needed and good
-The migration to 1.1 will be difficult
Outside of bindings to OpenSSL for each language, the migration path is pretty simple.
This was a cleanup that had to happen sooner or later: projects using OpenSSL were never going to have an option to not change.
See https://github.com/wahern/luaossl/blob/master/src/openssl.c#... for all the compatible wrappers we have
The migration path in question isn't "change project to support OpenSSL 1.1" - it's "change project so that it can be built against both OpenSSL 1.0 and OpenSSL 1.1 in the medium term". And yes, of course that can be done by introducing a bunch of glue macros or #ifdeffery in each project, the argument here is just that if every project has to reinvent that compatibility layer then inevitably many of them will be incorrect.