And right about here is where the claim that dynamic linking is better for everyone's security update schedule falls apart, for this reader.
Dynamic linking does not make it easier to update things. There, I said it.
Updating things confidently and safely requires -- as the article wisely points out -- testing those things, and the entire matrix of other things they interact with.
Static vs. dynamic linking simply doesn't enter into it. In either case, the newer versions need to be built, and then tested together.
The only thing dynamic linking does is make it possible for distros to hamstring themselves[^] by tossing everything into one big cauldron, so nothing can be upgraded because upgrading anything means testing too much at once.
And here's the endgame. "Is OpenSSL getting updated on $distro?" "Nope."
Sigh.
---
[^] And this is a red herring, in turn: dynamic linking isn't the real enemy either; it's the global slop-pile approach to sharing dynamically linked libraries that really causes the mess. NixOS and others like it still use dynlinking, and yet have successfully immunized themselves from these problems by using content-addressable library versioning. So let's stop with the "dynlinking is better for security updates" claims. It's not. At best it's unrelated; at worst it's actively a ball-and-chain making things harder and slower, and I submit as evidence the OpenSSL versions across these distros.
I agree. Anyone who thinks that dynamic linking makes things easier to update never lived through the DLL-hell of Windows where installing one new product would break a completely unrelated one because it "upgraded" a library.
There is absolutely nothing stopping someone from packaging OpenSSL 1.0.2 for the distros that don't have it included by default. It's trivial to parallel-install different OpenSSL versions, though admittedly with most distros it requires some special attention at build time to build things (such as nginx) against the newer version. You hint at this in your postscript but don't give it the attention it deserves.
Google could even take this on and put up apt/yum repos supporting various versions of various distros; with their resources and infrastructure it wouldn't even be that much work.
Also your argument has nothing to do with security updates, and I'd maintain that dynamic linking does indeed make security updates easier, since you still only have one package to update rather than some potentially large N.[0] Yes, there is still the burden of testing dependent packages, but at least they don't need to be rebuilt.
I would argue it's safer from a security perspective, as it's always possible to forget that a particular statically-linked package needs to be rebuilt. At my org we used to have a few special-flower services that needed a newer version of OpenSSL and were statically linked. This just meant more work after every security disclosure. (Arguably we "did it wrong": instead of static linking, we should have just built and installed the shared lib and linked to it, making it easier to identify and track what needed to be updated.)
[0] At least for the case where a patched update can be released, one that doesn't have ABI changes. If you're running a distro that is using an EOLed version of a library, I'm sympathetic, but ultimately that is a problem larger than the one at hand that you need to solve.
I would be the first to agree that dynamic linking is evil, but how is this related to the issue at hand, which is client-to-server communication? No amount of static linking of Chrome would magically upgrade the SSL version being used to package Debian, for example.
>No amount of static linking of Chrome would magically upgrade the SSL version being used to package Debian, for example.
No, but if Debian packages had statically compiled OpenSSL, you could just update, say, Nginx and get it working with the new protocol Chrome requires, while 100+ packages could be left as-is and use their own, older version of the lib.
Chrome did not previously rely on a dynlinked system library for the negotiation feature. Now, evidently, it does.
In making this transition, Chrome experienced a massive practical regression for many users, precisely because the dynlinked library is not sufficiently up to date for many people.
I'd say dynamic linking is related to the issue at hand, yeah.
What happens if a program is statically linked to OpenSSL 1.0.2, but dynamically linked to a library like libpq (the Postgres client) which is itself linked to an older OpenSSL version?
The respective codebases ("a program", libpq) would use their own linked versions to execute their code.
So yes, two different versions but is that automatically bad? It could be, but remember that exploits, even serious ones, are usually on very discrete vectors. It can be hard to exploit something abstracted away through two application layers.
The Chromium team's blog post about this suggests that > 99% of HTTP/2 connections are _already_ negotiated with ALPN [1]. I imagine all of the CDNs and other large providers already use ALPN, so this mostly affects the long tail of small sites. It seems like the claims in the above blog post are overblown.
They might reach 30% with just Google -- but Reddit, and even more so CNN and ESPN, are insignificant as percentages of web traffic, whether in the US or (even worse) globally.
CNN is not even in the top-50 websites, and ESPN is not even in the top-100 (e.g. in the Alexa index).
Heck, pornsites like xhamster have far better placements than either.
If you're stuck on an OpenSSL version without ALPN, and don't want to deal with upgrading OpenSSL (or rather statically linking your web server against a newer release), consider using caddy[1]. It's an HTTP/2 web server written in Go, which comes with its own TLS library (crypto/tls) and thus doesn't have any dependency on OpenSSL. It even supports automatic deployment of a publicly-trusted certificate through Let's Encrypt.
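(For the curious: this is not Caddy itself, just a minimal sketch of what an OpenSSL-free HTTPS/HTTP/2 server looks like in plain Go, assuming the standard library plus the golang.org/x/crypto/acme/autocert package for Let's Encrypt. The hostname and cache directory are placeholders.)

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// Obtain and renew a Let's Encrypt certificate automatically.
	// "example.com" and the cache directory are placeholders.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example.com"),
		Cache:      autocert.DirCache("/var/lib/autocert"),
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reads "HTTP/2.0" when the client negotiated h2 via ALPN.
		fmt.Fprintf(w, "Hello over %s\n", r.Proto)
	})

	srv := &http.Server{
		Addr:      ":443",
		Handler:   mux,
		TLSConfig: m.TLSConfig(), // crypto/tls config wired to the ACME certificate getter
	}

	// net/http enables HTTP/2 for TLS servers and advertises "h2" via ALPN --
	// no OpenSSL involved anywhere in the stack.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```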
So instead of upgrading a library or the webserver you'd rather replace the webserver completely with a relatively new [1] webserver and redo all the configuration?
That post is more than a year old. Caddy has since had more than 70 contributors and > 1K commits.
Based on my experience, replacing nginx (in my case acting as a fairly simple reverse proxy for Rails) with Caddy wasn't significantly harder than compiling a custom nginx. If you have a complex configuration, YMMV. I'm definitely not suggesting using Caddy as a drop-in replacement for your existing production web server without further testing. That being said, getting rid of an OpenSSL dependency isn't a bad thing.
I haven't seen a list yet. A couple of Go community sites have started using it (for obvious reasons). Without having any particular insight, I'd guess the user base consists mostly of early adopters, Go developers and some side-projects right now.
I'm not a golang user, but I am curious: how hardened is the crypto/tls library? One of the reasons I use OpenSSL is that I don't want a DIY crypto layer; I want something a lot of people have banged on. I'd hate to move to a new server, only to find out it is susceptible to a whole new set of Ye Olde Underrun Bugges that OpenSSL patched ages ago.
It is maintained by Google and the whole Go community, since it is in the standard library. So I think it is fairly secure (probably not as well tested as OpenSSL, but the attack surface is smaller; bugs and security risks are still there, though).
What is important to note is that the library is not hardware accelerated and thus uses a software AES implementation.
So I think it is a bit slower, but you would really have to measure it yourself.
Upgrading your load balancer should not be that hard, though.
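On the "measure it yourself" point: one hedged way to get a ballpark number is Go's built-in benchmarking. A minimal sketch (save as e.g. aesbench_test.go); the 16 KiB buffer is just an assumption approximating a TLS record, and AES-128-GCM is chosen because it's a common TLS bulk cipher:

```go
package aesbench

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"testing"
)

// BenchmarkAESGCMSeal measures bulk AES-128-GCM encryption throughput,
// roughly the hot path of a TLS connection's record layer.
func BenchmarkAESGCMSeal(b *testing.B) {
	key := make([]byte, 16)
	nonce := make([]byte, 12)
	plaintext := make([]byte, 16*1024) // 16 KiB, roughly one TLS record
	rand.Read(key)
	rand.Read(nonce)

	block, err := aes.NewCipher(key)
	if err != nil {
		b.Fatal(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		b.Fatal(err)
	}

	b.SetBytes(int64(len(plaintext)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Seal allocates a fresh output slice each iteration; good enough for a ballpark.
		aead.Seal(nil, nonce, plaintext, nil)
	}
}
```

Run it with `go test -bench . -benchmem`, and compare against `openssl speed -evp aes-128-gcm` on the same box if you want a rough apples-to-apples figure.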
> [...] consider using caddy[1]. It's a HTTP/2 web server written in Go
To be fair, it's not. It simply wraps Go's HTTP/2 server which is mostly developed by Brad Fitzpatrick, which in turn relies on Go's crypto libraries (written in Go and assembly).
As a Go user, I personally don't see any point in using Caddy, as I prefer writing a few hundred lines of Go to serve static pages with all the features I need, with the additional ability of serving more than static pages (you can do fastcgi with Go's stdlib if you need it, I personally prefer writing server code in Go). This is especially true considering that they called their non-paying users dishonest, for not "honestly paying back the value it provides you" while at the same time not sharing a single cent of their revenues with Google who develops the actual HTTP server they wrap (or with any other third party libraries that "provide value" to them).
(old discussion: https://news.ycombinator.com/item?id=11264041)
> To be fair, it's not. It simply wraps Go's HTTP/2 server which is mostly developed by Brad Fitzpatrick, which in turn relies on Go's crypto libraries (written in Go and assembly).
I don't think that's a fair description of Caddy. Of course they're going to use the existing packages in Go; why would they reinvent the wheel? Caddy provides useful things like a config syntax, middleware, automatic certificate deployment, etc., which you'd have to reimplement or take from somewhere else.
> As a Go user, I personally don't see any point in using Caddy, as I prefer writing a few hundred lines of Go to serve static pages with all the features I need, with the additional ability of serving more than static pages (you can do fastcgi with Go's stdlib if you need it, I personally prefer writing server code in Go).
Caddy allows you to write your own extensions/middleware, which should cover all that. In turn, you get to re-use all the existing extensions and other things Caddy takes care of (which, in my experience, you'd inadvertently end up reinventing at some point anyway).
> This is especially true considering that they called their non-paying users dishonest, for not "honestly paying back the value it provides you" while at the same time not sharing a single cent of their revenues with Google who develops the actual HTTP server they wrap (or with any other third party libraries that "provide value" to them).
I don't parse that sentence as "you're dishonest if you don't pay for Caddy", but rather "be honest about the value it provided." It's still a free project, and I don't see the problem here at all.
> Caddy provides useful things like a config syntax, middleware, automatic certificate deployment, etc. which you'd have to reimplement or take from somewhere else.
Caddy is promoted as an HTTP server. If you focus on that feature rather than the cloud of extra stuff that can make life easier for some (and which is orthogonal to what an HTTP server does), then wrapping Go's HTTP server is exactly what Caddy does.
Sure, I'm not saying they should have written their HTTP server from scratch. But the way they're selling it is apparently confusing enough that some people think the Caddy developers are actually writing an HTTP server. net/http (and other Go libraries) does the real work here, and those libraries sure deserve credit (which neither they nor you give in a visible way) if we're going to talk about fairness or paying for honest work.
Let's see about those features.
You don't need any configuration script syntax when you use net/http directly; you can simply marshal/unmarshal your config struct as a JSON file.
I'm using Let's Encrypt and renewing the certificate is as trivial as running a single bash line.
So I still don't see anything in actual Caddy code that I need and can't implement easily on my own.
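To make that concrete, here is a minimal sketch of the "unmarshal a config struct from a JSON file and serve static files with net/http" approach described above; the struct fields, config.json path, and cert/key paths are made up for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// Config holds the few settings this hypothetical static server needs.
type Config struct {
	Listen string `json:"listen"` // e.g. ":443"
	Root   string `json:"root"`   // directory of static files
	Cert   string `json:"cert"`   // TLS certificate path
	Key    string `json:"key"`    // TLS private key path
}

func main() {
	// Unmarshal the config struct from a JSON file -- no config DSL needed.
	raw, err := os.ReadFile("config.json")
	if err != nil {
		log.Fatal(err)
	}
	var cfg Config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}

	// Serve the static directory; net/http negotiates HTTP/2 over TLS automatically.
	http.Handle("/", http.FileServer(http.Dir(cfg.Root)))
	log.Fatal(http.ListenAndServeTLS(cfg.Listen, cfg.Cert, cfg.Key, nil))
}
```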
> Caddy allows you to write your own extensions/middleware, which should cover all that. In turn, you get to re-use all the existing extensions and other things Caddy takes care of (which, in my experience, you'd inadvertently end up reinventing at some point anyway).
Why on earth would I do that when I can write Go directly?
This has been discussed many, many times, and the community has been clear and loud about web frameworks in Go. For most, net/http and the standard library, maybe with a few libs from gorilla, is the best way to write a server.
> I don't parse that sentence as "you're dishonest if you don't pay for Caddy", but rather "be honest about the value it provided." It's still a free project, and I don't see the problem here at all.
No need to weasel around the words now; they have been said. This is not something taken out of context. Go read that whole announcement.
Regardless of how and why you want to sugarcoat it, that is what they said and that is what they wanted to say. And after criticism, they have since removed the word "honestly".
To add to the insult, they're not "honest about the value it provided" them in the first place, since they're not giving any portion of their income back to Google and other developers (they're using blackfriday, etc.).
This was their pitch when they called their free users dishonest:
> There's no such thing as free software. The question is, "Who pays the price?"
Being a FOSS project is one thing. Telling people it is morally wrong to use their FOSS project without paying, and speaking all high and mighty about it, while making profits from others' FOSS code and not paying them back a single cent, is more than questionable. This is in particular true when it is the 3rd-party code (Go's stdlib) which does the real work.
It's amazing how you claim you're not seeing the gigantic problem here.
To summarize, the problem is two-fold:
1) They're trying to morally coerce people into paying for software which they chose to publish under the Apache License.
While this seems contradictory, they try to justify it with: "There's no such thing as free software. The question is, 'Who pays the price?'"
2) Somehow, the moral code they want people to obey doesn't apply to them. This is particularly disturbing if you add the fact that the code which does the heavy lifting (the actual HTTP server that Caddy uses) is written by a 3rd party, with whom they don't share any of their revenue. Of course, the "Who pays the price?" argument also doesn't apply to the other 3rd-party code they use to make profits.
Both have parallels in the brief and colorful history of Elementary OS: they called their free users cheaters while not paying a single cent to the upstream or to Debian. They tried to weasel their way out by saying "cheating the system" doesn't mean "cheaters", etc. (akin to your attempt above).
But the words had been said, and everyone who read the post knew what they said. People were rightfully upset. They edited their blog post to remove the wording "cheating the system".
Though you are 100% correct that Caddy totally leverages Go's stdlib http and http2 servers (along with other OSS projects), the rest of your comment is straight-up FUD. I cannot follow your logic at all.
Does every Rails app owe the core project, gems used, database developers, OS developers, etc.?
How far down the OSS stack do you go?
Where would your career be if you never leveraged a single bit of OSS code to generate profit/value for yourself or your employer?
The strategy of trying to earn a living wage by offering 20% of the value, as Mike Perham suggests, is a sound one. People have used this strategy for as long as the internet has been around. If the Caddy team is wrapping Go to try and offer that 20%, more power to them.
Sure, package maintainers may backport fixes to their old versions. But they need to fully understand all upstream source code and follow all commits. Otherwise they can miss important fixes: following only security fixes for supported branches is not enough. Sometimes project developers fix a bug or security problem in the code but don't flag it as a CVE, because current code usage doesn't trigger it. But code in an old branch could.
That's the current reality. That's how it was for years. CentOS and other RH-based systems in particular are happier to patch than to upgrade. This caused the kernel 2.6.32-573 situation, where lots of patches (over a hundred?) were applied by the distro.
>Sure, package maintainers may backport fixes to their old versions. But they need to fully understand all upstream source code and follow all commits.
Only if they need to do a perfect job. But as history tells us, they are just as content doing a ho-hum job.
I struggled with getting HTTP/2 working with ALPN on Ubuntu 14.04 back in February. After building nginx from source [1] I ended up switching to Docker base images that support OpenSSL 1.0.2 [2].
I actually found your blog post [1] on Google earlier this morning after seeing this thread. It was hugely helpful. Thank you very much for writing it.
Was there a bug or technical limitation that made you switch to a Docker image, or was it just preference/ease of updating?
The main reason I switched to Docker is that there's an nginx image (on Alpine) that supports ALPN. No compiling from source required, which simplified things (in my mind, at least).
Gotcha. As a heads-up, there's a confirmed bug in the HTTP/2 implementation in nginx 1.9.15 and 1.10.0. It seems to affect Safari and iOS (native and browser) markedly.
For me, the first HTTP/2 POST request to a server running the affected version of nginx would never leave the browser and Safari left a less than descriptive "Failed to load resource: Could not connect to the server." console message.
HTTP/2 is relatively bleeding-edge technology at this point. Seems fine for it to depend on a recent version of OpenSSL. It's not like HTTP/1.1 is going anywhere any time soon.
Somebody tell me again about the supposed benefits of dynamic linking, if we have to recompile and redistribute all the dependent packages for every OpenSSL upgrade anyway?
> if we have to recompile and redistribute all the dependent packages for every OpenSSL upgrade anyway?
Because we don't – look at how many updates Debian, Red Hat, etc. have shipped for OpenSSL vs. the number of times you've had to recompile dependent packages.
OpenSSL is also something of an outlier because it's both critically exposed for security and has a somewhat unusual development model, so it's also important to remember that even if it _was_ true that we had to rebuild OpenSSL callers regularly, that's manifestly not true of the hundreds of other libraries where dynamic linking saves time and memory.
It's honestly so mission-critical to interactions on the internet -- and computers are so increasingly assumed to be on the internet -- that it's a bit of a wonder no one, to my knowledge, has tried to roll it into the kernel as a core and necessary function yet.
It's definitely been discussed[1]. I last read about it when investigating the use of splice(2) in nginx; seems there are a few use-cases, but nothing really compelling.
1.0.1->1.0.2 is a major upgrade (it's more like an "openssl 2.x to 3.x" type update). Static linking wouldn't help you, because when you go to recompile you'll most likely discover that compilation fails because the API changed.
For 1.0.1x->1.0.1y upgrades, dynamic linking means you just update the lib and restart running services, no recompilation needed. No random little-used binary hanging around with an old embedded static openssl, hopefully, as they will all pick up the patches as long as they dynamically link to the updated .so
It would be technically possible but then they would end up having to maintain and offer two sets of every dependency, i.e. apache-ssl101 and apache-ssl102. Also, distros usually freeze what they have, so this would only be of interest for backports or perhaps if there was a need to supply old openssl versions even if a newer one is available before the freeze.
The "solution" here is just to wait it out for new major releases of the various distros, and not expect everyone to change core libraries every six months :)
For example, Debian Jessie shipped with 1.0.1 even though 1.0.2 was already out, probably because not every app had been ported to 1.0.2 yet. But their "megafreeze" development model amplifies a 6-month scale problem into a many-year problem. If they had shipped both 1.0.1 and 1.0.2 then apps could have incrementally switched over at their own pace. Naturally, each app would only link against one version of OpenSSL so there'd be no need for apache-ssl101 and apache-ssl102.
Well, Jessie froze on November 5th 2014, and OpenSSL 1.0.2 was released on January 22nd 2015. I do appreciate the "megafreeze" to work out all the quirks for a few months so I can have a stable release that works well for several years.
If you only have one single app to host, I can see it's annoying to be stuck with last year's software. When you have dozens or hundreds of small websites and app backends, you really start to appreciate stable versions. :)
> We knew, since the end of 2015, that this change was coming -- we were given 6 months time to get support for ALPN going, but by the current state of OpenSSL packages that was too little time.
This hints at part of the problem - simply not enough resources. It's not that OpenSSL folks are slow or lazy (absolutely far from it) - but rather they've got tons on their plate already.
One day (hopefully) corporations will recognize that everyone can benefit by helping open source (either providing resources or $funding).
Been delving into hairy problems at work and yup, it's OpenSSL's fault. More specifically, it's the misguided way it handles multithreading and memory management. Yay.
This has definitely been a known issue for a while, frustratingly.
At one point, a team I was working with had to code their own RNG replacement for OpenSSL's; the one in the codebase was behaving badly in a multithreaded environment. I don't know if the change was ever accepted back upstream; if I recall correctly, the initial response was something to the effect of "you're using it wrong" (as if multithreaded applications are some new and fascinating beast, and not increasingly just the way you do it to leverage all the performance a modern computer offers in hardware).
> Upgrading OpenSSL packages isn't a trivial task, either. Since just about every other service links against the OpenSSL libraries, they too should be re-packaged (and tested!) to work against the latest OpenSSL release.
Not necessarily. You could simply statically link the newer OpenSSL library with your web server and everybody is happy.
Chrome was no longer accessing my site via http/2 – turns out that the docker nginx image has OpenSSL 1.0.1k. I switched to the Alpine image and that fixed it – a few nginx.conf changes needed, notably to change the user from www-data to 'nginx'. UID is 100 and GID is 101, which differs from the non-Alpine image.
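If anyone wants to verify a fix like this, here is a small hedged sketch of an ALPN check using Go's crypto/tls (the hostname is a placeholder); it prints which protocol the server picked:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Offer h2 and http/1.1 via ALPN, much like a modern browser would.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// An empty string means the server did not negotiate ALPN at all
	// (e.g. it is built against OpenSSL < 1.0.2).
	fmt.Println("ALPN negotiated:", conn.ConnectionState().NegotiatedProtocol)
}
```

If you have a new enough OpenSSL locally, `openssl s_client -alpn h2 -connect example.com:443` gives roughly the same answer.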
Why disabled? More like "will disable", unless you are on some island in the Pacific -- for the majority of Earth, May 15th will come in more than a few hours.
Potentially, big sites won't be (shouldn't be) relying on system binaries for their web server etc. That would give them the freedom to remove unneeded modules, add custom modules and upgrade at their own speed.