I really love the way Go has gone with static binaries. There are so few use cases where we really need to be distributing libraries separately from apps and linking them dynamically later. It's 2015: static binaries take only a tiny bit of extra memory (relative to how much is available and what we use for other things), and both disk space and bandwidth for downloading binaries are pretty abundant. Why are we still worrying about ABIs!!
[edit: clarified it isn't about the type of linking, it's about separate distribution of libs from apps that use the libs.]
The graves of the untold millions (billions?) of systems compromised by malware injected through statically linked zlib and libjpeg blobs grow restless...
The value of dynamically linking common system libraries isn't performance (well, not anymore), it's that your friendly distro maintainers can do a far, far better job than you at maintaining that software for you as bugs are fixed over time.
Now, Go isn't subject to bugs of the severity that C is, and its package management may be slick enough to make straightforward recompilation the default deployment mode. But if that's true, it's true in spite of the drawbacks of static linkage, not because of them.
Proxying all image decoding to a separate process-per-image isn't likely to be practical anytime soon. Browsers already struggle with too many processes even with process-per-tab.
It's not so crazy. Proxying all video decode to a separate process is, in fact, the deployed architecture on the most popular mobile OS in the world.
Sure, there are performance implications. So you come up with complicated meta-streaming APIs to put as much of the intelligence into that "mediaserver" as possible. And on the other side you come up with complicated buffer sharing architectures and APIs to make sure that the output can go straight to the screen instead of back through the app. Oh, and there are (cough) "security" concerns too (which is the whole point of doing this all in a system process), so you need to drill those bits down not just through userspace but into the driver and out through the HDCP pipeline nonsense below the kernel (which of course needs userspace helpers in most architectures, so up it all comes again through different drill holes...).
Er, rather, it is crazy, but for different reasons than you posit. You could totally make it run fast if you had to.
> - that a dev team is responsible for knowing about security vulns in their dependencies when they happen,
This is the part that falls down though. It sounds sane, but in the real world the "dev team" was a contractor hired for a one-off project six years ago, and the developers themselves were laid off last year when it folded.
Or the "dev team" is an open source website that hasn't been updated in three years, but hey -- they have this nice windows binary for you to pull and use and it still installs and works fine.
This isn't really a dynamic vs static linking problem. This is an ABI problem and affects static libraries too depending how each library was compiled and what its dependencies are.
Go avoids this because the language was designed with simpler ABI requirements, and also doesn't allow binary libraries at all (everything is expected to be source code). This completely avoids the problem, since you only have one compiler producing all the components.
But this isn't always a good thing. There are some good reasons for prebuilt libraries. (I had the unfortunate need to recently compile a huge C++ library on a slow ARM based embedded machine without the help of a cross-compiler. It took about 14 hours to compile. And I had to do it twice because the first time the build flags were wrong.)
For instance, all those cases where you'd like security issues fixed by updating a shared library, rather than finding and recompiling and redistributing every program on your system that may have used that library.
As an end user, I'd like to ensure that it's impossible to change the behavior of multiple unrelated applications by upgrading something on my system. I don't want a program to change until I decide to upgrade that specific program.
I'm not running a server; I'm running a PC. Security problems affect me very little; broken functionality affects me all the goddamn time.
edit: also, because I'm talking about PCs and not servers, I'm running very few apps, and most of them do not interact with the network. Strategies for securing servers do not necessarily make the right tradeoffs for personal machines.
I understand your base viewpoint. Nevertheless, you are still incorrect. Many hundreds if not thousands of executable programs provide you with your experience, whether you're using linux, osx, or windows. Every executable on your system is capable of interacting with outside sources of malevolence, be that the network, usb drives, bluetooth anything, or files that you got via e-mail. And if anything, personal machines need even more vigilance and patchability, as their attack surface is radically larger than your average server's.
The ability to update common components is a boon. Seriously. Not without concerns or flaws, as you correctly note. But overall, it's radically better than the alternative.
It's not a matter of correctness or incorrectness; it's a judgement about the relative costs. I have experienced vastly more inconvenience as a result of breakage caused by well-intentioned updates than I have ever experienced as a result of malevolence. As a result, I habitually disable all auto-update systems and do whatever I can to prevent my machine from trying to update itself. So what's really been gained? I have the stability I want, but the supposed security benefits of the incremental update process are lost. In practice, hacker attacks trouble me about as much as terrorist attacks, while system breakage resulting from library updates is common and hard to fix.
First, you are trying to argue that your personal experience and anecdotes should dictate the policy of a large group of software deployments. I don't think that's true, regardless of your position.
Second, you think that the cost of breakage and inconvenience is around equal to the cost of a serious malevolent attack. I disagree. I'd rather experience an issue a month where a program crashes and I have to install an update, if the alternative is someone getting access to my bank account, credit cards, or even personal, sensitive information that could harm me if taken out of context and made public.
Third, I think the reason you don't experience as many malevolent attacks is precisely because many, many PC users keep their systems updated, and the cost/reward for the attackers is low. If every single PC user had your attitude and no one updated, the first CVE from Windows that was easy to exploit remotely (either via the network, or email + images, or whatever) would send the number of successful attacks skyrocketing.
And finally, my own anecdotal evidence is in stark contrast to yours - I relatively rarely get any sort of library or system stability issues from keeping my systems up-to-date, but I have been attacked before, and it is a much larger inconvenience to get new credit cards, monitor my credit reports, and install counter-measures to prevent it from happening again.
In addition to the flaws this brings when there's any sort of severe bug, it also makes third-party addons to your software much more painful. Dynamic linking lets a user, at runtime, add new libraries and new functionality without having to stop their workflow, recompile the binary, and then start their workflow again. If the program's source is several hundred megabytes, as many user-facing apps tend to be these days, that can lead to a lot of painful waiting. The upsides of being able to dynamically add functionality far outweigh the problems of dynamic versioning.
CPU cycles are cheap. Every major distribution has a build server. Determining dependencies is easy with a proper package manager. Rebuilding all binaries should be as easy as pressing one button, and shouldn't take more than a few hours.
Out of curiosity, an example? The only programs I've compiled that remotely approach an hour are the Linux kernel, GCC, and ATLAS. The first two are so fundamental that there's no point in recompiling for security reasons (if your current version is compromised, you should assume the resulting binary from a recompilation to be compromised). ATLAS is a specialized package that can be replaced in common circumstances with faster-to-compile packages.
IIRC, you can go from zero to a complete desktop Gentoo system in 2-4 hours on an i7 desktop. But even this long is admittedly an annoyance most users would not want to endure on a regular basis for a modest increase in security. The main reason people use Gentoo seems to be configurability, not security.
Consider a build where clang (c++11), gcc5 (c++11), and gcc4 (c++03) are used. gcc5 uses the new abi, gcc4 uses the old abi, clang uses the new abi headers but links to the old abi (since it doesn't yet support the new name mangling scheme).
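In that situation, one coping strategy is to pin everything to one string ABI so the objects at least agree on what a std::string is. A minimal sketch using libstdc++'s _GLIBCXX_USE_CXX11_ABI macro (the file name and build line are just illustrative):

    // abi_probe.cpp -- reports which libstdc++ string ABI this translation
    // unit was built against. Building everything with
    //   -D_GLIBCXX_USE_CXX11_ABI=0
    // forces the old (pre-C++11, copy-on-write string) ABI even under gcc5,
    // which is one way to keep gcc4- and clang-built objects linkable.
    #include <iostream>
    #include <string>

    int main() {
    #if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
        std::cout << "new ABI: std::string is std::__cxx11::basic_string\n";
    #else
        std::cout << "old ABI: the pre-C++11 copy-on-write std::string\n";
    #endif
    }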
You want to eliminate separate compilation? Even Golang isn't doing the equivalent of LTO for all builds anymore. (It changed in 1.2 or thereabouts, I think.)
But no, of course I'm not suggesting that - and that's a good counter-example, thanks. Really, I consider the idea that the OS distributes libraries like libjpeg to be archaic (distros strike me as an awful idea, tbh). That's part of the application, and it's the application's responsibility to ship a fix.
That's theoretically pure but nutso in practice :)
You want to be able to update openssl for the heartbleed bug, and know "it's fixed on my system".
Not "welp, just as soon as i finish getting new binaries from the 500 people who produce the software packages i use, I won't have a problem".
There is no really sane way to accomplish this goal without decoupling binaries and libraries (AFAIK; I'd love to hear any other idea. Things like common intermediate forms for executables would work if you didn't allow inlining, etc.).
:)
Have you seen how many people suck at updating their software when their own code breaks? You really trust various projects shipping all the various programs you use to keep track of all the code other people wrote that their project depends on too?
To take an example from your cloud enabled world (kids these days) that should be easily understandable from your perspective: How many people honestly update their version of rails and Ruby when a new one comes out? And how often does that happen on most projects in practice?
Do you think it's going to be any different if we stop having ABIs?
This is the type of dream that leads directly to a cold dark nightmare if you ever try to implement it further than an HN comment.
> it's the application's responsibility to ship a fix
Well, that's not going to happen in every case. In a world where AAA games are shipped almost completely broken (Batman passim), vendors are part of the problem.
We can go back to shipping software as complete units like we did in the cartridge/tape/floppy days. But doing so on anything connected to a network is extremely dangerous.
Imagine writing, say, Firefox, without dynamic linking. You would have to provide a separate version of Firefox for every version of the OS, and when the next version of the OS is released, your current Firefox would stop working!
It's not about saving memory or disk space (although those are still very significant, especially on mobile). It's about allowing a program's components to evolve separately.
Consider Copy/Paste. This is a system feature, and all apps must agree on where the clipboard is and how to access it. If the system implementation of the clipboard changes (say, in the next OS), then apps that statically linked against the old implementation will not be able to Copy/Paste with those in the new implementation. You also have the reverse problem: apps compiled against the new implementation will break when run on the prior OS.
Dynamic linking is one way to ensure that all apps automatically get the correct clipboard implementation for the system they are running on. Of course this applies to all system features, not just copy/paste.
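To make that concrete, here's a minimal sketch of binding at runtime to a system-provided implementation. The library name "libsysclipboard.so" and the entry point "clipboard_copy" are made up purely for illustration; the point is that the lookup happens on the user's machine, against whatever the OS actually ships:

    // Hypothetical example: bind at runtime to the clipboard implementation
    // installed on this particular system, rather than baking one in.
    // Build with something like: g++ clip.cpp -ldl
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        // Made-up library name; stands in for "whatever the OS provides".
        void* lib = dlopen("libsysclipboard.so", RTLD_NOW);
        if (!lib) {
            std::fprintf(stderr, "no system clipboard: %s\n", dlerror());
            return 1;
        }
        // Made-up entry point, resolved against the installed implementation.
        auto copy = reinterpret_cast<int (*)(const char*)>(
            dlsym(lib, "clipboard_copy"));
        if (copy) copy("hello from a dynamically bound app");
        dlclose(lib);
    }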
I'm not saying you can completely avoid integrating with the OS (after all, you need to communicate with the OS if you want to run). I'm saying that if you use something like libjpeg, you should statically compile it in. Application libraries should be part of the application (system libraries, then, should be part of the system).
For sure, I agree. There's not much cause for bundling a dynamic library with your app. But you still need dynamic linking for the system.
Of course C++ is really terrible at this. Its binary interface is very fragile: you can't add data members or virtual methods without breaking clients, and of course different STLs cannot interoperate. So for the case of C++ specifically, ABI compatibility is lipstick on a pig. Still, the gcc guys deserve props for their abundance of caution, and it does pay dividends for other tools (e.g. gdb) that have to understand the ABI.
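To make "fragile" concrete, here's a minimal sketch with a hypothetical Widget class exported from a shared library; the two members marked "v2" are the kind of innocent-looking additions that break already-compiled clients without any link error:

    // widget.h -- hypothetical class exported from a shared library. Clients
    // compiled against version 1 of this header baked in the object's size
    // and its vtable layout.
    class Widget {
    public:
        virtual ~Widget() {}
        // v2: a new virtual declared before draw() shifts draw()'s vtable
        // slot, so old clients calling draw() through the vtable now jump
        // into resize() instead.
        virtual void resize(int w, int h) { width = w; height = h; }
        virtual void draw() {}
        int width = 0;
        // v2: a new data member changes sizeof(Widget), so old clients that
        // allocate a Widget themselves (new, arrays, embedding it in their
        // own structs) are now working with the wrong object size.
        int height = 0;
    };

    int main() {
        Widget w;
        w.draw();
    }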
For one, on some OS's (like Windows), the syscall interface is unstable, so you really want to use kernel32.dll and friends or you're signing up for pain. Even if you're on an OS with stable syscalls, there are plenty of vendor-specific DLLs you have to work with: GPU drivers, for example.
You don't have to use syscalls directly. If you statically link a library that makes system calls, then a syscall interface change will break your app. But if you wrap all system calls in a dylib, then that dylib insulates you against syscall interface changes.
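A small sketch of the same idea on Linux (where the syscall table happens to be stable, so both lines work today); the difference is where the knowledge of the syscall interface lives:

    #include <cstring>
    #include <sys/syscall.h>   // SYS_write: the raw syscall number
    #include <unistd.h>        // write(): libc's wrapper around it

    int main() {
        const char* msg = "hello\n";

        // Through the shared libc: the syscall number and calling convention
        // live in libc.so, which ships with, and is updated with, the OS.
        write(1, msg, std::strlen(msg));

        // Raw syscall: SYS_write is baked into *this* binary at build time.
        // On an OS that doesn't promise a stable syscall interface (Windows,
        // for instance), this is the line that silently breaks after an OS
        // update, and only a rebuild fixes it.
        syscall(SYS_write, 1, msg, std::strlen(msg));
    }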
Shared libraries still make sense for unix/linux distributions: there are a lot of little binaries and a lot of little processes, and the size of the full libstdc++ (or even one pruned to the binaries' necessities) is high overhead compared to their actual size.
Well, this question rests on a false premise. The ABI didn't break; GCC5 deliberately supports both the old pre-C++11 ABI and the new C++11 ABI in the same library. There are various arguments for and against this approach, but the bottom line is that binary compatibility was explicitly not broken.
The issue here is solely that clang++ doesn't yet implement some of the requirements of the new ABI and so isn't fully compatible with it. This is a very different point.
I don't think there was anything specific about C++11 that would have forced every implementation to change their ABI.
IIRC, one of the most significant was a change that made copy-on-write an invalid (or at least much more difficult) implementation for std::string, and gcc's std::string was CoW.
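The usual illustration of why (a sketch of the argument, not the standard's wording): C++11 tightened the rules on which std::string operations may invalidate pointers and references into the string, and non-const operator[] is not allowed to, yet that is exactly where a copy-on-write string has to do its copy:

    #include <string>

    int main() {
        std::string s1 = "hello";
        std::string s2 = s1;        // a CoW string would share one buffer here
        const char* p = s1.data();  // points at s1's characters

        // Non-const operator[]: a CoW implementation must "unshare" (copy)
        // here, because s2 still references the buffer. After that copy, p
        // no longer points at s1's characters -- i.e. it has been
        // invalidated -- which C++11 forbids operator[] to do. Hence CoW
        // had to go.
        char& c = s1[0];
        c = 'H';
        (void)p;
    }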
Just GCC-produced ones. GCC wanted to add a feature so that developers can add and change functions without changing the library version number. Which, to be quite honest, is a silly feature, as it leads to problems like the one we're seeing right here.
GCC libstdc++ has been backward compatible with no ABI breaks for around a decade. GNU libc for what, two decades.
You don't break such fundamental ABIs on a whim. If the libstdc++ ABI had changed, it would have broken every single bit of C++ code built with GCC4 over the last decade. Including code with transitive dependencies on the old ABI.
Rather than declaring this a "silly feature", I think we might be better off thanking the developers concerned for solving a very hard problem and going the extra mile to ensure backward compatibility. They could have been lazy and said "ABI break! Flag day today: every user of GCC must rebuild the entire world and, by the way, none of your old software will run any longer", but they didn't. So clang hasn't got the new ABI quite right to match GCC; that's a minor niggle. It can be fixed. You can't fix breaking the whole world.
As an example, consider that the Mesa OpenGL C library internally uses libstdc++. If this was built using an incompatible new ABI, it would break every C++ program linked against the old ABI. And vice-versa. That game or expensive visualisation software you bought, that's not going to work any more. Stable ABIs matter.
This is a flag day, though. Even the Fedora developers have said as much [1]. Had they changed the version number to add this feature, it would have caused less pain for developers and distributors, because they wouldn't have to worry about which ABI to link to and who to make angry. They could have been in a position to bundle both, and tools that require delving into the deeper workings of the library, like compilers, would be better able to get the info they need. Had the GCC team done this correctly, they'd have bumped the version number, so people who depend on their software could choose to ship both versions if they wanted to. Instead, C++ software linked against libstdc++ is going to have a period of confusion, because both versions exist in the same library.
The dual ABI is only implemented in libstdc++, so if you want to use the new ABI in a library then all your dependent libraries will then need to use the new ABI so that you're all using the same std::string, etc. From this point of view it's perfectly sensible for a Linux distribution to decide to rebuild everything using the new ABI. You're right that a soname increment would have been one way to do this. But you would still have the problem of transitive dependencies on the old ABI, since you can't load two libstdc++ libraries at the same time.
In a few months' time we'll all forget this was an issue, since it will for all intents and purposes (for the end user and developer) be a completely transparent upgrade.
My understanding is that supporting C++11 requires changes to libstdc++ (Specifically, how std::string and std::list are implemented) which will make the new implementation incompatible with code compiled for older versions of libstdc++.
You are correct that C++11 required changes to certain implementation details of libstdc++. However, you are incorrect about this making the new implementation incompatible with old code compiled against older libstdc++ versions; this is explicitly and intentionally supported via a dual ABI--the "new" libstdc++ will provide both the old and new variants simultaneously.
Sorry, I wasn't trying to imply the new libstdc++ wouldn't work with old code, just explaining to the poster why the new ABI is required in the first place. The changes to libstdc++ which are incompatible are what is causing the dual ABI to be required. And of course, the dual ABI is what allows old programs to continue to use the old ABI with a newer libstdc++ library without problems.
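For anyone wondering how both variants can coexist inside one libstdc++.so: the new std::string and std::list are defined inside an inline namespace (std::__cxx11), so their symbols mangle differently from the old ones and both sets can be exported from the same library. A stripped-down sketch of the trick with made-up names (not the real libstdc++ source):

    // Each translation unit sees exactly one definition of mylib::string,
    // selected by a macro (libstdc++ uses _GLIBCXX_USE_CXX11_ABI for this),
    // but the library itself ships object code for both layouts. The inline
    // namespace makes the two versions mangle to different symbol names, so
    // they never collide inside the one .so.
    #ifndef MYLIB_NEW_ABI
    #define MYLIB_NEW_ABI 1
    #endif

    namespace mylib {
    #if MYLIB_NEW_ABI
        inline namespace __v2 {                  // stand-in for std::__cxx11
            struct string { char small[16]; };   // symbols involving this type
                                                 // carry the __v2 tag
        }
    #else
        struct string { char* shared_buf; };     // old-ABI symbols keep the
                                                 // plain mylib::string name
    #endif
    }

    int main() {
        mylib::string s{};   // resolves to whichever variant this TU selected
        (void)s;
    }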