Hacker News

> In a recent work discussion, I came across an argument that didn’t sound quite right. The claim was that we needed to set up containers in our developer machines in order to run tests against a modern glibc

You're right, this is wrong. You need to set up containers in your developer machines to test against *everything*. You need to take the exact environment that you, the developer, are using to build and test the app, and reproduce that in production. Not just glibc, but the whole gosh darn filesystem, the environment variables, and anything else capturable in the container. (That is, if you care about your app working correctly in production....)

> Consider this: how do the developers of glibc test their changes? glibc has existed for much longer than containers have. And before containers existed, they surely weren’t testing glibc changes by installing modified versions of the library over the system-wide one and YOLOing it.

No, they were just developing against one version of glibc, for each major release of their product.

Back in the day, software developers took backwards compatibility incredibly seriously. You were fairly sure that if the user could run your app with at least the same version of glibc as you, they could run your app. So developers would pick one old-ass version of glibc to test with, to ensure as many customers as possible could run their app.

Eventually a new version of the product would require a breaking change in glibc, and the old product would fall out of support, but until then they had to keep around something to test the old version against for fixes.

Either they'd develop on an old-ass system, or have a "test machine" with that old-ass version of glibc, or use chroot. You know, the thing that lets you execute binaries in a fake root filesystem, including with a completely different glibc, and everything else? Yeah. We had fake containers before containers. Tar up a filesystem, copy your app into it, run it with a chroot wrapper.
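That workflow is only a few commands. A minimal sketch, assuming a hypothetical `old-rootfs.tar` (a tarball of a distro with the old glibc) and a hypothetical `./myapp` binary; `chroot` itself needs root:

```shell
# Fake-container testing with chroot. "old-rootfs.tar" is a made-up
# tarball of a distro shipping the old glibc you want to test against.
mkdir -p /tmp/oldroot
tar -xf old-rootfs.tar -C /tmp/oldroot

# Copy the app (and its tests) into the fake root.
cp ./myapp /tmp/oldroot/usr/local/bin/myapp

# Run inside the fake root: myapp now resolves /lib, /etc, and the old
# glibc from /tmp/oldroot instead of from the host system.
sudo chroot /tmp/oldroot /usr/local/bin/myapp --run-tests
```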

You don't have to wonder when you should use containers or not; I'll make it very simple for you:

  Q: Are you developing and testing an application on 
     your laptop, and then running it ("in production") 
     on a completely different machine, and is it
     important that it works as you expect?
  
  A: Use containers.
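In concrete terms, "use containers" here means the image you test is the image you ship. A sketch, with made-up image and registry names:

```shell
# Build one image from one Dockerfile, test inside it, then ship that
# exact image; production pulls the same bytes you tested against.
docker build -t myapp:1.0 .
docker run --rm myapp:1.0 ./run-tests
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```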

(p.s. that system you thought up? i worked for a company 20 years ago that took RPM and Frankenstein'd it to do what you describe. even did immutable versioned config files, data files, etc not just binaries. it was really cool at the time. they use containers now. soooo much less maintenance hassle.)





"It works on my machine so we ship my machine" certainly works, but it's not the only solution, and not even always the best one. You can develop software for Ubuntu servers just fine while running Ubuntu desktop, as long as you stick to the same major version.

Or, if you deploy on Windows, you don't even need containers to validate basic system functionality. The whole glibc mess is one of the biggest selling points of Windows Server to me, and I don't like Windows Server at all.

Containers don't even give you guarantees about your production environment. I've seen more than a few cases where a deployment that worked locally failed in production because of a missing instruction set on the production machine. Containers also don't solve your firewall's nftables rules being different from prod's, and they don't solve for eBPF programs running on your system either. If you're troubleshooting down at the glibc level, containers aren't enough anymore to guarantee the things you want to guarantee, because you still share things like the kernel. You can maybe get away with a VM, assuming your dev machine's CPU features are a superset of your production machines' (or you limit yourself to a very restrictive subset, if you don't know what CPUs your code will run on). And god forbid you need access to special hardware; your only remaining options then are to mess with PCI passthrough or to take a production server and run your tests on that.
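The missing-instruction-set failure is cheap to check for up front on Linux, since the kernel exposes the CPU's feature flags. A sketch, using AVX2 as an arbitrary example flag:

```shell
# /proc/cpuinfo lists the instruction-set extensions this CPU supports.
# If the production host lacks a flag your build assumed (e.g. via
# -march=native on a newer dev machine), the binary dies with SIGILL.
if grep -qw avx2 /proc/cpuinfo; then
    echo "avx2 available"
else
    echo "avx2 missing: build for a more conservative -march"
fi
```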

Usually, portable software doesn't really need that kind of verification. What works for me in practice is "statically compile for the oldest glibc you want to support and hope for the best" or "develop for the distro you're running on your desktop and don't run anything newer than the oldest distro you still need to support".
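On a glibc system you can check what that floor actually is: the dynamic symbol table records the GLIBC symbol version each imported symbol needs, and the highest one listed is effectively the oldest glibc the binary will run on. A sketch (`./myapp` is a placeholder; `/bin/ls` works for a quick look):

```shell
# List the GLIBC symbol versions a binary requires; the highest version
# printed is roughly the oldest glibc release that can run the binary.
objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -u -V | tail -n 1
```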

Alternatively, you can just switch to a language like C# or Java or Python or PHP or whatever other language deals with the glibc mess for you. It's not always an option, but avoiding native code when you can sure makes testing and deployment a whole lot easier.


My intentionally inflammatory take: containers are for people who don't know how to write portable software that doesn't depend on random details of their environment they've let leak in.

Containers are a great testing tool. When running your CI pipeline, absolutely, run it in a container environment that looks as close as possible to production. That will help shake out those non-portable, environment-leaking things that do end up in your software sometimes.

And for production itself, sure, run things in containers for isolation, ease of bin-packing, sort of as a poor-man's virtual machine. (Some security benefits, but not as many as some people believe.)

The funny thing is that people who advocate container use in order to duplicate the runtime environment always end up reluctant to update the runtime environment itself. Because then you just have the same problem again: you're using containers because you don't want to have to care about portability and environment leakage... so then you end up with something that doesn't work right when you do more than trivial upgrades to your runtime environment.

When I first started doing backend work ~15 years ago, I came from a background writing desktop and embedded software. I thought backend was some mysterious, mystical thing, but of course it turned out not to be. In some ways backend development is easier than desktop development, where your software has to run well in every random environment imaginable. And when a user reports an issue, you sometimes have to work to duplicate their environment as closely as possible in order to reproduce it. But backend apps mostly run in the same environment all the time, and it's an environment you have access to for debugging. (Certainly backend dev comes with its own new challenges; it's not easier on all axes.)


Remember why containers were invented, though. A PaaS provider wanted customers to be able to run any app without a lot of hassle. So they made a way for the customer to essentially ship their computer to the PaaS, so the apps could run on computers that were never set up to run those apps.

In that situation, where any customer could have any kind of environment, it's much less effort for both the provider and the customer to just duplicate an environment, rather than spend time trying to make portable or compatible environments. And it's more accurate. As you say, a lot of people can't or won't write portable software. Many devs use Macs, and then ship their apps to Linux... more than a few inconsistencies, to say nothing of packages, versions, system files. So if we want to do more work, faster, more reliably, with people who don't write portable code, on incompatible systems, containers are the best possible option.

And it's great for frontend / GUI apps. I use Alpine Linux for a desktop, because I'm a moron. But that means there are many GUI apps that just won't run on my system, for many reasons. Docker and Flatpak allow me to run those GUI apps, like 1Password and FreeCAD, on my wacky non-portable system. It's a boon for me, the user, and for the developers/vendors. (Alternatives like AppImage don't work on musl systems.)
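For the Docker route under X11, this usually comes down to sharing the host's X socket with the container. A sketch, with a made-up image name (Wayland setups differ, and stricter X servers may need `xhost` adjustments):

```shell
# Share the host's X11 socket and DISPLAY with the container so a
# glibc-based GUI app can render on a musl host.
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  example/freecad:latest
```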



