
I have a mixed opinion about his first point.

There are two basic approaches to take with dependency management.

The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something. Which inevitably leads down the road to everything being locked to something archaic that can't be upgraded easily, and is incompatible with everything else. But with no idea what will break, or how to upgrade. I currently work at a company that went down that path and is now suffering for it.

The second version is upgrade early, upgrade often. This will occasionally lead to problems, but they tend to be temporary and easily fixed. And in the long run, your system will age better. Google is an excellent example of a company that does this.

The post assumes that the first version should be your model. But having seen both up close and personal, my sympathies actually lie with the second.

This is not to say that I'm against reproducible builds. I'm not. But if you want to lock down version numbers for a specific release, have an automated tool supply the right ones for you. And make it trivial to upgrade early, and upgrade often.
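
In the Python world, a sketch of what that kind of tooling can look like (assuming pip-tools; the package names here are just examples):

  # requirements.in -- only direct dependencies, loosely constrained
  flask>=1.0
  requests

  # resolve and pin everything (writes an exact requirements.txt)
  pip-compile requirements.in

  # "upgrade early, upgrade often": re-resolve to the newest allowed versions
  pip-compile --upgrade requirements.in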




> The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something...The second version is upgrade early, upgrade often...Google is an excellent example of a company that does this.

This is misleading. My understanding of Google's internal build systems is that they ruthlessly lock down the version of every single dependency, up to and including the compiler binary itself. They then provide tooling on top of that to make it easier to upgrade those locked down versions regularly.

The core problem is that when your codebase gets to the kind of scale that Google's has, if you can't reproduce the entire universe of your dependencies, there is no way any historical commit of anything will ever build. That makes it difficult to do basic things like maintain release branches or bisect bugs.

> if you want to lock down version numbers for a specific release, have an automated tool supply the right ones for you. And make it trivial to upgrade early, and upgrade often.

This part sounds like a more accurate description of what Google and others do, yes.


A larger problem is that Docker builds are nearly inherently unreproducible.

Downloading package lists, installing system packages, and so on all pull in whatever the upstream repositories happen to serve at build time.

For this reason, Google doesn't use Docker at all.

It writes the OCI images more or less directly. https://github.com/bazelbuild/rules_docker
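
For a flavor of what that looks like (a minimal sketch; the target and base names here are made up, not anyone's actual config):

  load("@io_bazel_rules_docker//container:container.bzl", "container_image")

  container_image(
      name = "app_image",
      base = "@distroless_base//image",  # pulled and pinned by digest elsewhere
      files = [":app"],
      entrypoint = ["/app"],
  )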


Well, also, Docker's SHA hash for each layer is just a random SHA and not a SHA of the actual content. It also includes a timestamp, so Docker builds are not reproducible. But Google actually has Kaniko and Jib, which correct that problem.


Your first point is incorrect. That was true of v1 docker images, but layers have been content-addressable for a while now.

Your second point is absolutely correct - we strip timestamps from everything which tends to confuse folks :)


> true of v1 docker images

Yeah, I missed that one, but basically it was still a pain. But yeah, Google's tools are actually awesome. I mean, I even own a "fork" (or basically a plugin) for sbt which brings Jib to sbt: https://github.com/schmitch/sbt-jib It's just so much easier to build Java/Scala images with Jib than it is with plain Docker.


Thou shalt be able to recognise the Unix Epoch 0


Yes, they have a huge mono repository and tooling to update projects in it to specific versions. You don't really get a choice. You can go home one night with your project on, say, Java 7, and then wake up and find someone has migrated it to Java 8 because they've decided it's Java 8 now.


But that change only happened once all the tests for "your project" passed on Java 8.


This is the crucial difference. Library developers at Google know all their reverse dependencies, and can easily build, test, notify, or fix all of them.

You can't do that with external FOSS libraries. The closest thing we have is deprecation log messages and blog posts with migration guides.


Their external FOSS dependencies are imported into the monorepo and are built from there. So they get to use the same pattern there. Someone who updates the copy of the dependency in the monorepo will see the test failures of their reverse dependencies at that time, before the change is merged to master.

(Yeah they use different version control terminology since their monorepo doesn't use git, but I've translated.)


> The closest thing we have is deprecation log messages and blog posts with migration guides.

Rust has crater, which can at least build/test/notify over a large chunk of the Rust FOSS ecosystem. It won't pick up every project, granted, and I haven't heard of anyone really using it outside of compiler/stdlib development itself, but it's an example of something a bit closer to what Google has.


So now you're saying that I have to write tests?

/s


There is in fact something of a philosophy at Google that it's both your problem and your fault if a dependency upgrades in a way that breaks your project without breaking any of your tests.


For an easy open source example of such tooling, see Pyup.

We use it to do exactly that: pin down every dependency to an exact version, but automatically build and test with newly released versions of each one. (And then merge the upgrade, after fixing any issue.)


Or the original ruby bundler, which locks down exact versions in a `Gemfile.lock`, but lets you easily update to latest version(s) with `bundle update`, which will update the `Gemfile.lock`.

Actually, it goes further: `bundle update` doesn't update just to the "latest version", but to the latest version allowed by your direct or transitive version restrictions.
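
A minimal sketch of that workflow (gems and constraints are just examples):

  # Gemfile -- loose, human-maintained constraints
  source "https://rubygems.org"
  gem "rails", "~> 5.2"
  gem "pg"

  bundle install        # resolves and writes exact versions to Gemfile.lock
  bundle update         # re-resolves everything to the newest allowed versions
  bundle update rails   # or update just one gem (and its dependencies)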

I believe `yarn` ends up working similarly in JS?

To me, this is definitely the best practice pattern for dependency management. You definitely need to ruthlessly lock down the exact versions used, in a file that's checked into the repo -- so all builds will use the exact same versions, whether deployment builds or CI builds or whatever. But you also need tooling that lets you easily update the versions, and change the file recording the exact versions that's in the repo.

I'm not sure how/if you can do that reliably and easily with the sorts of dependencies discussed in the OP or in Dockerfiles in general... but it seems clear to me it's the goal.


I’d imagine this is easier now with Dependabot joining GitHub, being free for all, and implementing a proper CI test system for your repos.

Logically, the next step is supporting such infra for containers. Automate all the mundane regression/security/functionality testing while driving dependency upgrades forward.


> Google is an excellent example of a company that does this.

Which part of Google would that be? My impression is the complete opposite: dependencies are not only locked down but sometimes even maintained internally.


Yeah, but only Google is Google. You are not Google, your code doesn’t need to run at Google scale, and you don’t need to go to their extremes to manage dependencies. They do it because they are forced to; that doesn’t mean it is right or that their way is how it should work for everyone.


I haven't worked at Google, but I have worked at Facebook, and I can say with some confidence that in this respect Facebook is Google too :)

For sure there are tradeoffs for big projects that don't make sense for small ones. But there are also times where big projects need a tool that's "just better" than what small projects need, and once that tool has been built it can make sense for everyone to use it. I think good, strong, convenient version pinning is an example of the latter, when the tools are available. That was the inspiration for the peru tool (https://github.com/buildinspace/peru).


I agree with this, but I think to the extent that such tools are lacking (or at least that the overhead is prohibitively high for smaller projects), the parent is correct. Thanks for tipping me off to peru; hadn't seen that before.


This isn't really a 'wow, look at the crazy stuff Google needs' thing.

Any tiny open source project benefits from a reproducible build (when you come back to it months later) and also new versions (with fixed vulnerabilities, and compatibility with the new thing you're trying to do).


I think this depends on your definition of "reproducible build." If you're talking about builds being bit for bit identical, that might not be worthwhile given the complexities of doing so with most build tools. But if you mean the same versions being used for dependencies, then absolutely.


Yes, I agree completely, as replied to sibling: https://news.ycombinator.com/item?id=20032980


No, not really; a tiny open source project isn’t worth the hassle of making a reproducible build for.


Well, I look at reproducibility as a scale (and incidentally, with an increase in effort as you slide along it, too).

A certain amount of reproducibility - a container, pinned dependencies - gives such large reward for how easy it is to achieve that it absolutely is worth it for a tiny open source project.

Worrying about the possibility of unavailable package registries and revoked signing keys, on the other hand, probably isn't.

It's a trade-off. But you certainly don't need to be Google-scale for some of it to be very worth your while.


If I remember correctly, Angular comes with unpinned dependencies.


The package.json file specifies unpinned dependencies.

The package-lock.json or yarn.lock or similar specifies the pinned dependencies.
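
To make the distinction concrete (hypothetical package and versions):

  // package.json -- a semver range, i.e. unpinned
  "dependencies": { "some-lib": "^1.2.0" }

  # yarn.lock -- the exact version that was actually resolved
  some-lib@^1.2.0:
    version "1.2.7"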


Correct.

But neither the package-lock.json nor the yarn.lock file is part of what you get when you create an Angular project using the Angular CLI, meaning that the versions aren't pinned from Google's side.


A. Nobody wants that. The reality is you're going to be using dozens of other libraries and will welcome wiggle room on how many versions your output includes. You want the list to be consistent and versioned, but you don't need the exact same one as Google.

B. If you really want to know what Angular is being tested with, see https://github.com/angular/angular/blob/master/yarn.lock


> The post assumes that the first version should be your model.

No, it doesn't. It just assumes that you want explicit control over when you upgrade. You can always change your Dockerfile or your requirements.txt and build again when you've tested your software against a new Python version or a new version of a package. You can do that as often as you like, so this is perfectly consistent with "upgrade early, upgrade often". But not specifying exact versions in those files means they can get upgraded automatically when you haven't tested your software with them, which can break something.
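
Concretely, the difference is just something like this (the versions here are only examples):

  # unpinned: you get whatever is current when the image happens to be rebuilt
  FROM python:3
  RUN pip install flask

  # pinned: upgrades happen only when you edit these lines and rebuild
  FROM python:3.7.3-slim
  RUN pip install flask==1.0.3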


From what I've seen, explicit version control does not really work unless there's an organizational force toward timely upgrade. In every company, everyone's busy; nobody has time to look at a service that's been running fine for six months (and risk getting paged "Why did you change it? It suddenly stopped working for us!"). The path of least resistance is to not upgrade anything that's running "fine", and then old versions and their dependencies accumulate, and when you actually have to upgrade it becomes much more painful.

It might work if there's a dedicated team whose mission is to upgrade dependencies for everyone in time, but I haven't seen one in action, so I'm not sure how well it might work out. (Well, unless you count Google as one such example. But Google does Google things.)


Totally agree with you. At my current company we've got a devpi repository where almost everything is ancient. Even trying to add a more modern version for your own service doesn't work, because some people have pinned versions and some people haven't. It's not ideal.

At my last company (a smaller startup) we used to have a Jenkins job which would open a pull request with all of the requirements.txt entries updated to the latest available PyPI versions. That worked pretty well: you always had a pull request open where you could review what was available, it would run the test suite, you could check it out and try it, hit merge if everything looked good, and roll it back if it caused an issue somewhere. It made it easy to trace where things changed, while not being as 'cowboy' as accepting all changes without any review or traceability.


Updating pinned dependencies is a form of paying down tech debt. You want to do it as quickly as you can afford to, but not mandate doing it robotically. If a new python version comes out, great, but mitigating a site outage is not the right time to try it.


> From what I've seen, explicit version control does not really work unless there's an organizational force toward timely upgrade.

I agree. But that doesn't contradict what I was saying. I was not saying that explicit version control always works. I was only saying that it is perfectly compatible with "upgrade early, upgrade often", since the post I was responding to claimed the contrary.

Also, if an organization can't reliably accomplish timely explicit upgrades, I doubt it's going to deal very well with unexpected breakage resulting from an automatic upgrade either.


And then you've just moved the immutability boundary to include the whole Docker image (including the application itself).


Enter Dependabot and its nice integration with GitHub.

You get pinned versions that get updated when needed.


> and risk getting paged "Why did you change it? It suddenly stopped working for us!"

So, the alternative is that it suddenly stops working anyway, but caused by an update becoming available rather than by any explicit action on your part. You'll have more time to react to the problem in this scenario than in the other?


Isn't this dichotomy the whole point of dependency locking? Sometimes, you want to specify that your code requires a specific version. Sometimes, you just want to keep track of the most recent version that the code has been tested with. They are two totally different needs


I learned the hard way to lock down dependencies. Long ago, in a galaxy far far away, I was doing some Java/SQL Server stuff. We upgraded Java (which was badly needed), and immediately all the SQL stuff stopped working, which led to a few days of paralyzed bafflement.

Found out a few days later that the official release of the JVM broke the Microsoft SQL Server drivers, and Oracle had to ship a new version out asap. Meanwhile, we lost days of work.

Of course, that was also the bad old days of bad old configuration management. But I'd never do something like put an arbitrary version of a language driver in a Dockerfile, not for production.

edit: Of course, the main reason we get scared to upgrade is because we often can't easily back out the change. Docker fixes a lot of that.


Yeah, seriously. Having been in the nightmare that is trying to roll back on systems not designed for it, it is my number one (background) priority to get systems to a place where rolling forward isn't the only answer when you need to roll back. I tell you, virtualization really made my life easier; being able to at least take a snapshot prior to major upgrades was a game changer. After that, finally getting Chef to the point where we could rebuild production in a mostly repeatable way (dependency chains can only be tamed so far without increasing infrastructure costs) really made dev work easier. Using Chef's kitchen to trivially build a new VM and know that you're close to production helped reduce dev time by a lot (even if it seemed like the Chef recipes would break in subtle ways every month or so). I've been watching Docker for years now, and am hoping it's hit the tipping point where the benefits outweigh the added complexity. I suspect at my next gig that's lacking reproducibility I'll start with Docker rather than Chef and see how far that takes me.


I think of such virtualization as "rolling back by rolling forward". I can just deploy whatever version I want, when I want. If I don't like what's there, I can deploy a different version that happens to be an earlier version.

Docker by itself isn't enough. But Docker in concert with Kubernetes (or Openshift, in my world) is very, very powerful.
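
For example (the names are made up; the commands are standard kubectl):

  # roll forward to a specific, previously built and tested image
  kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.4.2

  # or fall back to the previous rollout if the new one misbehaves
  kubectl rollout undo deployment/myapp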


> we often can't easily back out the change. Docker fixes a lot of that.

Software can be hard to downgrade. Sometimes dependencies change. Sometimes data models are migrated one way only. Nobody takes the time to properly test them. Among other things.

How Docker, or any other container packaging format for that matter, could possibly help with that I do not understand. It is not the first time I've heard something like this, but I have never been in a situation where the application packaging was part of this particular problem.

Surely starting an old version of some software is neither harder nor easier with Docker than any other way.


Having the old Docker image makes it easy to revert to it. Whether this is easier or harder depends on what other way people were doing deployment.

Sometimes people don't have good deployment processes that automatically back up whatever they deployed. They might even have installed stuff manually, so they don't know how they did it last time. In that case Docker helps. The build might not be reproducible, but at least you have the binary.


The container contains many dependencies. If designed correctly, I can lock down my dependencies out to the port level. It's not that application packaging is itself the problem... it's that I have control over the environment. I can build a Docker image, test it, and know that is exactly the environment I'll be running in, out to the services level. No worrying about someone upgrading JBoss or the OS or openssl or something without me knowing about it or having any control over it.


You have to have tests and you need a CI that will scan your requirements.txt regularly and throw a warning when they're out of date.

Tests are ESSENTIAL. You should be able to bump all your versions, run your tests, and fix the errors. If something gets through broken, then you know where to add a test (before you fix it).

You should pin versions for your sanity. You should also have a process (a weekly process) to deal with updates to dependencies. Dependency Rot will catch up with you!
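
Even a crude check in CI goes a long way; a sketch of the idea (not tied to any particular CI system):

  pip install -r requirements.txt
  # prints anything with a newer release on PyPI; fail the job on
  # non-empty output if you want to be strict about dependency rot
  pip list --outdated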


Maybe this is pedantry, but I have a problem with "if something gets through broken, then you know where to add a test". No. If something gets through broken, your tests have failed to fulfill their purpose.


> The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something. Which inevitably leads down the road to everything being locked to something archaic that can't be upgraded easily, and is incompatible with everything else. But with no idea what will break, or how to upgrade. I currently work at a company that went down that path and is now suffering for it.

If you use a system like nix or guix, this concern is largely obviated.
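
The trick there being that everything comes from one pinned snapshot of the package set; a rough Nix sketch (the commit is a placeholder):

  # shell.nix
  let
    pkgs = import (builtins.fetchTarball {
      url = "https://github.com/NixOS/nixpkgs/archive/<pinned-commit>.tar.gz";
    }) {};
  in pkgs.mkShell {
    buildInputs = [ pkgs.python37 pkgs.imagemagick ];
  }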


At my previous workplace we were using https://greenkeeper.io/ and locking dependencies, which I think may be the perfect compromise between those two approaches. You get pinned dependencies, resulting in stable builds. For every package update (automatically scanned), you get a branch spawned with tests run if you've set up CI. It makes staying up to date easy when it's an easy upgrade (just merge a green branch!), and you get isolated knowledge up front when a dependency has upgraded and you're going to need to budget some time for it.


GitHub just bought Dependabot, so something like this is now available in beta and eventual general availability for all GitHub users.


I think the last option you mentioned is (effectively) the best of both worlds. Lock down dependencies explicitly for the sake of reproducibility, but make it very easy to upgrade (as automatically as possible).


Not sure about Python, but I think it's language-specific. In the JS world, we have "yarn upgrade", which bumps all non-major versions of your dependencies to the latest. It then locks them in until the next time you upgrade something. There are other actions that may also upgrade them, but it's always through a dependency change in some way.

I still think the overall advice is good. We depend on node in our Dockerfile like this:

FROM node:11

If we pinned the version further, we'd probably be even better off, of course, but there's a tiny point to make here. We don't build any Docker images for deployments from dev to production. In fact, the last time a docker build is run is for the development environment. After that it's just carrying the image from dev to qa to stg to prod, and we simply change the configuration file along the way.

This makes it so that we're not re-building again and possibly getting a different set of binaries that were not tested in any of those other environments.
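
Roughly (registry and tag names are made up):

  # build and push once, from the dev pipeline
  docker build -t registry.example.com/myapp:git-abc123 .
  docker push registry.example.com/myapp:git-abc123

  # "promote" the exact same image by retagging it, never rebuilding
  docker tag registry.example.com/myapp:git-abc123 registry.example.com/myapp:qa
  docker push registry.example.com/myapp:qa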


>FROM node:11

Node follows semver and rarely has breaking changes within major versions, so this makes sense to do. The article recommends pinning a minor version of Python because it doesn't follow semver and sometimes has breaking changes within minor versions.


> This will occasionally lead to problems, but they tend to be temporary and easily fixed. And in the long run, your system will age better.

This is the reason I love archlinux. Most of the time, updates are no big deal. Sometimes, they break the system. Rolling release distros force you to deal with each change as it happens, usually with a warning that breakage is about to happen, and a guide for how to quickly deal with it. Once the system is up and running, basic periodic maintenance will keep it that way. In the past, I've used arch machines continuously for 5+ years and they work great and stay up to date.

Compare to intermittent release distros like Ubuntu. Every time I need to update an Ubuntu machine, I end up reinstalling from scratch and configuring from the ground up. There are too many things that need tweaking or simply break when releases are 6-24 months apart. And I'm not convinced that locking down dependencies actually solves anything. Wait six months after an LTS release, when you need to get the latest version of some package. Suddenly, you are rummaging through random blog posts and repos trying to find the updated package. PPAs, Flatpaks, Snaps, oh my! Intermittent distros offload a lot of their responsibility onto users by pretending that package update problems don't exist.


Why do you configure from the ground up? Don't you have your home directory in a separate partition? I keep it around and reuse it in case I have to upgrade the whole system. When you install most of your configuration into user space (except things like SSL and third-party repos) it becomes easier to recover from a full system upgrade. This has worked for me for the past 10 years.


You can always have both in parallel: the first one to test changes and deploy to production, and the second one to try to upgrade your dependencies. Should all the tests pass on the second one, you can then commit the new requirements.txt and other updated package versions.

You can then run the second continuously and warn when it fails and handle whatever happened manually, without having a broken prod.


We do that, but it's hard to CI a whole operating system. Some weeks ago we got bit by a weird ImageMagick bug that was triggered by very specific TIFFs that weren't in our test suite, but that one of our clients used extensively for their product images.

Annoying, and it wouldn't have happened if we were running pinned versions. That said, getting stuck on old software would be worse. However, nothing can ever test something like that fully; there are just too many combinations :(


Yup software will always break. The key is whether you can fix it quickly (fix meaning land commits AND get it in prod) and test for the issue in an automated way in the future.


I used to be strongly in favor of using fixed versions everywhere, but now I also have a mixed opinion. I think a reasonable compromise is to continually update and to promote images. That way you can start with `dev --> prod` when you're small and add more QC layers as things grow.

Something that's even more difficult is dealing with upstream changes. What do you do when `ubuntu:18.04` updates? It's easiest if the upstream is released with a predictable cadence (ex: every Wednesday), but none are AFAIK. That way you could plan a routine where you regularly promote an update through QC.

I'm not sure what to think about event driven release engineering like auto-builds (repo links) on Docker Hub. I think that might be an ok solution for development builds or rebuilds of base containers, but it seems to be abused. I bet there are maintainers of popular images on Docker Hub that are effectively triggering new deployments for downstream projects every time they publish a new image.


Something we have built for our stuff: there's a private repository, and all applications run with minimum-version constraints for their dependencies, so if there's a new version available, everything will update.

Beyond that, we have a daily job that runs the integration tests of all applications with the upstream repository, and if all integration tests end up green, the current set of upstream dependencies gets pushed into the private repository.

It is work to get good enough integration tests working, and at times it can be annoying if a flaky new test in the integration test suite breaks fetching new versions. But on the other hand, it's a pretty safe way to go fast. Usually, this will pull in daily updates and they get distributed over time.

And yes, sometimes it is necessary to set a maximum version constraint due to breaking changes in upstream dependencies. Our workflow requires the creation of a priority ticket when doing that.


In your examples, you're comparing locking down a system and letting it age vs. upgrading and fixing things that are broken.

The difference here is the amount of labor. In your first example, you propose no labor investment, and in the second, a regular investment of labor. Of course that results in the second system being better. If you invested the same labor required by your second example in the first, you would likely have an equally usable and up-to-date system. Similarly, if you didn't invest any labor in the second, you would have a broken, unusable system (since the bugs would never get fixed).

Another way to think about this is that as systems age (bugs are found, exploits, etc), they create technical debt. You need to invest the time to address that debt, or you will suffer later down the road.. same as most tech debt.


My understanding of Google is that they approach dependencies by trying to build one statically linked binary, and they tend to put all of the dependencies together in the same repo (a monorepo). For a large organization, I'm not convinced that's a sane way to go. Individual groups should have the primary responsibility for the modules that they produce.

What we need is:

1. Tools that will automatically upgrade for us and report back when there are failures.

2. Better automated tools for finding where our code is missing test coverage (automated upgrades should break our code when something changes)

3. Better CI/CD rejection support (Something breaks the build, force a revert of the code)


There is a third and, IMO, ideal approach. Instead of pinning individual dependencies, you pin to an entire set of dependencies. You are guaranteed stability, and that dependencies work together, while you are pinned to one set. Non-breaking changes like security fixes can still happen, but major updates don’t. This is less reproducible than locking everything, but we get to reuse fixes among projects and backport them to the shared set. Updating your dependencies is just moving to a new package set. This approach is utilized by NixOS and Stack, but AFAICT nowhere else.
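
In Stack's case, for example, the whole pin is a single line in stack.yaml (the resolver name here is just an example); every dependency then comes from that one Stackage snapshot, and "upgrading" means bumping the resolver:

  # stack.yaml
  resolver: lts-13.19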


Until a simple bugfix breaks your system because you relied on the wrong behavior.


"but they tend to be temporary and easily fixed."

Only if your dependencies do the same; otherwise you have a new version of something while your dependency wants something old. This is especially hard when older dependencies are no longer maintained.

For different environments and languages the problem might be bigger or smaller depending of the culture, strong/weak type coupling, cross compilation or runtime VMs.

This is the reason we've removed all Scala dependencies and now only depend on Java dependencies.


I don’t think your Dockerfile should be downloading your dependencies and building from scratch. Let your build pipeline pull them in, run tests, and pass them verbatim into your container.

Vendor your dependencies if you have to, or maintain a cache, but don’t make your Dockerfile redo all of that work.


We have a Dockerfile for builds, and a Dockerfile that consumes the built artifacts for deployment.
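
The multi-stage variant of that pattern looks roughly like this (tags and paths are made up):

  # build stage: compilers, test deps, etc.
  FROM python:3.7.3 AS build
  COPY . /src
  RUN pip wheel --wheel-dir /wheels -r /src/requirements.txt

  # deploy stage: only the built artifacts
  FROM python:3.7.3-slim
  COPY --from=build /wheels /wheels
  RUN pip install --no-index --find-links=/wheels /wheels/*.whl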


This assumes your build agent is running the same environment as the Docker image. What happens when you’re running Ubuntu and pull in packages that won’t work in Alpine? (Or worse, if it’s Windows/OSX and Linux.)


If you have the need to go back to older versions of your code and patch them, you want to lock things down at least at the point of release.


> Google is an excellent example of a company that does this.

Chromium is still using Python 2 in its build system.


Python 2 is mature and stable, but not obsolete. It does still receive bugfixes as needed; the latest was version 2.7.16, released 2019-03-04. Maintenance will continue until 2020-01-01.


Sure, I still wouldn't call that "upgrade early, upgrade often".


Your last option is exactly what the author recommends -- pip-tools.



