
C/C++'s package management story is "defer it to distributions". Personally I much prefer `apt-get install libfoo-dev` to 1) each language inventing its own incompatible system, and 2) each developer self-publishing, so there's little to no safety or accountability when adding a new dependency.



So, to be clear, I also tend to prefer distro-supported packaging. However. Forcing everything to go through the distro means that you massively limit what's available by raising the barriers to entry, you slow down the rollout of new versions (ranging from Arch's "as soon as a packager gets to it" to CentOS's "the new major version will be available in 5 years"), and you lock yourself into each distro's packages and make portability a pain (Ubuntu ships libfoo-dev 1.2, Arch ships libfoo-dev 1.3, CentOS has libfoo-devel 0.6, and Debian doesn't package it at all). When distro packages work, they're great, but they do have shortcomings.


> limit what's available by raising the barriers to entry

But that’s what you would have to do yourself anyway. You can’t use the freshest upstream version of everything, since they don’t all work together. So some versions you’ll have to hold off on, and some others might require minor patching. But this is exactly what distro maintainers do.


That's a fair point. I live in the embedded world most of the time, and using third-party libraries in that space is not always so easy :)


> Personally I much prefer `apt-get install libfoo-dev`

So, what do you do when different projects need different versions of libfoo? Right, you download and build it locally inside the project tree and fiddle with makefiles to link to the local version rather than the one installed by apt-get. So basically you do your own dependency management. Good luck with that.
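For the record, that usually ends up looking something like this (a rough sketch only; the URL, version, and paths are placeholders, and it assumes libfoo uses a standard autotools-style build):

    # vendor the exact version this project needs under third_party/
    mkdir -p third_party
    curl -LO https://example.org/libfoo-1.2.tar.gz   # placeholder URL
    tar xf libfoo-1.2.tar.gz -C third_party
    cd third_party/libfoo-1.2
    ./configure --prefix="$PWD/_install"             # install into the project tree, not /usr
    make && make install
    cd ../..

    # then point the project's own build at the local copy instead of the system one:
    #   CFLAGS  += -Ithird_party/libfoo-1.2/_install/include
    #   LDFLAGS += -Lthird_party/libfoo-1.2/_install/lib -lfoo

Multiply that by every dependency that needs the same treatment, and then by their dependencies.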


> So, what do you do when different projects need different versions of libfoo?

Well, typically I would rely on the distro to compile other people's projects as well, and then the distro maintainers would sort that out. That may include carrying a patch so that older projects work with newer versions of a library; or, if the library maintainers made breaking changes, it might mean maintaining multiple versions of the library.

Obviously if I myself need a newer version than the distro has, I may need to work around it somehow: I might have to build my own newer package, or poke the distro into updating their version of the library.
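"Build my own newer package" on a Debian-ish system is usually a quick backport, roughly like this (a sketch; it assumes a deb-src entry for a newer suite is already configured, and the package and file names are placeholders):

    sudo apt-get build-dep libfoo               # install the build dependencies
    apt-get source libfoo/unstable              # fetch the newer packaging and source
    cd libfoo-1.3
    dpkg-buildpackage -us -uc -b                # build unsigned binary packages
    sudo dpkg -i ../libfoo-dev_1.3-1_amd64.deb  # install the rebuilt package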

I mean, honestly, I don't build a huge number of external projects (for the reason listed above), so I've never really run into the issue you describe. It seems to me that the "language-specific package manager" thing is either a side-effect of wanting basically the same dev environment on Windows and macOS as on Linux, or of people just not being familiar with distributions and the value they provide.


> So, what do you do when different projects need different versions of libfoo?

That’s an untenable situation. The package which depends on the older version of libfoo is either dead, in which case you should stop using it, or it will soon be updated to use the newer version of libfoo, in which case you’ll have to wait for a newer release. This is what release management is.


> The package which depends on the older version of libfoo is either dead, in which case you should stop using it, or it will soon be updated to use the newer version of libfoo

So you are suggesting that every time libfoo bumps its version I have to update the dependencies on all of my projects to use the latest, find and fix all the incompatibilities, test, release and deploy a new version? Seriously?


Yeah, I mean what's the alternative? Your code will bit rot if you don't keep up. You don't have a living software project if you don't do this.

You should read the release notes of the new version of your dependency, fix any obvious issues from that, see if your tests pass, and wait for bug reports to roll in for non-obvious things not caught by automated tests. Ideally you should do this before the new release of the dependency hits the repos of the distros most of your users use, so that it's only the enthusiasts that are hit by unexpected bugs.

Even if you could delay and batch the work together every several releases of a dependency, you're still doing the same amount of work, and it's usually simpler to keep up bit by bit than all at once.

One trick is to not use dependencies that have constant churn and frequent backward-incompatible changes, and to avoid using newly introduced features until it's clear they've stabilised. When you choose dependencies, you're choosing how much work you're signing up for, so choose wisely.

Of course you could go the alternate route and ship all dependencies bundled - but that is a way to ignore technical debt and accidentally end up with a dead project.

Also, your project should not demand a specific version of `libfoo`. If `libfoo` follows semver and a minor release breaks your project, that is a bug in `libfoo`. Deployments of production software should pin versions, but not your project itself.
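Concretely, the distinction looks something like this (a sketch with placeholder names and versions; the runtime package name and the pkg-config module name depend on what libfoo actually ships):

    # in the project's own build: require a floor, not an exact version
    pkg-config --atleast-version=1.2 libfoo || { echo "need libfoo >= 1.2" >&2; exit 1; }

    # in the deployment (image, provisioning script, etc.): pin the exact tested build
    apt-get install libfoo1=1.2.3-1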


> You don't have a living software project if you don't do this.

What I might have is a working piece of software that is an important part of company infrastructure. For mission-critical software, reliability is a much more important metric than being current. Unless there is some really compelling reason to update something, it should not and will not get updated. There are mission-critical services out there running on software that hasn't been changed in decades, and that is a good thing.

> Also, your project should not demand a specific version of `libfoo`. If `libfoo` follows semver and a minor release breaks your project, that is a bug in `libfoo`.

Who the fuck cares if it is their bug or not? I need my service working, not to play blame games. And if I have a well-tested version deployed, why the hell would I want to fuck with that? And if I have tested my service linked against libfoo 1.6.2.13.whatever2, I had better make sure that this is the version I have everywhere and that any new deployments I do come with this exact version.

But if I start a new project, I might want to use libfoo 3.14.15.whocares4 because it offers features X, Y and Z that I want to use.


Exactly. This is what we signed up for when we release software and commit to keeping it maintained. This is what we do.

If you instead just like writing software and throwing it over the wall/to the winds, you are an academic in an ivory tower, and have no connection to your users in the real world.


> you are an academic in an ivory tower, and have no connection to your users in the real world

Quite the opposite, actually. My responsibility is to my users. And that responsibility is to keep the software as stable as possible. So the only time I will consider upgrading my dependencies is when reliability requires it. If libfoo fixes some critical bug that affects my project, yes, maybe I should upgrade (although I am running the risk of introducing other regressions). If libfoo authors officially pronounce end of life for the version of libfoo I am using, maybe I should consider upgrading, even though it is safer to fork libfoo and maintain the well tested version myself. But it is irresponsible to introduce risk simply to keep up with the version drift. So if my project is used for anything important, I should strive to never upgrade anything unless I absolutely must.


> I should strive to never upgrade anything unless I absolutely must.

It seems that the choice is whether to live on the slightly-bleeding edge (as determined by “stable” releases, etc), or to live on the edge of end-of-life, always scrambling to rewrite things when the latest dependency library is being officially obsoleted. I advocate doing the former, while you seem to prefer the latter.

The problems with the former approach are obvious (and widely seen), but there are two problems with the latter approach, too. Firstly, you are always using very old software that doesn’t use the latest techniques, or even reasonable ones. This can even amount to a bug: take MD5, for example, which, while better than what preceded it, was treated by much software as a be-all-and-end-all hashing algorithm; that turned out later to be a mistake. The other problem is more subtle (and was more common in older times): it’s too easy to be seduced into freezing your own dependencies, even though they are officially unsupported and end-of-lifed. The rationalizations are numerous: “It’s stable, well-tested software”, “We can backport fixes ourselves, since there won’t be many bugs.” But of course, in doing this, you condemn your own software to a slow death.

One might think that the latter is the hard-nosed, pragmatic and responsible approach, but I think this is confusing something painful with something useful. The former approach is more work and more integration pain, while the latter is almost no work, since saying “no” to upgrades is easy. It feels useful because working with an old system is painful, but one is fooling oneself into doing the easy thing while thinking it is the hard thing.

The other reason one might prefer the former approach to the latter is that the fast feedback cycles speed up software development in general. It’s not a direct benefit; it’s more of an environmental thing that benefits the whole ecosystem. The latter approach instead slows down the feedback cycles of every affected software package.

Of course, having good test coverage will also help enormously with doing the former approach.


> but I think this is confusing something painful with something useful

There is software out there that absolutely cannot break. Like "if this breaks, people will die". Medical software, power plant software, air traffic control software are the obvious examples, but even trading and finance software falls into this category: if some hedge fund somewhere goes bankrupt because of a software bug, real people suffer.

It doesn't matter how boring, inefficient and outdated these systems are. It doesn't matter how much pain they are to maintain and integrate. These are systems you do not fuck with. A lot of the time, the people who maintain them don't even fix known bugs, in order to avoid introducing new ones and to avoid the rigorous compliance processes that have to be followed for every release. Updating the software just to bump some related library to the latest version is simply not a thing in this context.

I am not working on anything like this. I work on a lighting automation system. If I fuck up my release, nobody is going to die (well, most likely, there are some scenarios), but if I fuck up sufficiently, a lot of people will be incredibly annoyed. So I have every version of every dependency frozen. All the way down the dependency tree. I do check for updates quite often and I make some effort to keep some things current, but some upgrades are simply too invasive to allow.
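One way to get that kind of freeze in a C/C++ project, sketched here with placeholder names and a placeholder URL, is to vendor each dependency as a git submodule pinned to an exact release tag:

    git submodule add https://example.org/libfoo.git third_party/libfoo
    git -C third_party/libfoo checkout v1.2.3      # check out the exact tested release
    git add .gitmodules third_party/libfoo
    git commit -m "Pin libfoo at v1.2.3"

The superproject then records the exact commit, and git submodule update --init --recursive reproduces it on every build machine.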


> I am not working on anything like this. I work on a lighting automation system. If I fuck up my release, nobody is going to die (well, most likely, there are some scenarios), but if I fuck up sufficiently, a lot of people will be incredibly annoyed. So I have every version of every dependency frozen. All the way down the dependency tree.

Most people’s systems are not so special that they absolutely need to do this, but it feeds one’s ego to imagine that they are. And, as I said, it feels more painful, but it’s actually easier to do this – i.e. be a hardass about new versions – than to do the legitimately hard job of integrating software and having good test coverage. It feeds the ego and feels useful and hard, but it’s actually easy; it’s no wonder it’s so very, very easy to fall into this trap. And once you’ve fallen in by lagging behind in this way, it’s even harder to climb out, since that would mean upgrading everything even faster to catch up. If you’re mostly up to date, you can afford to let a single dependency lag behind for a while to avoid some specific problem. But if you’re running nothing but old, unsupported stuff and a critical security bug turns up with no patch for your version, because the design was inherently buggy, you’re utterly hosed. You have no safety margin.


This is the danger of living on the edge of EoL, as you called it. Sooner or later you are forced to update a package, usually on short notice at the most inconvenient time, because of some zero-day vulnerability found in one of your dependencies. Then the new version no longer supports the old version of one of its subdependencies, which you have also pinned, so you have to update that subdependency too. And to upgrade that package you have to update yet another subdependency, and so on.

Suddenly a small security patch forces you to replace essentially your whole stack. If your tests flag an error, you have no idea which of the updated subpackages caused it, because you have replaced all of them. Eventually you accumulate so much tech debt that it's tempting to cherry-pick the security patches into your pinned packages instead of updating them to mainline, sucking you even deeper into the tech-debt trap.

Integrating often means each integration is smaller, less risky, and easier to diagnose when something fails. Of course this assumes you have good automated test coverage, which I assume you do if the systems are as life-critical as the parent claims them to be.

There's also a big difference between embedded and connected systems here. Embedded SW usually gets flashed once and then just does whatever it is supposed to do. Such SW really is "done"; there is no need to maintain it or its dependencies, because it's not connected to the internet, so zero-days and other vulnerabilities are not really a thing.



