
The situation with packages and dependency hell today is horrendous, particularly if you work in a highly dynamic environment like web development.

I want to illustrate this with a detailed example of something I did just the other day, when I set up the structure for a new single page web application. Bear with me, this is leading up to the point at the end of this post.

To build the front-end, I wanted to use these four tools:

- jQuery (a JavaScript library)

- Knockout (another JavaScript library)

- SASS (a preprocessor to generate CSS)

- Jasmine (a JavaScript library/test framework)

Notice that each of these directly affects how I write my code. You can install any of them quite happily on its own, with no dependencies on any other tool or library. They are all actively maintained, but if what you’ve got works and does what you need then generally there is no need to update them to newer versions all the time either. In short, they are excellent tools: they do a useful job so I don’t have to reinvent the wheel, and they are stable and dependable.

In contrast, I’m pretty cynical about a lot of the bloated tools and frameworks and dependencies in today’s web development industry. But after watching a video[1] by Steven Sanderson (the creator of Knockout) in which he set up all kinds of goodies for a large single page application in just a few minutes, I wondered whether I was getting left behind and decided to force myself to do things the trendy way.

About five hours later, I had installed or reinstalled:

- 2 programming languages (Node and Ruby)

- 3 package managers (npm with Node, gem with Ruby, and Bower)

- 1 scaffolding tool (Yeoman) and various “generator” packages

- 2 tools that exist only to run other software (Gulp to run the development tasks, Karma to run the test suite) and numerous additional packages for each of these so they know how to interact with everything else

- 3 different copies of the same library (RequireJS) within my single project’s source tree, one installed via npm and two more via Bower, just to use something resembling modular design in JavaScript.

And this lot in turn made some undeclared assumptions about other things that would be installed on my system, such as an entire Microsoft Visual C++ compiler set-up. (Did I mention I’m running on Windows?)

I discovered a number of complete failures along the way. Perhaps the worst was the one that forced me to completely uninstall my existing copy of Node and npm, which I’d installed only about three months earlier. The scaffolding tool, whose only purpose is to automate the hassle of installing lots of packages and templates, completely failed to install numerous packages and templates using my previous version of Node and npm; and npm itself, whose only purpose is to install and update software, couldn’t update Node and npm themselves on a Windows system.

Then I uninstalled and reinstalled Node/npm again, because it turns out that using 64-bit software on a 64-bit Windows system is silly, and using 32-bit Node/npm is much more widely compatible when its packages start borrowing your Visual C++ compiler to rebuild some dependencies for you. Once you’ve found the correct environment variable to set so it knows which version of VC++ you’ve actually got, that is.

I have absolutely no idea how this constitutes progress. It’s clear that many of these modern tools are only effective/efficient/useful at all on Linux platforms. It’s not clear that they would save significant time even then, compared to just downloading the latest release of the tools I actually wanted (there were only four of those, remember, or five if you count one instance of RequireJS).

And here’s the big irony of the whole situation. The only useful things these tools actually did, when all was said and done, were:

- Install a given package within the local directory tree for my project, with certain version constraints.

- Recursively install any dependent packages the same way.

That’s it. There is no more.
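
Stripped of everything else, that is just a recursive walk over a dependency graph. Here is a minimal sketch in JavaScript, where fetchManifest() and downloadTo() are hypothetical stand-ins for “ask the package index for the best matching version” and “unpack the files into a directory”, not real npm or Bower APIs:

    // Install one package locally, honouring a version constraint, then
    // recursively install whatever it depends on in the same way.
    async function install(name, versionRange, targetDir) {
      const manifest = await fetchManifest(name, versionRange);        // hypothetical helper
      await downloadTo(manifest.archiveUrl, `${targetDir}/${name}`);   // hypothetical helper
      for (const [dep, range] of Object.entries(manifest.dependencies || {})) {
        await install(dep, range, `${targetDir}/${name}/deps`);
      }
    }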

The only things we need to solve the current mess are standardised, cross-platform ways to:

- find authoritative package repositories and determine which packages they offer

- determine which platforms/operating systems are supported by each package

- determine the available version(s) of each package on each platform, which versions are compatible for client code, and what the breaking changes are between any given pair of versions

- indicate the package/version dependencies for a given package on each platform it supports

- install and update packages, either locally in a particular “virtual world” or (optionally!) globally to provide a default for the whole host system.

This requires each platform/operating system to support the concept of the virtual world and to provide a single package management tool for installing/updating/uninstalling, and it requires each package’s project and each package repository to provide information about versions, compatibility and dependencies in a standard format.
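
To make that concrete, a standard package description might look something like the sketch below. The format and every field name in it are hypothetical, invented purely for illustration; the point is that each piece of information is something existing tools already record in one form or another.

    {
      "name": "example-widget",
      "versions": {
        "2.1.0": {
          "platforms": ["linux", "osx", "windows"],
          "compatibleWith": ">=2.0.0 <3.0.0",
          "breakingChangesSince": { "1.x": "the init() entry point was renamed" },
          "dependencies": { "requirejs": ">=2.1.0 <3.0.0" }
        }
      }
    }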

As far as I can see, exactly none of this is harder than problems we are already solving numerous different ways. The only difference is that in my ideal world, the people who make the operating systems consider lightweight virtualisation to be a standard feature and provide a corresponding universal package manager as a standard part of the OS user interface, and everyone talks to each other and consolidates/standardises instead of always pushing to be first to reinvent another spoke in one of the wheels.

We built the Internet, the greatest communication and education tool in the history of the human race. Surely we can solve package management.

[1] http://blog.stevensanderson.com/2014/06/11/architecting-larg...




Sure we can. I imagine first we'll need the ISO to create a project to begin the standardization of software management. Then there'll probably be a few years of research to identify all the kinds of software, platforms they run on, interoperability issues, levels of interdependencies, release methodologies, configuration & deployment models, maintenance cycles, and expected use cases. Then the ISO can create an overly-complex standard that nobody wants to implement. Finally somebody will decide it's easier to just create smaller package managers for each kind of software and intended use case and write layers of glue to make them work together.

So now that we know what to do, the big question is: who's going to spend the next 5-10 years of their life on that project?


> So now that we know what to do, the big question is: who's going to spend the next 5-10 years of their life on that project?

But this is my point: We are already solving all of those problems, and doing almost all of the work I suggested.

All of the main package managers recognise versions and dependencies in some form. Of course the model might not be perfect, but within the scope of each set of packages, it is demonstrably useful, because many of us are using it every day.

All of the people contributing packages to centralised package repositories for use with npm and gem and pip and friends are already using version control, and they are already adding files to their projects that specify the dependencies for the package manager used to install their project. In many cases they do this for multiple package managers, so the project can be installed in multiple different ways, which is effectively just duplicated effort for no real benefit.
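
For example, a project published for both npm and Bower typically ships two near-identical manifests; the version number here is just illustrative:

    package.json (read by npm):
      { "name": "my-app", "dependencies": { "requirejs": "~2.1.0" } }

    bower.json (read by Bower):
      { "name": "my-app", "dependencies": { "requirejs": "~2.1.0" } }

Same information, two file formats, two sets of quirks, and two places to forget to update.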

All major operating systems already come with some form of package management, though to me this is the biggest weak point at the moment. There are varying degrees of openness to third parties, and there is essentially no common ground across platforms except where a few related *nix distributions can use the same package format.

All major operating systems also support virtualisation to varying degrees, though again there is plenty of scope for improvement. I’ve suggested before that it would be in the interests of those building operating systems to make this kind of isolation routine for other reasons as well. However, even if full virtual machine level isolation is too heavyweight for convenient use today, it usually suffices to install the contents of packages locally within a given location in the file system and to set up any environment accordingly, and again numerous package managers already do these things in their own ways.

There is no need for multi-year ISO standardisation processes, and there is no need to have everything in the universe work the same way. We’re talking about tools that walk a simple graph structure, download some files, and put them somewhere on a disk, a process I could have done manually for the project I described before in about 10 minutes. A simple, consolidated version of the best tools we have today would already be sufficient to solve many real world problems and would provide a much better foundation for solving any harder problems later, and moving to such a consolidated, standardised model would be in the interests of just about everyone.


The problem you cited before is supposed to be easy, but in practice, software development that uses 3rd-party software is destined to run into conflicts. You found portability issues, platform-dependent issues, environment/configuration issues, and multi-layer software dependency issues.

These all happen regularly when OS maintainers have to package software for release. They spend thousands of hours to resolve [by hand] each one in order to support the various use-cases of their end users. If you are imagining some automated process just magically makes all your software come together to build you a custom development environment, you are mistaken. It's all put together by humans, and only for the use cases that have been necessary so far.

So yes, all these things exist. In small, bespoke, use-case-specific solutions. What you're asking for - universal software management standardization - can't practically be achieved in more than one use case. This is why we are all constantly stuck in dependency hell, until a bug is filed, and the system is once again massaged into a working state by a human. Frustrating, sure. But it works most of the time.


I think it’s a stretch to call a tool like npm, which currently offers 90,000+ packages, a “small, bespoke, use-case-specific” solution. I’m also fairly sure most people publishing their code via npm’s index aren’t spending “thousands of hours” resolving conflicts with other packages by hand; certainly no-one is manually checking over 4 billion pairwise combinations of those packages to make sure they don’t conflict.

And yet npm remains a useful tool, and mostly it does what it should do: download a bunch of files and stick them somewhere on my disk. The same could be said for gem, pip, Bower, and no doubt many other similar tools. They just all do it a bit differently, which leads to a huge amount of duplicated effort for both the writers/maintainers and the users of these packages.

I’m not arguing for magic or for orders of magnitude more work to be done. I’m just arguing for the work that is mostly being done already to be co-ordinated and consolidated through standardisation. To some extent I’m also arguing for operating systems that include robust tools for navigating the modern software landscape as standard. That’s partly because installing things with tools like apt has an unfortunate way of assuming there should be one global copy of everything, which is frequently not the case for either development libraries or end user software on modern systems, and partly because if the OS doesn’t provide good universal package management tools then someone else will immediately invent new tools to fill the gaps, and we are back to having overlapping tools and redundancy again.


Again, nothing you use works without it being designed specifically to work that way. You can't use Visual C++ to build software that was designed for Linux without writing portable abstractions and host targets for both platforms, and it definitely won't work on two different architectures without being designed for the endianness and memory width of each. It's bespoke because it's designed for each use case. It simply will not work on anything it wasn't designed for.

And no, it isn't code publishers that spend thousands of hours resolving broken and incompatible builds, it's release maintainers. Go look at bug lists for CentOS. Look at the test trees for CPAN. It is literally mind numbing how much shit breaks, but it makes total sense when you realize it's all 3rd party software which largely is not designed with each other in mind. Somebody is cleaning it all up to make it work for you, but it sure as shit ain't the software authors.

Once you develop enough things or maintain enough things you'll see how endlessly complex and difficult it all is. But suffice to say that the system we have now is simpler than the alternative you are proposing.


> You can't use Visual C++ to build software that was designed for Linux...

Sure you can. Projects of all scales do this all the time. Have you never heard C described as being portable assembly language?

Unless you are writing low-level, performance-sensitive code for something like an operating system or device driver, details like endianness usually matter only where external protocols and file formats specify them. I would argue that this sort of detail is normally best encoded/decoded explicitly at the outer layers of an application anyway.
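
For instance, if a file format says a length field is a little-endian 32-bit integer, you read it as exactly that at the boundary and work with plain numbers everywhere else. A small sketch in JavaScript (the language most of this thread is about) rather than C; the message layout is invented for illustration, though the DataView API shown is real:

    // Read a 32-bit length field from the start of a binary message.
    function readMessageLength(buffer) {
      // buffer is an ArrayBuffer holding the raw bytes of the message.
      const view = new DataView(buffer);
      // The second argument (true) forces little-endian interpretation,
      // regardless of the byte order of the machine running this code.
      return view.getUint32(0, true);
    }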

Obviously if you rely on primitive types like int or long in C or C++ having a specific size or endianness, or if you assume that they will be equivalent to some specific external format, you’re probably going to have problems porting your code (and any package containing it) across some platforms.

However, that issue does not contradict what I proposed. It’s perfectly viable — indeed, it’s inevitable — to have packages that are only available on some platforms, or packages which depend on different things across platforms. That’s fine, as long as your packaging system doesn’t assume by default that the same thing works everywhere.

> And no, it isn't code publishers that spend thousands of hours resolving broken and incompatible builds, it's release maintainers.

Who is the “release maintainer” who made the libraries I mentioned in my extended example above play nicely together?

Again, this issue does not contradict what I proposed anyway. In my ideal world, if packages are incompatible or don’t have sufficient dependencies available on a certain platform, you just don’t list them as available for that platform in whatever package index they belong to. Once again, this is no harder than what a bunch of different package management tools do (or fail to do) right now.



