
I think part of the problem is that many of these issues are left to "distributions" and "repositories".

Some guy releases a library or application, and then it gets packaged one way into .debs, another way into .rpms, and another into MacPorts. Maybe the author does this work; more likely, distribution maintainers do it.

Or in the world of a specific programming language, there is a similar story with a language-specific packaging system. Maybe it gets packaged as a gem, or a jar, or an egg, or a module, or maybe with the new Node package manager (npm).

Often, installing a package involves spreading its contents around. Some of it goes in /var, some goes in /opt, some goes in /etc. Who knows where else?

Many of the reasons for the unix directory layout don't apply for most people today. How many people even know what those directories' purposes are? How many have actually read the Filesystem Hierarchy Standard document?

Typically, those directories were established so that sysadmins could save storage space by sharing files between sets of machines (the word "share" seems to have about a dozen different meanings in these discussions). You slice things one way so that machines with different architectures can "share" the contents of /usr/share, and you slice them another way so that frequently changing files can be backed up more often, which is why those get thrown together in /var (and then you can mount /usr read-only!).

Most of these considerations are not worth the effort for most people. I think they are outdated. We don't generally have these directories mounted on separate partitions. We just back up the whole damn hard drive when we need a backup.

Here's an idea: a package should be a directory tree that stays together. Each programming language should not have its own separate packaging system. A package should be known by the URL where the author publishes it, and the author should declare the package's dependencies by referring to other packages by their URLs. Then you don't need dependency-resolution systems that function as islands unto themselves (one for Debian, another for Node, etc.).
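To make that concrete, a dependency declaration could be as simple as a list of pinned URLs. (The filename, format, and project URLs below are all made up for illustration; nothing like this is standardized.)

```text
# DEPENDENCIES -- hypothetical manifest: one repository URL and one release tag per line.
https://example.com/alice/libfoo.git  v2.0.1
https://example.com/bob/libbar.git    v0.9.4
```

The point is that the name of a dependency and the place you fetch it from are the same thing, so no central registry has to exist.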

Software is published on the web, in things like Git, Mercurial, or Subversion repositories. These have conventions for tagging each version. The conventions are gaining adoption (see semver.org, for example), but not fast enough.
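With Git, for instance, the convention amounts to nothing more than annotated tags named after semver versions. A minimal sketch (run inside an existing repository with at least one commit):

```shell
# Tag the current commit as release 1.2.3 (semver: MAJOR.MINOR.PATCH).
git tag -a v1.2.3 -m "Release 1.2.3"

# List releases in version order.
git tag --list 'v*' --sort=version:refname

# Ask which release the current checkout corresponds to.
git describe --tags
```

That's the whole "versioning system": the repository itself carries the release history, no extra registry needed.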

Some middle layers just add friction to the process: distributing tarballs, distributing packages to places like RubyGems or CPAN or npmjs.org. Developers usually want the source straight from the source anyway -- users might as well use a setup that closely mirrors the developers'.

If you want to add a package to your system, the only information you should need is the URL of the project's main repository, plus an identifier for the exact release you need. That's a naming system shared by the entire web. If there are issues, that information can go from the user directly to the author, with no layers in between.
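With Git that already collapses to a single command today. (The project URL and tag below are made up; `--branch` accepts a tag name, and `--depth 1` skips the history you don't need.)

```shell
# The repository URL plus a release identifier is all the information needed.
# example.com/alice/libfoo and v2.0.1 are hypothetical.
git clone --branch v2.0.1 --depth 1 https://example.com/alice/libfoo.git
```

Everything else -- mirrors, registries, per-language indexes -- is optimization, not naming.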

Apple has a great install/uninstall process for many applications: you move them to the applications folder, or you drag them out of the applications folder into the trash. We need to strive for this level of simplicity. Deployed software should have the same structure as the package distributed by the developer, in almost all cases.

Have you heard of GoboLinux[1]? It revamps the directory structure for exactly this reason.


Yup; the key is adoption :)

My approach right now is to manage all my software within my home directory, in a way not unlike what GoboLinux does. The home directory gets mounted on different machines running different operating systems, so the aim is to gradually work out a software packaging strategy that works well across all the existing OSes.

Similar to Homebrew or GNU Stow, actually. But Homebrew is Mac-specific, and weirdly tied to a GitHub repo.
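The Stow-style layout is simple enough to sketch with plain symlinks: each package keeps its whole tree together under one directory, and a shared bin directory just points into it. (The `~/pkg` layout and the `hello` package here are illustrative, not a real convention.)

```shell
# Each package lives in its own self-contained tree...
mkdir -p ~/pkg/hello-1.0/bin
printf '#!/bin/sh\necho hello\n' > ~/pkg/hello-1.0/bin/hello
chmod +x ~/pkg/hello-1.0/bin/hello

# ...and "installing" is just linking it into a directory on $PATH.
mkdir -p ~/bin
ln -sf ~/pkg/hello-1.0/bin/hello ~/bin/hello

# Uninstalling is removing the link (or the whole ~/pkg/hello-1.0 tree);
# nothing gets scattered into /var, /etc, or /opt.
```

That's essentially the Apple drag-to-Applications model, done with directories and symlinks.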
