Debian considers merging /usr (dralnux.com)
108 points by dengerzone on Nov 22, 2016 | 96 comments



Arch made this move a while ago, and it didn't seem to cause any trouble.

On the other hand, it doesn't change much compared to the revolution of adopting the GoboLinux filesystem, which was proposed more than 10 years ago. [1]

[1]: http://gobolinux.org/index.php?page=at_a_glance http://gobolinux.org/index.php?page=doc/articles/clueless


All of my Arch Linux nodes ate shit when I did this upgrade years ago. Things didn't like it when their shared libraries got moved. I had to manually recover every single one.

It was actually the straw that broke the camel's back for me as an Arch user. This was around the time Arch Linux was going through lots of breaking changes.


It's what happens with any rolling release distro given enough time. We thought it would solve the problem of having to upgrade the OS and migrate every few years, but it didn't really.

Now we have Docker. It's probably not solved until we write all our applications as some sort of crazy monad to minimize maintenance pain. I think Docker is a meaningful step toward that.


Of course, now you need to deal with docker itself updating constantly, and the client not being able to talk to the server if they're updated more than 3 months apart.


Or just use rkt and don't worry about it. The CoreOS team takes API stability pretty seriously.


> until we write all our applications as some sort of crazy monad

Desktop Linux seems to be finally moving towards self-contained formats like AppImage and Flatpak. They are clearly not oriented towards server apps, but who knows.


OMG. Self-contained apps have been around since Linux 0.1. They get reinvented every second year.


Yeah, but they never got traction for one reason or another. This time it looks like several big players actually agree on a common way forward. I guess time will tell.


> It's what happens with any rolling release distro given enough time.

Not really. The KISS nature of Arch means that stuff that could be automated isn't, because reasons, while systemd gets a special pass.


Because systemd is a personal passion of one of the lead Arch devs...


Interesting. I had zero trouble with this change on any of my boxes. Did not even have to --force the upgrade.


Well, the instructions on the front page were very clear and simple about how to perform the update.


It wasn't that bad. It did require manual intervention to get the install to work (which isn't a great surprise given the nature of the update), but it installed pretty painlessly on all bar one machine. The one exception was a laptop on which I put off the update for several weeks, leaving myself with a much larger job than it should have been - so really that one was partly my own fault.

The glibc update from around the same time was a much worse one. I think that one did break every Arch install I had at the time.


Not on Arch anymore, but I do remember that upgrade. I had no issues at all with it.


Interesting, I didn't know about GoboLinux, but it reminded me of Nix. Here's a comparison: http://sandervanderburg.blogspot.nl/2011/12/evaluation-and-c...


Interestingly, both started roughly around the same time (though I only started hearing about Nix in the last few years).

Note though that GoboLinux is largely constructed from shell scripts.

Also, since this comparison was made, GoboLinux replaced /System/Links with /System/Index. The latter is more like the classic FHS, and came about because of various issues during software compiles (IIRC).

Oh, one potentially interesting tidbit about GoboLinux history is that it apparently started out as one guy's way to manage compiles stored in his user directory on a university server. Sadly, since the move to /System/Index, the tool for bootstrapping such an environment, Rootless, has been nonfunctional.


I did not know about Gobo either. Why isn't everyone using it yet? Directories as packages, multi-versioning made easy... Use it now!


The "how can this possibly work" section shows the problem: System libraries are links to app directories. Basically, you end up bundling the same libs over and over again in your applications. Just imagine having several identical copies of gtk, qt, webkit, etc.


Not quite. As long as multiple programs use the same lib version, you only install it once, just as with RPM or DEB.

But if another program needs an older or newer version, you can install that version in parallel without disturbing the rest.


If you can't distribute individual directories/packages as stand-alone apps, this entire exercise seems pointless; it just becomes a fancy filesystem hierarchy.


Wait, neither Debian nor Ubuntu distributes packages as stand-alone apps. They distribute them without their libs and add a layer of dependency management (I did not look into Gobo's, but I guess it is similar, though apparently more permissive).

And I don't see what prevents one from distributing an application as standalone by stuffing all the libs it needs into the application directory.


That's my point: you don't gain anything except a different directory structure. And if you stuff everything into one directory, you pay in size.


Ah OK, now I get it. You are stuck thinking devops and containers...


Dunno. Maybe because it is largely made out of shell scripts and symlinks, and not laden with buzzwords?


Maybe because it's not ambitious enough?

Nix (and NixOS) does most of the same things, and then some, achieving more of the potential benefits. If you're going to break compatibility anyway, you might as well do it in a big way.

A 2x improvement isn't enough to be worth the cost. 10x might be.


They don't break compatibility. The old hierarchy is still there.


I think one of the problems is that it is a source-based distribution, which is a pretty niche thing nowadays.


I am wondering more about the filesystem organization than the distribution itself.


Using the filesystem as a package database just makes too much sense.


Yep, let's wrap it in multiple layers of daemons and kernel special sauce.


Ironic that a website proposing a cleaned-up version of the Linux filesystem has 'garbage' in the URL:

    index.php?page=at_a_glance
    index.php?page=doc/articles/clueless


I bet some of the advocates have messy desks too, but that doesn't make it any more ironic than your URL point.


My URL point a whole coca-cola bottle?


HN's hug of death. Cached version is here: http://webcache.googleusercontent.com/search?q=cache:s8ApOo1...


It looks like a server-side misconfiguration.

The URL is 301 redirecting to itself.

My guess is the admin has a server-side redirect to force HTTP to HTTPS, but has CloudFlare configured to use HTTP to the origin.
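
A quick way to see that from the command line (illustrative output, matching the behaviour described above):

    $ curl -sI http://dralnux.com/ | grep -iE '^(HTTP|Location)'
    HTTP/1.1 301 Moved Permanently
    Location: http://dralnux.com/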


So what are the arguments actually?

A couple of days ago I was reading a POSIX book from 1991, and there the layout of /bin, /lib, /shared, /usr/name/bin, /usr/name/lib, /usr/name/shared and so on was much more logical than what we have now, which just seems weird to me because I don't understand it.


The best explanation comes from Rob Landley:

http://lists.busybox.net/pipermail/busybox/2010-December/074...

A fragment:

> When the operating system grew too big to fit on the first RK05 disk pack (their root filesystem) they let it leak into the second one, which is where all the user home directories lived (which is why the mount was called /usr). They replicated all the OS directories under there (/bin, /sbin, /lib, /tmp...) and wrote files to those new directories because their original disk was out of space. When they got a third disk, they mounted it on /home and relocated all the user directories to there so the OS could consume all the space on both disks and grow to THREE WHOLE MEGABYTES (ooooh!).

> Of course they made rules about "when the system first boots, it has to come up enough to be able to mount the second disk on /usr, so don't put things like the mount command in /usr/bin or we'll have a chicken and egg problem bringing the system up." Fairly straightforward. Also fairly specific to v6 unix of 35 years ago.

It was discussed here a few times:

https://news.ycombinator.com/item?id=3519952

https://news.ycombinator.com/item?id=9554134


Considering this, I wonder why the /usr would be kept at all, assuming we keep /home. Why not just have /bin etc. since we can generally fit everything there these days?


/usr can be made read-only. It is more convenient to have everything in /usr than to have /bin, /sbin, /lib, etc.


I would argue that it's better to mount / as RO and remount /var and /home as RW.


It's not just about mounting as RO. It could physically be RO (or use e.g. dm-verity or similar)... which would be quite inconvenient if you wanted to add site-specific directories under /. (Or if a distro wants to add a directory, or whatever.)

Also, wouldn't mounting /var, /home etc. RW on top of a RO-mounted / require that they were actually on different file systems? I don't think you can have a RW bind mount on a RO-mounted file system. (Haven't tried it, though.)


I recently joined a project that uses OSTree to deliver atomic OS updates (read-only), and flatpak for applications. I'm still learning the implementation details, so I might not be able to provide much detail yet, but it seems like an interesting approach to reliably deliver Linux to non-technical users.

https://endlessm.com/for-developers/


> It's not just about mounting as RO. It could physically be RO (or use e.g. dm-verity or similar)... which would be quite inconvenient if you wanted to add site-specific directories under /. (Or if a distro wants to add a directory, or whatever.)

You could use a two-partition update scheme like ChromeOS does. I am working on a Linux distribution that has its rootfs as a squashfs image, and I want updates to be as painless as on ChromeOS.

> Also, wouldn't mounting /var, /home etc. RW on top of a RO-mounted / require that they were actually on different file systems? I don't think you can have a RW bind mount on a RO-mounted file system. (Haven't tried it, though.)

You can mount RW fs on RO fs. Distributions on Live-CDs do it all the time.


> You can mount RW fs on RO fs. Distributions on Live-CDs do it all the time.

Right, but that's with OverlayFS and such, right? (I.e. not just straight bind mounts.)

Seems awfully complicated to solve what essentially should be a non-problem... (I'm aware OverlayFS has other uses, but in this instance it seems like overkill.)

EDIT: I'll just note, I did say "separate filesystem" (i.e. non-bind), but I guess it might have been easy to miss.


> Right, but that's with OverlayFS and such, right? (I.e. not just straight bind mounts.)

Most probably yes, but there are probably distributions using ro rootfs + specific mounts. I think that in many cases embedded devices would use such a scheme instead of OverlayFS.

EDIT: But you probably have to mount workdir and upperdir somewhere over this ro rootfs for OverlayFS to work.

> EDIT: I'll just note, I did say "separate filesystem" (i.e. non-bind), but I guess it might have been easy to miss.

I was hoping that you could read between the lines also :) I mentioned my WIP distribution. I have mounted:

- squashfs ro on /

- ext4 rw on /mnt/rw

- /mnt/rw/var bind on /var

- /mnt/rw/home bind on /home
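
In initramfs terms that's roughly the following (a sketch only; the device names and the busybox-style switch_root step are my assumptions):

    # mount the read-only squashfs root and the writable partition
    mount -t squashfs -o ro /dev/sda1 /newroot
    mount -t ext4 /dev/sda2 /newroot/mnt/rw
    # bind the writable subtrees over the RO root
    mount -o bind /newroot/mnt/rw/var /newroot/var
    mount -o bind /newroot/mnt/rw/home /newroot/home
    exec switch_root /newroot /sbin/init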


> > EDIT: I'll just note, I did say "separate filesystem" (i.e. non-bind), but I guess it might have been easy to miss.

> I was hoping that you could read between the lines also :) I mentioned my WIP distribution. I have mounted:

Oooh, burn! :) I honestly didn't see any relevant mention of the WIP-distro you're talking about... but now that you mention it I had a look back. "I am working on Linux distribution" (etc.).

> I have mounted:[snip]

So... separate file systems.


Because initramfs effectively already is /.


> Why not just have /bin etc. ....

Surely there's a lot of horrible software out there that Makes Assumptions.


There was a brief revival during the era of NFS-mounting /usr, back when /usr/doc could be a significant fraction of a filesystem and shipping whatever-doc packages made sense because they took a long time to download at 14.4. I actually had deployed production systems with /usr/doc and some other stuff NFS-mounted.

It doesn't make much sense anymore, but before the turn of the century a meg of doc files might cost you a buck per machine, which might add up to a lot of money given a lot of machines and a lot of software... And usually you don't need /usr/doc, but when you do need it you REALLY need it and don't mind if it's a little slower to access via NFS than on a local drive.

In the same old days, some brave people exported /usr/src. Oh and X window fonts too.


As an aside, Landley's focus has been for quite some time on the mobile/embedded uses of Linux.

The Linux "community" has a bad habit of turning solutions for special scenarios into generic ones by hook or by crook.

Never mind that this was addressed to the busybox mailing list, a project aimed at reimplementing the Unix coreutils as a single static binary.


Situations not unlike this played out during the initial growth phase of Linux as well. You had a bunch of undergrads and recent grads upgrading their systems one piece at a time, or even taking castoffs from friends and family. Typically you had one good drive that wasn't quite big enough for all the junk you thought you needed.

You also had people who would build a Linux box out of any old hardware they had left over. That's how I ended up with my first firewall.


Rusty Russell summarized this nicely a few years ago: http://rusty.ozlabs.org/?p=236 (careful, sarcasm follows).

The merge does involve some loss of flexibility (for instance, "traditionally", you could use the programs in /bin to recover a failing system), but there are fewer and fewer users relying on it. It also involves departing from the FHS, but Debian already does that (e.g. they use /lib/<GNU triplet> rather than /lib{32,64}). I haven't heard any super convincing arguments from either side; normally, I'd go by the "not broken -- nothing to fix" route, but it's also hard to poke holes in the "this additional complexity isn't needed anymore" line of reasoning. Plus, like most arguments that involve tradition, it's pretty hard to tell robustness from cruft.

Whether this change can be done cleanly and without breaking users' systems is an entirely different story, and past experience has shown that moderation is a far better friend than optimism. It's particularly difficult to migrate existing installations to this scheme (e.g. Arch failed quite badly, as I remember from those particular two afternoons, thanks a lot Arch!, and it didn't go more smoothly for others, either), but Debian can benefit from the lessons learned by more up-to-da^H^H^H^H fast-moving distributions and do things better.


Using programs in /bin to recover a failing system is a lost cause on Linux.

Pretty much every binary in /bin in Debian (the OS I have at hand to check) links to a library of some sort. Therefore, if my /lib is hosed, I can't use any of the tools in /bin. (I've had this happen, and I couldn't even use "ls".) This would not have occurred if those binaries were statically linked (I believe OpenBSD does that for /bin and /sbin).
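
Easy to check; even ls pulls in libc (abbreviated output from a Debian box):

    $ ldd /bin/ls
        linux-vdso.so.1 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)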


> Using programs in /bin to recover a failing system is a lost cause on Linux.

Agreed. If distros wanted to try and champion the cause for a statically linked subset of critical coreutils, they could still do it in the merged tree. The only thing the separation would facilitate would be separate mount points or physical devices.


The point wasn't (only) that they were statically linked, but also that /bin and /lib were separate partitions (or even separate disks) and could be mounted read-only, could be used from tiny ramdisks etc.. In any case, though, I agree that even without the merge, it wouldn't be doable on any modern Linux system.

There are examples for both approaches. OpenBSD keeps /bin minimal and statically-linked, and it works. Solaris merged /bin and /usr/bin a long time ago, and -- unsurprisingly -- that works as well.

Frankly, as long as it doesn't bork my Debian machine when I move from Jessie to whatever Debian 9 is going to be called, they can merge all they want. At this point, seeing how many Linux systems are doing the merger, it's probably a good idea to do it, too. Packages whose developers give a flying fsck about portability (fewer and fewer nowadays because lockdown and unportability were bad only when Microsoft were doing it) already deal with both usr-merged and usr-unmerged systems (hardly rocket science in most cases), and the maintainers of packages whose developers don't give a flying fsck about portability and only develop for Linux can hope that four years from now, everyone will have merged, especially now that Debian is doing it too.


It's a trifle hard to align the idea of what is supposedly "traditional" with the /usr merger that was done in Solaris 2 in the 1990s and AT&T UNIX System 5 Release 4 in the late 1980s. (-:


Hence the " ;-). Indeed, the heritage isn't nearly as unbroken as its proponents think. I think it's perceived as so traditional because most of the free Unices (and, for a long time, Linux) did it, and some of them still do. FreeBSD started dynamically linking things in /bin much later, around 2004 IIRC, but I think OpenBSD and NetBSD still ship only static binaries there.

I think the fact that I can't even remember the last time I cared how things in /bin are linked is good proof of how important this "tradition" is, at least for me...


Lovely "con" side representation there...



That Fedora one is very biased, written by the advocates for UsrMove. A better way would be to search on Google for:

      site:bugzilla.redhat.com "UsrMove"
which produces page after page of bug reports.

The reality is it caused lots of breakage throughout the system which required many man-hours to fix over years.

The gain from it is pretty minimal. And there are still bugs everywhere; for example, these two commands ought to do the same thing, but in fact the first breaks and the second works:

    # dnf install /sbin/ifconfig
    # dnf install /usr/sbin/ifconfig
(It's something to do with either RPM or DNF not "knowing" that the two paths point to the same file, even though /sbin is a symlink to /usr/sbin. Various attempts have been made to fix this but obviously we're not there yet).
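
A workaround, as far as I can tell, is to canonicalise the path yourself before handing it to dnf (this assumes GNU readlink; -f resolves the /sbin symlink even though ifconfig itself isn't installed yet):

    # readlink -f /sbin/ifconfig
    /usr/sbin/ifconfig
    # dnf install $(readlink -f /sbin/ifconfig)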

The good thing from Debian's point of view is that Fedora and Arch already fixed many of the bugs, so things will probably be smoother for Debian.


Way back when the world was in black and white, DEC PDP-11 RK05 disk packs could only hold 1.5 MB.

/bin and /usr/bin were on different drives. Now all those weird splits and choices make sense.

Fuller explanation here: http://lists.busybox.net/pipermail/busybox/2010-December/074...


Originally, disks weren't that big, and / was meant to house what was required to boot and run a minimal system, and additional software was installed in /usr, often on one or more different disks. There's some additional benefit to running a separate /usr partition, which is that you can mount / as read-only for some additional security (but not much, IMO).


Originally /usr was the user data. Then the disks got full and someone stuck binaries into it. Then people moved to /u, /usr2, /home, or whatever your ix variant (or site) wanted for user data.

The ix file system standards are the retcons of a bunch of hacks made by sysadmins trying to stave off users with pitchforks. Philosophy my arse.


Eh, the seminal BOfH story: "AH! - You haven't got any files"

(Here: http://bofh.bjash.com/bofh/genesis2.html )


On top of any historical reasons, the separation between the / dirs and the /usr dirs made Unix networks manageable. You could have a minimal setup on every machine, and just mount /usr (and /var) from a centralized server.

With this change, that management format is gone. Which is not a big loss nowadays.


As best I can tell, thanks to systemd requirements etc., the initramfs has become bloated to the point that /sbin is largely redundant.


The systemd devs claim it has nothing to do with it, but with a bunch of other software that expects /usr to be mounted at boot time: https://wiki.freedesktop.org/www/Software/systemd/separate-u...


That's why you mount /usr at boot time... in your initscript...


Additionally, I think a good argument could be made that anything systemd requires to boot should be in /sbin: not specifically because systemd requires it, but because it's required or useful for the boot process, as systemd has illustrated, and thus should not be relegated to /usr (depending on where you draw the line between useful and required, and on whether useful is sufficient for inclusion in /).


The argument is not to remove all trace of /bin, /sbin and /lib, but to move these folders into /usr and then symlink back to the original locations. A more detailed response here: https://lwn.net/Articles/670071/
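
On an already-merged system (Arch, Fedora) the top level looks roughly like this:

    $ ls -ld /bin /sbin /lib
    lrwxrwxrwx 1 root root 7 ... /bin -> usr/bin
    lrwxrwxrwx 1 root root 7 ... /lib -> usr/lib
    lrwxrwxrwx 1 root root 8 ... /sbin -> usr/sbin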


The linked email is pretty sparse on details. More information: https://lwn.net/Articles/670071/


Personally I prefer the suckless[1] approach, but it's a step in the right direction.

1. http://sta.li/filesystem



That would suck hard for me :-)

I have a tmpfs on / and mount a RO snapshot onto /usr. Fresh system on every reboot :-)


The initramfs case and the embedded-system case are good points.

Can computer science help us decide whether some people are right?

Consider informational entropy: a state with more compartments has more order, and thus more information. By merging /usr you lose information, so the one who needs it loses out, whereas the one who doesn't need it gains nothing.

I laughed at the comment about the philosophy of pitchforks from angry users, and I would claim those pitchforks belong to Maxwell's demons, reminding us that it is easy to lose information and that increasing entropy is messy.


By splitting / and /usr you create problems for packagers: Where do all the files need to go?

Does cryptsetup need to go into /sbin or /usr/sbin? Does network code belong in / or /usr?

Most users will not care, but some might need either of them to set up their file systems.

If you place something into /, then you also need to put all the libraries and binaries it needs into /. That will rapidly blow up the size of /. It is also not automatically testable, so distributions will get things wrong (as they repeatedly did before).

You could have a script to only put the stuff your system needs into /... Most distributions use such a script for their initrd, which contains everything necessary to check and mount your root file system. So you could just extract that into / and be fine with it:-)

Or just use your initrd as that rescue medium.


Wouldn't it make more sense to merge stuff into / than into /usr, i.e., move /usr/bin/* into /bin, /usr/share into /share &c.? The 'usr' portion of the path is weird historical cruft which adds no real information.

While we're at it, why don't we add a real Plan-9-style bind and use union mounts?


>While we're at it, why don't we add a real Plan-9-style bind and use union mounts?

There have been a few stabs at a union FS. The most recent one, OverlayFS, was merged in 2014.
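
For reference, a minimal OverlayFS mount looks like this (paths are placeholders; note that upperdir and workdir must live on the same writable filesystem):

    mount -t overlay overlay \
          -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /merged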


The thinking behind keeping the "weird historical" /usr is that it parallels other operating systems that (mostly) store their stuff in a subtree that is under a single subdirectory of the root. Think \OS2, \DOS, and \WINDOWS for starters. Also remember \LINUX from the world of LOADLIN.


Some programs may have strings like /usr/share hardcoded in them. I once tried renaming the root user to "boss" and it unearthed a few issues of exactly this kind.


  # ln -s / /usr


No, some people want /usr as a separate partition.


This hasn't had a genuinely useful purpose for many years.

On a modern package-managed Linux distribution, both / and /usr are under the control of the package manager, and separating them doesn't make sense since they are modified in lockstep, and used as a coherent whole.

And while in the Linux world we're busy "unifying" these locations, other systems can have the whole system on ZFS, where these locations can be in separate datasets if desired, and the whole lot can be snapshotted at will. Since it's all in a single pool, there's no need to even consider splitting it.


This broke my Debian live build! I couldn't work out why the process would fail during the build (it couldn't find the ELF loader!). I was clobbering the symlink /lib -> /usr/lib with a locally included folder.

Gah!!


Page is down. Mirror:

http://archive.is/i2Jsu


So if your /usr partition fails and all you have in /bin are broken links instead of real programs, your entire system fails.

Can't see any real advantage in this.


And that's a likely scenario? You actually have them on different bits of hardware or something? It's so arbitrary - why not split it further, what goes where? If you want to achieve splitting important stuff off to special storage, I'm sure that'd still be possible.

Also, why recover a system like that when I can just boot off a USB stick?

This is just a hack that's stuck around from when people had tiny drives and needed multiple partitions for these things, as far as I understand it.


Yes, of course. It's a common scenario among old-school Linux users and has saved me a lot of headaches. It simply works.

> Why recover a system like that when I can just boot off a USB stick?

Because I can. The basic parts of the system are still available and working. I don't need to reboot to fix the problem; I could mount a different, working /usr in seconds. It's not just "a hack", it's a design.


How often do you provision systems where /usr and /bin are on different media? Do you have some reliability-tiering of the media or are they equivalent-but-separate? If it's the latter it seems like you're no better off.


Given that the server's under heavy load, here's a copy-paste:

"The bootstrap utility for the upcoming release of Debian 9 “Stretch” will feature the ability to merge utilities from the root file system into the /usr file system. This essentially means directories like /bin and /sbin will simply be symbolic links to content stored in /usr/bin and /usr/sbin. Ansgar Burchardt has suggested this file system layout might be made the default behaviour for future versions of Debian:

“It has been previously suggested to make this the default for (at least) new installations. I think Russ’ earlier mail explains quite well why the split between / and /usr doesn’t really work out for Debian these days and that trying to maintain it for some configurations (which are not documented) is mostly busy-work. There is also a nice article on LWN summarizing earlier discussions. I found these arguments convincing enough and would like to see the default switched to merged-/usr for Stretch and later. Possibly also switching systems on upgrade to the new scheme (not necessarily already in the Stretch release cycle).”

Source: https://lists.debian.org/debian-devel/2016/09/msg00269.html"


“Considers” is old news—the Debian installer already merges /usr by default.

https://lists.debian.org/debian-devel-announce/2016/11/msg00...



I've been using Linux (Ubuntu GNOME) on my main computer for many years. IMHO, changing the FHS should be the lowest priority. The priority should be "hardware support" and "education".

By education, I mean mainly the different ways to use Linux compared to Windows.

Imagine you are traveling and need a file that is at home on your desktop PC. With your phone, you can wake up your PC, connect using ssh, convert the document to PDF and copy it to your phone (see the sketch at the end of this comment).

Imagine you are visiting a friend and want to show something running on your PC, you wake up your PC, launch putty and a VNC client (portable applications) and you are at home.

Imagine you want to keep your machine lean and clean. With docker you can switch between images of preinstalled independent development environments. With sshfs, you access your remote website without a local copy.

Imagine you want to change the hard drive. I quickly copy the few files in my home directory that are not on NFS to an NFS backup directory, change the disk, reinstall Linux, then use the small installation diary I keep on Google Drive to configure disk mounts and reinstall non-default applications.
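
For the curious, the first scenario is just three commands (the hostname and MAC address are placeholders, and it assumes wakeonlan on the client and LibreOffice on the desktop):

    wakeonlan aa:bb:cc:dd:ee:ff     # wake the desktop over the LAN
    ssh me@desktop 'libreoffice --headless --convert-to pdf --outdir /tmp ~/doc.odt'
    scp me@desktop:/tmp/doc.pdf .   # pull the converted PDF back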


I don't think adding symlinks to / takes away a meaningful amount of time from the people working on drivers.

I think you're gonna have a hard time convincing people these days that the answer to all the scenarios you listed shouldn't just be "lol cloud", too. :/


Of course, the whole point is that my desktop is part of the network like any remote server. You are not a client of servers, you are a member of a network.

AFAIK, rdesktop and LogMeIn are far behind (more difficult to set up and to use).

https://code.google.com/archive/p/win-sshfs seems dead.

AFAIK, Docker does not work on Windows Home edition.

My scenarios are not made up; these are my usual, simple use cases of Linux. I do not know any Windows user who does the same. Do you have a backup copy of your 1TB hard drive in the cloud?


I'm not saying your scenarios are made up and I sympathize with them, but I think most people choose to store their stuff in the cloud so they don't need to worry about the devices they physically touch.



