Understanding the bin, sbin, usr/bin, usr/sbin split (busybox.net)
857 points by sciurus | 151 comments



Wow. As someone who was there (I know, dating myself here) reading this is kind of like that scene in Sleeper where the person from the future is trying to understand artifacts from the past.

So during the BSD / System V merge (project Lulu at Sun) the /opt filesystem was introduced as a way to keep 'packages' separate from 'system'. The difference between /bin and /sbin was that sbin was 'static-bin' which is to say everything in it was statically linked and could run without any libraries being available.

The fact that Linux starts up differently is because Linux never was UNIX; they are two different OSes, pretty much from the ground up. They use similar concepts (processes, file descriptors, etc.), but they are two different species. FreeBSD, on the other hand, is a derivative of UNIX, and last time I checked it started up in a similar way.

The lack of space on the RK05s was indeed the reason for the addition of /usr/{lib,bin}, and the general consensus at Sun and AT&T in the '80s was that the root file system contained the system, and the /usr file system contained stuff that was not-system.

AT&T (the guys that 'owned' UNIX) had some pretty detailed specifications about what lived in what directory and why. It was a "BigDeal" (tm) to add a new directory in the root file system, so new directories, when they were proposed, appeared under /usr. And once /opt existed it gave people free rein to create their own trees. Early package managers would build /opt/<package>/{bin,lib,share,man}, and the downside was that one's path variable got longer and longer, and there were arguments about whether there should be more constraints on opt.


I also have no memory of /home even existing in roughly the first decade of Unix's existence.


Actually /home was the first automounter map. Brent Callaghan at Sun had built this thing that, in conjunction with NIS (YP) or NIS+, could locate which server your home directory was on and automatically mount it. He decided that the automount point should be '/home'; this was distinct from /usr/xx, since /usr/xx was xx's home directory on this machine whereas /home/xx was xx's network home directory that would follow them around.


home directories iirc, lived in /u (/usr these days)


Depended completely on the site. The first UNIX machine I worked on (ca 1985-1986) still was putting them in /usr for instance.

There were certainly sites that used /u for what is now commonly /home (or /export/home on Solaris, /Users on OS X, ...). This was not something that predated "/usr", though: /usr was already added back in the early 1970s (the linked article was correct about that, although it got many other details wrong).

Basically /usr was originally only for home directories (as the name implies). Then it became the place for anything you didn't want to use the precious space on the root filesystem for (hence /usr/bin popped up, and later things like /usr/dict/words). Finally, it became so cluttered that people started locating home directories elsewhere, and now the name "/usr" is completely unrelated to its actual purpose.


Until its last breath, IRIX kept home directories under /usr.


/usr was originally only for home directories [...] /usr/bin popped up

I wonder if this is the origin of the 'bin' user as well, because that's a design decision that never seemed to make any sense.


Indeed, /usr was already in place in UNIX version 1, according to the 1971 UNIX Programmer's Manual.


> The fact that Linux starts up differently is because Linux never was UNIX they are two different OSes, pretty much from the ground up.

No, it really has nothing to do with that at all. Linux can boot in pretty much the same way the genetic Unixes do, and in fact did for a long time. initrd and then initramfs came later and took some time to become widely used. Historically, Linux just mounted whatever filesystem the bootloader told it to as /, and executed init, which usually followed either a BSD or System V model, depending on distribution.

Linux evolved. Lots of things evolved. The filesystem layout, unfortunately, has merely become more chaotic.


The FHS (Filesystem Hierarchy Standard) [1] is the go-to reference for this sort of thing. It explains that `/bin` is for binaries that are essential before other file systems are mounted (e.g. in single user mode), and `/usr/bin` is for "most user commands" (all others). This allows you to keep a minimal local filesystem containing only the binaries needed for init to get the system running, and then `/usr` can be mounted, say, from a network share. This is useful because then network admins can install software to the common `/usr` share and make it immediately available to all machines which mount that share.
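To make the "mount /usr over the network" case concrete, here is a minimal sketch of the client side, assuming a hypothetical file server exporting a read-only /usr over NFS (the hostname and export path are made up):

    # /etc/fstab on each client: / is local and holds enough to boot,
    # /usr comes from the shared export once the network is up
    fileserver:/export/usr    /usr    nfs    ro,hard,intr    0  0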

The `/sbin` and `/usr/sbin` directories are for commands needed only by administrators, which will not normally be used by regular users.

Most systems don't really require this separation, but it does make sense. Perhaps the historical reason for doing it is no longer a factor, but that doesn't mean it's perpetuated merely because of tradition.

[1] http://www.pathname.com/fhs/pub/fhs-2.3.html


The central thing about all these conventions is they developed in the absence of good union mounts. In Plan 9 there is no $PATH: all such things are bound into /bin. This works both because Plan 9 has very good union mounts and because those union mounts can be localized to a specific process and as a result are user accessible. I seem to recall some work was done to make this possible on Linux, but it didn't get accepted into the main kernel because it is at odds with how we think a Unix system should behave.
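For reference, the way a Plan 9 profile typically assembles that union /bin with bind(1) looks roughly like this (a sketch; the exact directories vary by profile):

    bind /$cputype/bin /bin          # architecture-specific binaries
    bind -a /rc/bin /bin             # rc scripts, appended to the union
    bind -a $home/bin/rc /bin        # the user's own scripts, appended last

Since the namespace is per-process, each user can assemble a different /bin without affecting anyone else.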

It would be interesting to see a Linux distro that embraced this.


I wonder why rc still has $path in Plan9.

   term% echo $path
   . /bin
What happens with Ape ports that expect the path environment variable?


Ape's execlp() tries first the program name passed to it (be it relative or absolute), and if that fails it prepends "/bin/" and tries again. It ignores environment variables entirely.†

Rc's path variable allows you to easily tell rc to check the current directory when looking for programs to exec. Doing that with bind after each cd would be clumsy. And if your working directory is a remote server, you can set path to just /bin so that you aren't statting the remote directory before each exec. Inferno's sh does use a path variable, but it is typically left unset and the default (/dis .) is used.††

† See http://plan9.bell-labs.com/sources/plan9/sys/src/ape/lib/ap/...

†† See /usr/inferno/appl/cmd/sh/sh.b:/^runexternal
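Concretely, the two cases described above are just different values of rc's path list (a sketch):

    path=(. /bin)    # check the working directory first, then /bin
    path=(/bin)      # remote working directory: don't stat it before each exec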


rc has $path because rc predates Plan9, it first appeared in tenth edition Unix.

Inferno's shell doesn't use a $path variable.


I know about FHS and the rationale FHS uses for the continued existence of /usr, I just do not agree with it (nor with the existence of /usr/share).


I'm curious, what is your opposition to `/usr/share`? It seems quite logical to me as a place to store non-executable, read-only, shareable data like man pages and other documentation, fonts, keymaps, color profiles, game data, etc. Where else would you propose to store this sort of thing? I can't really think of a different location in the hierarchy that would make sense.


Because everyone's idea of what "makes sense" is different. Thus:

/bin

/usr/bin

/usr/share/bin

/usr/share/local/bin

/usr/local/bin

/usr/local/share/bin

/opt/bin

/opt/some/clever/path/system/bin

Followed by hacky symlinks so that programs can find what they're looking for in /usr/local/share/lib/x, which really resides in /usr/share/lib/x or maybe /var/lib or maybe /var/local/lib or maybe just /lib because it's considered "essential" on some systems but not others.

It reminds me of this: http://xkcd.com/927/


This is a little bit disingenuous. I can certainly see the point that all of this separation is unnecessary on a typical modern desktop system, but several of the paths you listed in support of your argument are not actually specified by the FHS, nor do they even make sense.

Of the paths you mentioned, there is no such thing as:

    /usr/share/bin
    /usr/share/local/bin
    /usr/local/share/bin
    /usr/local/share/lib
    /usr/share/lib
    /opt/bin
All the paths that involve `share` are for non-executable data, so `bin` and `lib` subdirectories of these don't make sense. That's not to categorically say that there isn't any OS that provides them anyway--I've seen plenty of Linux distros with disorganized file systems--but that's a problem with the distro, not the FHS as a whole.

The `/opt` hierarchy is pretty much the wild west, I'll give you that. It's basically like `Program Files` on Windows; packages get an entire hierarchy to themselves. Most of the software I've seen that installs to `/opt` is Linux ports of Windows software, where the authors were either ignorant of the Unix way of organizing things, or simply didn't want to bother conforming with the norms of a different platform.

Incidentally, one of the reasons I prefer Arch Linux over some other distros is that the organization of the file system follows the standard and actually makes sense. Things are always where I expect them to be.

Your overall point may have some merit, but it feels a little like you're reaching for support for your position by making up wacky, confusing paths that don't actually exist.


Every one of those paths I mentioned is a path I have encountered during my use of various Unix distributions since 1991 (Solaris was the worst offender). And that's not even the complete list. I gave up trying to predict where software would install to a long time ago.

My point is that what "makes sense" is subjective. Each developer/distro manager who made one of those paths thought to himself "it makes perfect sense to do it this way". FHS does go a long way towards cutting back on the craziness (by arbitrarily dictating "do it this way"), but it's still clunky.


Your encountering them doesn't make them part of FHS.

Their being included in FHS does.


At no point did I ever say they were part of the FHS.


OP did, and by inference (with my pedant bit set) you were supporting his statements. http://news.ycombinator.com/item?id=3520178


Hmm that was not my intent.


Most of us are more concerned about the real world than a "standard" that is routinely ignored.


I put Java in /opt/java/, and embedded toolchains in /opt/[platform]/. Not exactly Windows ports, but they're the sort of thing that can get out of hand easily. :)


Perhaps you have some misbehaving software, but I've never seen /usr/share/bin, or /usr/local/share/bin or a lib directory in either of those locations. I just checked FreeBSD boxes (which, admittedly, follow their own standards closely), and a Redhat box.

var is a directory which may be found under /usr or /usr/local, but won't contain a local directory.

Think of it as 3 roots: / is required for boot, /usr is stuff maintained by an administrator (say, company wide), /usr/local is where files installed by local users go. Within each of these 3 roots, you'll have some subset of bin, etc, lib, libexec, sbin, and var. The root directory has some additional singleton directories which don't make sense in the other directories: dev, root, mnt, boot, proc, rescue.


/usr is maintained by the OS, and contains stuff that isn't required for boot but to round out the system.

/usr/local is where local modifications to the OS go, these can be site-wide (company wide) and can be hosted on NFS for example for network boots. User installed stuff belongs in /usr/home/<name>/*. Users shouldn't have root access to install stuff in /usr/local.


Side point: it really bothers me that on freebsd the ports system takes over /usr/local. I expect /usr/local to consist of things that I as admin have installed by hand, and nothing else.


You can change where packages are installed using some make.conf flags. You can then install the software anywhere you want. Your users will need to add the new path to their PATH and you'll need to set some flags in rc.conf to pick up the new rc.d path as well (so software installed through ports gets auto-started).
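Roughly, and with the caveat that not every port copes gracefully with a non-default prefix, the relocation looks something like this (the /opt/pkg paths are just an example; LOCALBASE and local_startup are the usual knobs, but check your release):

    # /etc/make.conf -- install ports somewhere other than /usr/local
    LOCALBASE=/opt/pkg

    # /etc/rc.conf -- have rc(8) also scan the relocated rc.d directory
    local_startup="/opt/pkg/etc/rc.d"

Users then add /opt/pkg/bin and /opt/pkg/sbin to their PATH.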

Then again, when you install ports you as the sysadmin are installing those by hand ...


Agree totally.

NetBSD uses the more sensical /usr/pkg for package-managed software, leaving /usr/local for the admin.


I really like the way FreeBSD separates system and third party software, the latter typically being installed to /usr/local.

Linux's failure to differentiate between system and non-system binaries is IMO one of its worst features.

Neither FreeBSD nor Linux accommodates NFS-mounted applications as well as SunOS did so successfully by the late '80s. This is probably due to the corrupting influence of Unix and Linux engineers who came from MS Windows backgrounds. Windows admins waste countless hours having to install software locally on every machine. Sad that only Solaris provides for a standard network-based application filesystem. Odd too that Solaris is home to the single most ill-thought-out non-system directory, /opt.


BSD is in a position to differentiate in a way that Linux just isn't. In Linux all* software is third party, except for the kernel and util-linux (and maybe fileutils or some other set of GNU software).

To me the BSD distinction doesn't make much sense anyway. You still have to just know which bits are "local" and which are not, plus anything I pull in that isn't a port gets lumped in with them anyway.


Actually it reminded me of http://xkcd.com/981/

But I do remember seeing /usr/share/bin and /usr/share/local/bin and thinking wtf... But then I play around too much with my OS to reliably blame it on the distro...


You forget my favorites: /usr/libexec and /usr/lib/$progname


Well, first let me extend your explanation of the rationale for /usr/share: the reason for having a place to store not-executable shareable data is that in some network environments, it helps to have to maintain only one copy of this data, on some file server somewhere, which then gets mounted over the network to every machine. We need to keep /usr/share separate from /usr/bin, the rationale continues, because our network might be heterogenous in executable formats. In other words, some networks cannot maintain only one centrally-maintained copy of /usr/bin because some machines need, e.g., the x86 version of /usr/bin and some need the ARM version. In contrast, every machine can mount the same version of /usr/share.

Now suppose you are an admin for a network in which the numerical software foocode is a big deal. The software contains a large read-only database of numbers, which according to the logic of the FHS goes into /usr/share/foocode/. Now suppose that some machines upgrade to a new version of foocode which for performance reasons stores integers in some specialized format, not the ones-complement format that has become something of a standard. So now, if we continue to apply the logic that motivated the creation of /usr/share in the first place, we need a place to put numbers of the new format so that they are kept separate from the numbers in the old format. Thus /usr/share/ones_complement/foocode and /usr/share/new_format/foocode are created.

Suppose further that yet another version of foocode is released, and again that some machines are upgraded to it and some are not. This new version introduces a new, more performant format for the database in which the numbers are stored. Well, if the old format is called the 'hyperbolic' format and the new format is called the 'elliptical' format, then the logic that led us to create /usr/share leads us to create /usr/share/ones_complement/hyperbolic/foocode, /usr/share/ones_complement/elliptical/foocode, /usr/share/new_format/hyperbolic/foocode and /usr/share/new_format/elliptical/foocode.

My point is that at some point you need to move to some way of assigning 'attributes' (and "read-only, shareable, non-executable" would be an example of an attribute) to files in some way other than putting those attributes in the name of the file. It would have been better for that point to have arrived before the FHS caused programmers around the world to have to type /usr/share/emacs/23.3/lisp 500 million times when /emacs/23.3/lisp would have done.

What would this other way of assigning attributes to files be? A detailed explanation would be too many words for a HN comment. For the specific problems mentioned in this comment, namely, centrally-adminned file servers, you probably make your net-mount command more complicated like they did in Plan 9.

ADDED. div and mod are well-known numeric functions, right? The mere fact that some group of people would like some convenient way to refer to all the numeric functions does not justify requiring every programmer to write numeric/div and numeric/mod (or numeric::div and numeric::mod) every time they want to use or refer to one of those functions. I say the same argument applies to /usr and /usr/share.


I very much appreciate your added comment. I feel like banging my head against a wall every time I see a new programming language that boasts a 'hello, world' along the lines of:

  System.Console.WriteLine
or

  System.out.println
or

  Ada.Text_Io.Put_Line
just to get a little text output.


FYI, the linked post completely contradicts what you wrote.


`quote'


> I'm still waiting for /opt/local to show up...

Wait no longer: http://guide.macports.org/


If someone else didn't say this already, I was going to. Although, I've got to admit that Fink's solution of /sw isn't much better.


As someone who comes from a Windows (and further back, OS/2) background, the directory structure of -nix systems is baffling. It's really interesting to read the background on why it came to be structured a certain way, but I feel that the current structure doesn't jibe with how we use computers in a modern way. The structure seems to be optimized for single-file command-line applications and is not well suited to today's much more complicated GUI applications.

Mac OS X, I think, has done a decent job of structuring the file system to be more user friendly despite the -nix background.

Modern use cases typically revolve around either installation and use of specific applications, often with dozens or hundreds of files needed, and data storage. In Windows/Mac, applications (for the most part) are installed into their own individual application folder and a GUI (as opposed to the PATH) is used to provide easy access to the application. This makes it easy to 1) know where to put a program you're installing, 2) know how to locate a program after install, and 3) keep all the application components in a single place for easy move or removal.

In my somewhat limited experience with Linux, I find that the complicated nature of the file system makes package management systems necessary to simply keep track of where all the files are: executable in /usr/local/share/bin, configuration files in /etc, libraries in /usr/lib, and I'm not sure where non-binary resources of an application get stored.

I once installed my favorite browser (Opera) in Ubuntu. It didn't make a desktop or Application menu icon for some reason, so I figured I'd just go make an icon to point to it. It took quite a while just to figure out where the executable was.

This could very well be one of the reasons that many people find Linux on the desktop difficult to use. They don't understand where anything goes.

I hope that one day one Linux distribution will at least step up and consider restructuring the file system to be more friendly and straight forward and to take advantage of the availability of long file names.


I come from a DOS / Windows background, but now use Cygwin as my primary interface on Windows, and also regularly use Nexenta (Debian-flavoured Solaris) and Linux. I find the Unix way more understandable, particularly when things break.

Only the most simple of Windows applications get away with putting everything into a single folder under $PROGRAM_FILES. More usually, lots of stuff is in $COMMON_FILES, and may or may not be shared. And these common files are tied in to the program files via a giant complicated global variable, the registry; the program looks up ProgIDs and CLSIDs to instantiate COM objects, so the registry becomes a very sensitive source of failure, particularly when you have different versions of $COMMON_FILES that break the various linkages.

Furthermore, these days Windows Installer is often used to manage the Windows equivalent of packages. These too are sensitive to corruption; if you end up missing the MSI files from c:\windows\installer for one reason or another, you can easily get into a situation where you can't even reinstall an application cleanly, because the installer notices that you already have it installed, and tries to uninstall it first, but fails (usually quietly, in the middle of the installation, where the progress bar goes into reverse and at the end you get a cryptic and unhelpful error message). To fix this, you must venture forth once more into the registry, find out the cryptic (hex) name for the relevant package, and surgically excise it. Not even the MSI/MSP/etc. files have meaningful names. It's tortuous.

All that said, I by far prefer working in Windows to any other OS. If it had a faster performing yet fully POSIX compliant layer as well integrated as Cygwin (i.e. not off in its own parallel little world like SFU/SUA), it would be even better.


"and a GUI is used to provide easy access to the application" How do you think the GUI finds the application? There is a PATH. Open up a cmd and type the name of any installed program.

1) Programs put themselves in various directories on my Windows 7 machine: some are under \Program Files, some are under \Program Files (x86), some are under \Programs, and so on, variations and variations.

2) To locate a program you need to know its corporation, which people often don't know. There are setup.exe, start.exe, fireup.exe and run.exe and variations; which one should the user use to start the program?

3) All the application's components are not in a single place but are spread out over \Program Files, \Windows\system, \Windows\xx\xx and the registry at the least. Then they put stuff in \Users\Application Data\ and \Users\xx\Applications or whatever.

You and many people don't understand where anything goes because, unlike Windows, GNU/Linux systems actually make sense and actually have a structure (one that is different from Windows) which you need to learn to operate the system, just as you learned to navigate the mess of Windows.


I thought many of the shortcuts in Windows used full paths; I'm pretty sure not all of them can be accessed by name in cmd. Looking at the .desktop files under /usr/share/applications (Ubuntu), most execute by name, but some, especially games (both third party and some GNOME games), use full paths.


In Windows from a cmd.exe prompt you rely on the contents of %PATH% to access programs. This is a holdover. In Windows proper there are other methods; from cmd.exe you can use the 'start' program to invoke executables in a Windowsy way. The most convenient/reliable way to set up an easy to use/remember way to launch an executable in Windows is to go to the registry and locate hklm/software/microsoft/windows/currentversion/app paths/, inside of which are a series of keys. Each key is the name of an alias and its default reg_sz value is the path to the executable to invoke when the alias is passed to 'start' (or entered into the run prompt, or what have you). You will find a number of these already in existence on any Windows box, most notably iexplore.exe.
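As a sketch, registering and launching a hypothetical editor that way from cmd.exe would look something like this (the alias and path are made up):

    reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths\myeditor.exe" /ve /d "C:\Tools\MyEditor\myeditor.exe"
    start myeditor.exe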


While the system is, indeed, complicated, there is a reason changing it has not been Ubuntu's highest priority: almost all installation is done via the package manager. I haven't used Ubuntu much, but at least on Fedora 95% of what I use is in the repositories and the rest is available as an rpm file which installs itself. I have never had to worry about where to put executable files.

Of course, I have installed things manually from source. But when I do, I only install them locally in my home directory.

So the complexity is there, I just haven't had to deal with it.

Also, I think that GUI-centrism is short-sighted, but that's another matter altogether. In short: command-line tools are just as "modern" as GUI tools; they're just harder to learn but tend to be more powerful.


As a user, this may well be tolerable. As soon as you start developing for Ubuntu (or any other OS), these kinds of things will make you suffer.


I learned Linux pretty well before I learned Windows (I was an early adopter) and I felt equally baffled by Windows at the time. I felt nothing made sense. :)


I don't think either make any sense. I guess you could give Windows the nod for not really even making any pretense of making sense (my favorite detail is how 64-bit system files go in "System32" and 32-bit system files go in "SysWOW64"). OTOH while I never got the /bin /sbin /usr/bin /usr/sbin thing, I always vaguely assumed there was some deep reason behind it that I just didn't understand.


For future reference, you can list all of the files associated with a package using this:

    dpkg-query --listfiles package-name
Or use Synaptic's 'Installed Files' tab in the package detail dialog.


I often find myself looking for the opposite mapping; to which package does /usr/share/foo/bar.blob belong?

   $ dpkg --search /etc/apt/
   debconf, apt: /etc/apt


I prefer "apt-file search" which is much faster than "dpkg -S" at the cost of some disk space and having to periodically do "apt-file update".


apt-file search is [also] for uninstalled packages. If all you care about are packages actually installed use dlocate


Or:

    dpkg -L package-name


I still have various partitions: /, /var, /usr, and /tmp (on a single slice). When I am in single user mode the only binaries I have available are in /bin. Unless I mount /usr that is all I have access to, so the split still makes perfect sense.

A lot of Linux distributions by default suggest using the entire disk and creating a single partition named /. In that case it doesn't make sense to have the various different locations since mounting / means you have /usr/bin as well.

I don't want a user being able to fill up the hard drive, stopping me from writing my logs, stopping me from logging in, or various other things (yes, I've filled up my / partition at one point and was unable to log in because SSH was failing to log something or other). There are also security reasons, and being able to set various security flags on mount makes it easier to secure a machine as well (such as noexec on /tmp and/or /var).
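For illustration, a sketch of that kind of layout in /etc/fstab (the device names and the particular choice of flags are arbitrary examples):

    /dev/sda1   /      ext4   defaults                       1 1
    /dev/sda2   /usr   ext4   defaults,nodev                 1 2
    /dev/sda3   /var   ext4   defaults,nosuid,noexec         1 2
    /dev/sda4   /tmp   ext4   defaults,nosuid,nodev,noexec   1 2

A runaway log or a user filling /tmp then stays confined to its own volume instead of starving /.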


If this proposal ends up being widely adopted, you're only losing the ability to keep /usr on its own filesystem. You still can (and very much should) keep /var and /tmp on dedicated volumes. (And should probably also symlink /var/tmp to /tmp, so that you don't have a world-writable directory on the filesystem you want to keep safe for writing logs to...)
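If you go that route, the change itself is a one-liner (run once, as root; note the persistence caveat raised in the replies below):

    rm -rf /var/tmp && ln -s /tmp /var/tmp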


'Nix allows mounts at any arbitrary point in the filesystem hierarchy. While you could symlink /var/tmp to /tmp, you could also dedicate a filesystem to it. Given that the strict definitions differ (/var/tmp does persist across reboots, /tmp may persist), you risk annoying/disappointing someone at some point if you do otherwise.


I am sure I have used systems with tmpfs /var/tmp.


In that case they were not FHS compliant.


Except that /usr is where /usr/local lives (all software installed through the FreeBSD ports tree) and I don't want that on my / partition, so now I have to create a partition just for /usr/local? How about the various other files under /usr, they now have to be moved to / as well? On FreeBSD /usr also houses /usr/home. /home/ is a symlink to /usr/home!

/ on a FreeBSD system is generally kept small. It doesn't have softupdates on when using UFS2 since it is also the location where your kernel lives, along with rescue utilities and your main /bin. Nothing else.

The proposal as put forth is misguided (talking about Fedora's in this case) and I don't think that the system should be re-engineered just because that is the default use case these days. I remember when I first started using Linux and I created a separate /boot, /, /usr, /home, and /tmp. Just because udev and others now fail to work correctly doesn't mean it isn't a good idea to keep those separate...

This is just going to cause more trouble, more fragmentation between Unix/Unix-like systems and Linux and its various distributions.


You can make a good case that /usr/local should be separate from /usr, whereas /usr belongs on the same partition/slice as / on FreeBSD. /usr is the whole system as maintained by FreeBSD, whereas /usr/local is where your ports are installed. /usr is going to grow slowly; /usr/local could get huge. Separate /usr/local out from /usr on a different disk.

FreeBSD recommends softupdates on all filesystems, including /.

Also, the LSB doesn't really apply much to the BSD's.


The defaults for FreeBSD are / as UFS2 with no softupdates, see picture 2-19 in http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/in... for an example.


It has been a few months since I had to administer a FreeBSD machine, but personally since at least FreeBSD 7 I have had softupdates enabled on /.

The installer may still not have softupdates on, but the installer is very very conservative about defaults, and probably hasn't changed since, well ever. However, from the handbook http://www.freebsd.org/doc/en/books/faq/disks.html#SAFE-SOFT...

"9.4. Which partitions can safely use Soft Updates? I have heard that Soft Updates on / can cause problems.

Short answer: you can usually use Soft Updates safely on all partitions.

Long answer: There used to be some concern over using Soft Updates on the root partition. ..."


Given today's disk capacities it rarely makes sense to create any partitions whatsoever within a disk other than for swap. Partitioning /usr, /var or anything else on the same disk only increases the chances of one or more partitions filling. Admins and OSs who create intra-disk partitions without an explicit need are operating on legacy superstition.


If /tmp fills up, things start mysteriously not working (a surprising number of things create temp files, including things you wouldn't expect to), so it needs to be on a partition that won't fill up. I usually put it in its own partition, so that accidentally filling up / won't cause mysterious problems.

/var should go on a separate partition so that the log files don't fill up the partition that /tmp is on. A server can easily generate large log files quickly, and it would be unfortunate if that caused the server to fail in weird ways at inconvenient times.


> If /tmp fills up ... so it needs to be on a partition

Makes no logical sense. When you create an (intra-disk) partition you reduce the blocks available to all remaining partitions on that disk. By reducing partition size you _increase_ the probability of all remaining partitions filling.

If you had said adding disks for /tmp and /var were indicated I might agree, but would still be wondering what kind of applications you have.

Looking back at Unix's history we see that partitioning was only added before RAID and only to accommodate additional disks. It is the failure to understand this history and rationale that leads certain sysadmins to believe that intra-disk partitions accomplish anything other than reducing robustness (in the absence of a badly behaved application, which is 99% of the time a badly configured server or application, per the log rotation example).


Partitioning has some security and performance gains: different filesystems for different workloads (e.g. perhaps XFS for database files and ReiserFS for /tmp), and mounting with noexec or ro in some cases.


Under the proposal, you can still keep /usr on its own read-only filesystem. The difference is that it has to be mounted in single-user mode.


Much of the original traditional Unix file system hierarchy is basically redundant and unnecessary in the modern age. For a good overview (from the author of a Linux distribution which departs completely from this tradition), see:

http://www.gobolinux.org/index.php?page=doc/articles/clueles...


Oh man... if only we had all statically-compiled Linux systems these days. Sure, it'd be a pain to deploy changes in libraries, but fewer dependency-breaking consequences mean you can push a patch to a single application without testing a whole suite of dependent apps.

The really hacky solution to that seems to be building versioned packages in versioned directory paths (e.g. "/opt/lib/db/4/4.2/4.2.52/libdb.so"), messing with linker paths, and creating a sprawling tree of symlinks and wrappers for weird use cases. With a custom package manager it works really well: run 6 conflicting versions of the same library and just build apps against the library you know works, instead of fighting to get everything running on one compatible library.
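A sketch of what building against one pinned version looks like with that kind of layout, reusing the example path above (the program name is made up):

    # link app against the known-good libdb and bake its location into the binary
    gcc -o app app.c \
        -L/opt/lib/db/4/4.2/4.2.52 -ldb \
        -Wl,-rpath,/opt/lib/db/4/4.2/4.2.52

Each application then resolves its own copy at run time, independent of whatever /usr/lib happens to contain.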


And the exactly corresponding security problem: when a problem is found in a library, you'll have to update all your applications, not just the library; otherwise you'll have an insecure system.


> Oh man... if only we had all statically-compiled Linux systems these days.

Would you mind buying me a memory upgrade when this happens? I'll need one.


I honestly have no clue how much extra memory this would cost. I can see it being a big issue on embedded systems. But, with 512 megs being considered very low on modern desktop/server systems, I always thought the vast majority of recent memory use was data rather than code.

Roughly how many processes are you running? Can anyone give a wild-assed guess how much memory it takes to boot and load Gmail in Firefox on a statically linked Linux?


You shouldn't see that much difference, at least on a system designed for static linking (modern GNU/Linux is explicitly not such a system, see http://www.akkadia.org/drepper/no_static_linking.html). There's still quite a bit of potential for interprocess resource-sharing.

If you somehow managed to get a statically linked Firefox (I think it would be very difficult, given the degree to which Drepper has abandoned the idea) on a modern Linux system, though, the resource usage would admittedly probably be quite impressive.


You'd need a bigger hard drive rather than more memory: dynamically shared libraries are paged-in to be used just as statically linked libraries are and are also paged-out when not in use, in just the same way.

Thus, to run a dynamically linked program you need to use the same amount of memory as you would a statically linked program.

However, if you are lucky (or have enough RAM), there is a chance that you won't have to load the page containing the library as it may already be in memory.

So there is a valid argument that to take full advantage of dynamically linked libraries you need more memory than for statically linked libraries.


While not exactly what you are proposing, NixOS does something similar. It's not perfect, but it's far from hacky.

http://nixos.org/


Let's not forget the whole partition split situation due to the 1024 cylinder limitation. Once upon a time, you couldn't get to certain parts of the disk from your bootloader (using BIOS calls), so you had to make something like a tiny /boot which would stay below cylinder 1024.

This situation has only improved a little. There are still lingering bits of it here and there, depending on how deeply you poke and which distribution you have installed.


It annoys the hell out of me that people are still imposing the 486-era 1024-cylinder and 8GB limitations in 2012.

ArchLinux, for example, still really, really wants you to make a /boot.


There are plenty of good reasons to make a /boot. Encrypted laptops for example.


I ran into problems when /boot was part of the larger xfs filesystem, so had to create an ext3 /boot instead.


And in general, if you want to use any filesystem that your operating system supports but your bootloader does not, you need a /boot


Excellent point, I forgot that use case.


People who are serious about having an encrypted system should have absolutely none of the bootstrap process residing unencrypted on the disks of that system because somebody could take out the drive, look through the boot process, and log your passphrase.

I have an encrypted laptop that boots from a read-only USB key that is attached to my keyring. It will only boot from this keyring (and a backup CD-R that I have), and the system and the boot media are never stored together. Before USB keyrings became common, I would have the boot media be a CD-R.


These limits are chiefly imposed by the BIOS (int 13h) calls available to a boot sector program (the 512 bytes of machine code you get inside the master boot record of a disk), and require a lot of cleverness to escape from.


There's an interesting piece of advice at the bottom of this post - the author symlinks /bin, /sbin, and /lib to /usr/whatever. Anybody else have an opinion on that practice? It's kind of unnecessary, but it also doesn't break anything.


That's an oversimplified version of what the next Fedora is going to be doing. http://fedoraproject.org/wiki/Features/UsrMove

The Fedora move is why this link got posted and is getting upvotes.


While they're at it, they should move /usr/src somewhere else (/var/src?)


There are a variety of tools that live in /bin but are symlinked in /usr/bin, at least on my 10.04 LTS box:

  $ for f in /bin/*; do [ -e "/usr$f" ] && echo /usr$f; done;
  /usr/bin/dumpkeys
  /usr/bin/ksh
  /usr/bin/less
  /usr/bin/lessecho
  /usr/bin/lessfile
  /usr/bin/lesskey
  /usr/bin/lesspipe
  /usr/bin/loadkeys
  /usr/bin/mail
  /usr/bin/nano
  /usr/bin/tcsh
  /usr/bin/touch
  /usr/bin/which
  /usr/bin/zsh
This has the potential to blow something up during the installation procedure if it is not carefully crafted to realize that these files are the same and to ignore the link failure.


It would break FreeBSD's single user mode when using the defaults during installation for partitioning.

FreeBSD's default install creates: /, /usr, /tmp, /var and swap space.

When you boot in single-user mode all you get is / and nothing else. If everything was in /usr you wouldn't be able to mount /usr ... :P


Presumably if FreeBSD went down the same path as Fedora, they could also change the default partition layout.


Wouldn't that interfere with the core BSD system vs. package binaries/libs paradigm?


Packages/binaries/ports tree installs into /usr/local/.

See man hier [1].

For example, on a FreeBSD install, it is perfectly safe to rm -rf /usr/local/. Your system will still boot without issues.

[1] http://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&...


Which I find is actually quite problematic; "make install" by default puts things into /usr/local, and I would prefer to distinguish between things I've installed and things ports have installed. (Specifically, I need to build my own mplayer, but I also need the ports version installed as some other ports depend on it).


Feel free to change the prefix passed to the packages you install by hand. /usr/local is the default used by almost all packages I am familiar with, if for example you want /opt you have to modify that yourself.


It drives me crazy every time I'm on a Linux system and ifconfig is in /sbin, and not on users' PATH, even though the no-argument form works perfectly fine as a user.


use 'ip' instead, ifconfig is deprecated. 'ip addr show' is the simple replacement for 'ifconfig', but ip is pretty flexible in what it can do.


Why? What exactly does ip bring to the table that ifconfig doesn't have? Why am I required to learn YET another tool to do something that ifconfig has no problems doing?

On Linux, to configure a wireless device you have iwconfig and ifconfig, and you have to use each in a different order to get it to work. Whereas in FreeBSD I have ifconfig and it does all of it.


Most newer advanced networking is unavailable in ifconfig. ifconfig and route are now just there for backwards compatibility. The ip tool lets you do all of what ifconfig and route can do, and more.


I would typically agree with you; but the thing that I have really enjoyed in the transition to iproute2 is that ip handles all of: link state, address configuration, arp, tunnels, routing, transformation and a few other obscure things. It brings these together into one tool and unifies the argument syntax for all of them, I find it much easier to play with the network stack using 'ip'.


Ifconfig will fail completely in some cases.

Eg, an alias on a vlan on a bond.


I don't know if I'd prefer a non-portable way to do the same thing :)

Does 'ip' have a form that just prints the current addresses without all the surrounding cruft? That would be pretty useful for the occasional shell script.


'ip addr show dev eth0' will show you the cruft just for eth0, but I'm not sure if there's a way to limit it to just listing the ipv4 or v6 addresses.


just add -4 or -6:

  ip -4 addr show dev eth0
  ip -6 a s eth0


I've been liking the new location /srv, where you stick things that are custom to that machine. No more trying to guess where the previous admin thought files should go (/usr/share? /usr/local? /usr/foo?).

http://www.linuxtopia.org/online_books/linux_beginner_books/...


You don't stick things that are custom to that machine in /srv; that's not what the article you linked to says.

It's for serving using various services, so you would have /srv/http, /srv/ftp and so on; you don't put packages/programs into /srv.


Looks like I might be misusing it then. I've been migrating some old spaghettied servers to single-use VMs recently, each server has had almost half a dozen admins all with their own weird ideas. The idea of saying 'hey, all the custom stuff is in this directory' for future admins is seductive. Basically I'm replacing half a dozen admins' peculiarities with one admin's peculiarity...


I think that the motivations for shared libraries were once valid, but these motivations are obsolete, and they're destructive. I think there's a lot to be said for the simplicity and reliability of static linking.

There is an argument that it's nice to be able to upgrade libraries and have everyone pick them up, but in practice that's mostly a myth because such upgrades are vulnerable to nasty small failures. Another argument is that it saves hard disk space - not an issue these days (perhaps it is in a small number of embedded systems still).

Any good counters to that?

Update: thanks for response. I found the link to the Drepper article linked elsewhere in this thread v informative also.


Well, consider GUI libraries like Qt, GTK or Cocoa. Cocoa in particular does not have any concept of a user-modifiable "theme" or "style". However, it changes its looks with OS upgrades and all GUI applications change with it.

With static linking, that would clearly not happen.

Also, these libraries/frameworks are not exactly small, not even for today's hard drives, so the de-duplication does make some sense (and that is not factoring in the smaller size of SSDs). That said, I guess you could make this into an argument for filesystem-level deduplication instead of dynamic linking.


Hard disk space isn't an issue, but internet bandwidth is. Also, being able to fit a fairly complete operating system on a CD.


great sig:

GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem Forever, and as welcome as New Coke.


The title isn't quite correct. He explains /bin vs. /usr/bin, but not /bin vs. /sbin.

My understanding for the latter is that /bin is "normal stuff", while /sbin is system maintenance. But, hey, maybe that split is actually there for obsolete historical reasons, too. Does anyone know?


My understanding is that bin is used for programs that do not require superuser privileges, whereas sbin programs do.


And the reason for splitting them is so that normal users don't have the super-user programs in their $PATH, since they would be useless there. Of course, this is arguably obsolete now since the advent of sudo, policykit, and so on.
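To make the split concrete, typical defaults look something like this (illustrative values only; the exact paths vary by distro):

    $ echo $PATH                  # ordinary user: no sbin directories
    /usr/local/bin:/usr/bin:/bin
    # echo $PATH                  # root: the sbin directories are included
    /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin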


For some reason traceroute is in /sbin, but it works for normal users. Probably another historical accident (did it used to require root?).


Conventional (that is, without capabilities) traceroute and ping were setuid root because there was no provision for emitting ICMP packets in the network APIs and only root could assemble raw packets.


The particular way traceroute runs is implementation-defined. Some use ICMP echo request packets (ping), which you have to be root to send on most Unices. Most of them, however, use UDP packets instead.


"The advent of sudo"...

As someone with only 2 years Linux experience, this blows my mind


Cargo cult driven development


So, /usr really means "user", and the "Unix System Resources" acronym was put together afterwards. Interesting, thank you !


I don't think I have ever heard that acronym before. I don't think it is common usage.


I've heard "user shared resources" which is similar. I'm glad "usr" meant "user", though, I can go back to naming my home directories on Windows "usr" instead of "home" :)


Everyone's talking about initramfs as if it would replace a self-contained /. Have you ever been there? Usually all the relevant repair tools are missing and the shell gives you a headache. It's the point where you usually give up, walk into the office and boot a rescue disc.

So no, it does not replace a working /.


Why did the Fedora team choose to move /bin -> /usr/bin etc. instead of moving stuff out of /usr into root (/usr/bin -> /bin)? What is the point of having a /usr directory when there is no separation between stuff in /usr and stuff in root?


If you want to mount system directories read-only or over a network, it's easier if you put them all in a single mount point. This way you can just mount /usr, instead of having to mount /bin, /lib, and /lib64 separately.


So /usr is now the directory where all programs installed by the system or package manager get installed. That actually does make sense.


Directories that are based on objective criteria don't have this problem.

For example /dev is defined by objective criteria, and thus there's not much argument to what goes into /dev. We should only have core directory structure defined by objective criteria.


Except for /dev/MAKEDEV (required to be there by the FHS and for hysterical raisins). And then there was all the crap udev used to put in /dev/.udev/, which has thankfully migrated to /run.


They start like that, like /proc did, and then get other stuff.


Glorious.

More history bits like this please.

At the same time, makes you think about dropping FHS .. yeah I said it.


A distribution that does just that is NixOS (http://nixos.org/nixos/). It has to, because it's purely functional (new stuff shouldn't overwrite any of the old stuff) and it has to support several different versions of the same package being installed at the same time.

It is awesome, especially the easy rollback and that you can specify an entire system with a recipe. Still, it feels strange for us who are used to mainstream Unix systems.


GoboLinux does it too.

And for that matter, another "unixy" OS uses "bundles" to keep all of an app's files together instead of splattering them all over the OS: Mac OS X.


OS X isn't just "unixy", Snow Leopard is UNIX certified...


Funny I just installed my first nixos yesterday. They indeed can move away from FHS even if they support it.

As said before GOBO does that too; there was a handful of distros at max.


And why / /usr split still makes sense:

If it's needed to boot, it goes in root: boot images (including root filesystem) can be initrds, bootp images, flash sticks, or other similar tools. Maintaining the discipline of keeping what you need in / and what you don't need to boot in /usr helps when you're trying to minimize boot images, troubleshoot, and/or just simply keep things comprehensible.

Different partitions can be mounted differently: There are still a few things in the root FS which are written periodically, especially in /etc. By contrast, /usr is largely static. They can be mounted writeable vs. read-only (ditto /boot, BTW). Root may require device permissions. Both require suid (but /home doesn't). For various degrees of security and self-inflicted foot-gunshot incidents, mounting with minimal permissions can be useful.

Not all bootloaders handle all filesystems and storage: Applies more to /boot, but particularly for exotic / networked storage, ensuring that early-stage bootstrapped filesystems are accessible with a minimum of fuss can be useful.

The arguments from Fedora about the ability to manage a system from within an initramfs are particularly amusing given RHEL's traditional use of a non-interactive, script-only shell: Yes, that's right, you can't exit out of the initramfs shell to do maintenance. Debian's 'dash' shell is not only smaller than the RHEL equivalent, but supports interactive use. Go figure. (Apologies if this has changed recently but it was true as of the past year or so).

Shared/network mount purposes: A read-only, shared /usr filesystem can be used and accessed by multiple systems. Maintaining the root /usr split ensures that local system commands (if necessary) can be provided independently of the shared bits.

While the origins of /bin vs. /usr/bin lie in what are now largely irrelevant disk capacity constraints, there are a number of reasons why maintaining the split continues to make sense. As has been noted, a fair bit of hierarchy persistence is on account of differentiating between differently-managed packages at different parts of the system. As the guy who gets to come in, comprehend, rationalize, and clean up systems afterward, I can assure you that a logical ordering and segregation does help markedly.

For distros with a decent package management policy and toolset, there's no particular problem to maintaining this. $PATH variables already make the end-user impact essentially nil.

For those who wish to combine things, union mounts or symlinks can certainly be used, again, with little or no end-user impact. For some embedded/small systems this makes sense. There's no reason to force one-size-fits-all on everyone, however.

I'm also generally opposed to arbitrarily adding top-level directory trees. The naming rarely stays consistent over time (business unit / institutional name changes are notorious). And it tends to complicate matters especially concerning backups and where essential local data lives.

Tempest in a teapot.


For those who haven't seen the proposal for the / /usr merge:

"Fedora (and other distributions) have begun work on getting rid of the separation of /bin and /usr/bin, as well as /sbin and /usr/sbin, /lib and /usr/lib, and /lib64 and /usr/lib64. All files from the directories in / will be merged into their respective counterparts in /usr, and symlinks for the old directories will be created instead"

http://www.freedesktop.org/wiki/Software/systemd/TheCaseForT...
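On a system that has done the merge, the old locations simply become symlinks into /usr, roughly like this (illustrative output):

    $ ls -ld /bin /lib /lib64 /sbin
    lrwxrwxrwx. 1 root root 7 ... /bin -> usr/bin
    lrwxrwxrwx. 1 root root 7 ... /lib -> usr/lib
    lrwxrwxrwx. 1 root root 9 ... /lib64 -> usr/lib64
    lrwxrwxrwx. 1 root root 8 ... /sbin -> usr/sbin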


Thanks. This seems like one of those things that, while it might be sensible, wouldn't provide much of a win, so why bother with the mess?


The feature implementation and details have seen some refinement, so it's much less of a mess in its current state (e.g. the directories are symlinked, not the files): https://fedoraproject.org/wiki/Features/UsrMove

You get wins on packaging (don't need to worry about where to scatter executables, although this doesn't affect many packages), as well as "all software is under /usr", so everything the package manager touches except config file updates is under a single directory. This makes both centralized updates and pre-update snapshotting much more feasible.

I'm really looking forward to this feature. With appropriate support from the file system and yum, it seems like it brings us one step closer to truly transactional multi-package system updates/software installations. Yes, some of this was already possible with snapshotting support in a plugin for Yum, but I think the consolidation will likely make implementing such features simpler and more robust.


Probably because fedora has a six month release cycle and they feel the need to make some kind of revolutionary change on every release.

Of course, /bin will never be removed since there are millions of scripts depending on /bin/sh in the wild. So they'll basically end up having to maintain this symlink spaghetti for a long time.


I agree with that 'symlink spaghetti for a long time' comment. It seems harsh to Fedora to say they feel a need for a revolution. I really don't follow release cycles/Linux versions... close enough... but my upgrade experiences so far (Fedora 8-13 vs Ubuntu 8-11.10) indicate it's Ubuntu that feels a need to make revolutionary change, i.e. I have gone wtf more times searching for some custom setting in Ubuntu's menus than in Fedora's.


I'd rather see /lib64 being collapsed into /lib. I never quite got why they have to exist. If you really need to support 32-bit legacy libraries on 64-bit machines (a big if) I'd much rather see a lib32 folder instead. Then it would be much cleaner to just kill it when it no longer makes sense.


/opt/local is already used (by MacPorts).


The Fedora changes are bikeshedding. They start with wanting to change something, and then find a justification. Why not just put all the executable files in /Program Files/?

All this to save a few bytes in $PATH, to avoid problems with systemd, and to avoid fixing udev.


That would break compatibility with a lot of existing scripts, which look for things in /bin, /lib, /usr/bin etc.

There are specific reasons why having a single mount point for all OS-provided binaries is a good idea, for example because maintaining a split that differs across distros and unixes is a maintenance nightmare. The main counterargument -- that it is desirable to have a minimally functional system without /usr mounted -- has been obsolete for years.


Why not put all .txt files in /txt? We have .../man for man pages, .../lib for libs, .../src for source and .../include for C includes, after all.


I know, I know! Why not let the extension indicate which package a file belongs to? Then we could have /txt/readme.GTK


Way, way too long. Go read the FHS:

- If it's needed to boot the system, it belongs in /

- Binaries for normal users are in bin, system (i.e., root user only) binaries are in sbin

That's all.


How about we stop mincing around and make some gut-wrenching modifications? As long as you're going to go through all the trouble of shuffling things around let's kill more than one bird at a time. Let's not worry about legacy needs, let's just worry about current needs. We no longer care about disk space and (for the most part) things can be tab-completed, so there's little reason to keep anything small if there's a down side.

What's attractive about /usr? A lot of things, but mostly: single export-point, possible to separately mount (from a network, read only, whatever), logically nice to have all those directories not polluting /.

I propose that / should contain the following directories:

/cfg/ /home/ /local/ /mnt/ /system/ /tmp/

You would mv /boot /dev /bin /sbin /var /root /proc /sys /usr /lib /run /system, then mv /opt /local

/etc/ is renamed /cfg/ just because I can and left in / because some things really are global configuration (and we can't be mounting /etc ro all the time).

But don't stop there, because now /system/ is a mess. You obviously still can't treat /system as /usr because e.g. /dev is there, so put /dev, /proc and /sys under /system/kernel, because these things are figments of the kernel's imagination anyway. Under /system/boot/ throw in a directory for your bootloader, initrd and whatnot and one for any statically-linked binaries you have, if you have any (hey, it's optional). No need for a lib, because it's all static or in the initrd. Just because I'm a mean curmudgeon who hates greybeards and loves n00bs, let's mv /system/var /system/data. Inside /system/usr let's merge files from games, bin and sbin into bin and get rid of the empty directories. Then mv /system/usr/local/ /local/usr/.

Speaking of /local/, it'd have two subdirs: /local/opt/ and /local/usr/. The former would contain an opt-style directory hierarchy and the latter would be like /system/usr, only with the purpose of the FHS /usr/local. Okay, so the /local/ stuff isn't to be found in an exported /system/usr/, but /local/ is just a fetish of mine. You could put it in /system/usr/, too. And yes, /system and part of its structure is required to be on / during boot. So sue me.

Now you have:

/cfg/ /home/ /local/ /local/usr/ /local/opt/ /mnt/ /system/ /system/data/ /system/boot/loader/ /system/boot/sbin/ /system/kernel/dev/ /system/kernel/proc/ /system/kernel/sys/ /system/root/ /system/run/ /system/usr/bin/ /system/usr/lib/ /system/usr/share/ /tmp/

Now you have a tree for homes, a tree for the whole system, which has a splits where you might need them for partitioning and exporting, a tree for configuration, which also has a name users can understand, a tree for non-packaged software, which allows for both crazy-opt style layout and traditional, and you have your global tmp and temporary mount point root.

Did I leave anyone out?


Reason #9364 why I am soooo glad I stick with FreeBSD.


How does FreeBSD handle this?



This email is clearly wrong. Anyone who knows anything about Unix knows that the world didn't start till 1970, so all this stuff about things happening in 1969 is clearly impossible.





