GoboLinux (gobolinux.org)
315 points by frabert 72 days ago | 178 comments



Wow, this model addresses a lot of the issues I have with different Linux distributions. Mainly, finding where the heck it installed that program in the maze of file/folder locations. Sometimes it's etc, opt, var/lib, bin, usr/bin, apps... Gwahhhhh!

Then, you go to modify the configuration, and it turns out that the configuration file is being copied and generated from a different location. It's endless misdirection which changes in each distribution based on the package manager.

I think I may try out Gobo the next chance I get. This is pretty cool.


The typical convention these days is:

- /usr/ is read-only data. /usr/bin/ contains OS-supplied binaries, /usr/lib/ contains OS-supplied libraries.

- /var/ is read-write state data and will persist between boots. /var/log/ is for log files.

- /run/ is read-write scratch on a tmpfs, and will be cleaned up between boots. So, UNIX sockets and other temporary files go here.

- /etc/ is per-system configuration overrides. There is a goal to move to "empty /etc/", meaning you should be able to remove all files in /etc/ and still have a functioning system.

- /opt/ is "vendor space". The OS should never touch /opt/.

The names are weird, but that's the intention behind the new scheme. If you want to change configuration, the default config might be shipped read-only in /usr/, but it should be overridable in /etc/. This is why systemd ships read-only unit files in /usr/, which are symlinked into /etc/ to be "enabled".
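That enable-via-symlink mechanism can be sketched in a scratch directory. The layout below mirrors systemd's paths, but the `demo/` prefix and the unit name `foo.service` are made up for illustration:

```shell
# Recreate the /usr (read-only defaults) vs /etc (overrides) split in a sandbox
mkdir -p demo/usr/lib/systemd/system \
         demo/etc/systemd/system/multi-user.target.wants
printf '[Unit]\nDescription=demo unit\n' > demo/usr/lib/systemd/system/foo.service

# "systemctl enable foo" essentially boils down to creating this symlink in /etc/
ln -s ../../../../usr/lib/systemd/system/foo.service \
      demo/etc/systemd/system/multi-user.target.wants/foo.service

# the link points back into the read-only tree
readlink demo/etc/systemd/system/multi-user.target.wants/foo.service
```

Disabling is then just removing the symlink; the shipped unit file in /usr/ is never touched.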


> /etc/ is per-system configuration overrides. There is a goal to move to "empty /etc/", meaning you should be able to remove all files in /etc/ and still have a functioning system.

I find this goal to be rather frustrating.

It means that software will hardcode its default configuration when it cannot find a file in `/etc`, and it makes it very hard to deduce what exactly can be configured.

I strongly dislike software that will simply apply a default state when it cannot find a certain option in its configuration file, for this reason.

I favor software that simply refuses to start, with a clear warning as to why, when it cannot find a configuration option in its configuration file.

Applying a default on a missing key is another case of weak typing; I would hope that lessons were learned by now that guessing what the user wants in case of error is a bad idea.

What is even worse is software that simply ignores malformed keys and applies a default value anyway, giving rise not only to innocent mistakes from typos, but also to malicious exploits that were found by putting zero-width spaces into keys to trick administrators via social engineering.


I partially agree, but I think a good middle ground is for default configuration to live in a place like /usr/share/, and for admin configuration to happen in /etc/. This way, if you upgrade, you don't need a complicated merge of new defaults + old configuration. I've seen a lot of configuration migrations go wrong in my life.


That would mitigate the problem of easily finding out what options there are, but it would still be a form of weak typing where a best guess is substituted for a mistake.

If the system administrator truly intends for a value to be the default, he should explicitly declare it so in the configuration file.

Interpreting the system administrator's act of forgetting something as his intention for it to be a default value, seems like a bad idea to me.

As for merging, I find it just as annoying when software does not support configuration directories. `foo.conf` should exist alongside `foo.conf.d`, and if the latter exists, its contents should be parsed as well.

On an update, the system can simply install a file in `foo.conf.d` which contains the default of any new keys added from the last version.
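That scheme can be sketched with plain files. The name `foo.conf` and the drop-in file name below are made up, and the "parser" is just cat, assuming base-file-plus-sorted-drop-ins semantics:

```shell
# Base config, owned by the administrator
printf 'color=blue\n' > foo.conf

# An upgrade ships the default for a newly added key as a drop-in,
# without touching the admin's foo.conf at all
mkdir -p foo.conf.d
printf 'shade=dark\n' > foo.conf.d/10-new-defaults.conf

# Read the base file first, then the drop-ins in sorted order
cat foo.conf foo.conf.d/*.conf
```

No merge step is needed on upgrade: the admin's file and the package's new defaults live in separate files.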


> I strongly dislike software that will simply apply a default state when it cannot encounter a certain configuration in it's configuration file for this reason.

Aren't sensible defaults what make certain software a lot easier to use than others? Apple seems to be good at this. Ten years ago Ubuntu brought this to the Linux world to an extent. Personally I like the configurability of Linux, but I am past the days of spending hours tweaking it, and I am quite happy if someone supplies me with sensible defaults.

Having said that, as a developer I agree with what you said: it's often easier if a library call gives an error than if it tries to do something smart / just carries on (JavaScript comes to mind), so there is a balance depending on what the user wants to do with the software.


Default configuration exists for multiple reasons: 1) to provide minimally working software so that the user isn't burdened by unnecessary configuration, 2) to facilitate easier testing of minimal functionality, 3) to keep configuration DRY, and 4) to stop-gap missing configuration.

Of those, only 4) relates to typing. But even then it often bears no relation to real typing, because often there's no schema or typing at all, and usually the application has no idea what version a given configuration relates to.

In addition to achieving the above 4 points, we also need to be able to differentiate between default and modified configuration, so the application can override defaults successfully. You can keep default and modified configs separate a number of ways. You can hard-code it in the application, which is fine for the application. But it's a pain in the ass for the user. And as a packager, you cannot expect files in /etc/ to always be "defaults", as the user/administrator expects these to be configured by themselves at any time. And if you want to upgrade software, there needs to be a way to differentiate the versions of default configuration to compare the modified configuration to.

The best way to do this is to package default configuration in /usr/share/. The user then has the job of upgrading their modified configuration in /etc/, and they can use /usr/share/ as a reference, in combination with some UPGRADING guide.

Distros have long had a convention to package a default distro-specific configuration in /etc/, so that when the app first starts, it can work out of the box. But then when you'd upgrade a package, the distro would have to figure out what to do with the once-default-but-now-modified files in /etc/. This just adds unnecessary burden on the distro. Instead, the user needs to look at their own manual configuration and update it if necessary. But the default configuration for the system should remain virgin and independent, in /usr/share/.

And of course, none of this applies to configuration that can't be default. The application should always die if it doesn't find required configuration.


Yes. But generally most software allows configuration in the home directory. That's the least of my problems with Linux.


/usr/local/etc is a common pattern, though.

So maybe I would add that /usr/local is a special case of /usr for locally built* stuff, with sub-nodes imitating the corresponding nodes in /.

* Not always locally built, though. *BSDs put binary packages there too.


Except more often than not, packages do their own thing and don't follow the convention. This is even worse if you compile from source.


GoboLinux install layout is a cool concept.

However, it's not hard to figure out where files were installed, or which package a file came from. The package manager keeps track of all of that, and you can ask it. Example for Debian or Ubuntu:

  # list all files from package
  dpkg-query -L <package>
  dpkg-query -L <package> | grep bin/  # see just commands

  # find what package a file is from
  dpkg-query -S /path/to/some/thing
I use equivalent commands on Arch Linux (pacman ...) and they certainly exist for Red Hat/Fedora, but I'm not familiar with those.

For stuff not installed as packages, hopefully it's either in a single directory in /opt, or among a small collection of stuff spread in /usr/local, or in a self-contained directory in your home folder.


That of course assumes that the post-install scripts don't move files around. macOS has a similar utility called "pkgutil" which supposedly tracks all files belonging to programs that come as "pkg" files -- typically system utilities that don't make sense to ship as clickable bundles. But the pkgutil database is often useless. For example, the MacGPG installer used to "install" its files under "/tmp/private", but that was simply a temporary install directory which was immediately moved by a postinstall script to an untracked location.


macOS has had App Folders since the beginning, when it was called OS X, and they're kinda like GoboLinux, but even better! Just copy the folder anywhere and it runs stand-alone and self-contained.

But almost nothing is just a plain App Folder on macOS anymore. Starting over a decade ago, anything from Microsoft or Adobe or Google, or even Apple themselves, requires you to double-click-run an installer that puts suspicious bits everywhere, just like on Windows. They want to auto-launch and auto-update and integrate and engage, etc. And if you want to run Photoshop or Office or Chrome, what are you going to do, not let it install all its junk all over? Every regular person certainly will just let it, and all non-trivial companies will take maximum advantage. Skype and Office were automatically installing crappy Firefox extensions and stuff, so Firefox had to do some annoyingly complex extension-install-signing thing to try to prevent that ...

This is why Apple and Microsoft went towards sandboxed App Stores. The user won't choose less-abusively-packaged software, but even Big Software Corp can feel pressure to get its app into the store and sandboxed.

Linux distros never had this problem or needed this solution, because they re-package the software they distribute. If xscreensaver or calibre or VirtualBox are programs people want to run, but their authors do annoying or dumb things, that can be patched by the distro packager, and the package manager's file database will remain in quite good shape.


It kinda seems like your example says more about the specific behavior of the MacGPG installer than about the general reliability of macOS's package database.


I have now memorized all commands (and stopped switching distros), but I used to regularly look at this rosetta stone of package management commands:

https://wiki.archlinux.org/index.php/Pacman/Rosetta


You said it: stop switching distros, master it and enjoy. By mastering the distro I mean knowing enough to be productive with it. I almost only use Debian or Alpine (Docker and servers) and Gentoo on my laptop.


It's great to master a distro, until your license expires and your IT department decides to switch... Two times in 5 years.


You can certainly list files in packages in other distros, but the beauty of the GoboLinux approach is that it dispenses with loads of complexity. You don't need much of a package database at all.

Reducing complexity is good in and of itself and is always a win as long as you don't lose important functionality.

The notion of packages or installers vomiting files all over the place on systems really needs to die. Mobile OSes mostly got this right, and macOS has been moving in this direction. Windows and older Linux distributions are the only things left that treat the system like a stir fry of random shit. Of the two, this is probably easier to fix on Linux, since Windows is hamstrung by a ton of legacy requirements that are much harder to work around.


Not least of all on Windows is the long standing user expectation that software is installed by double clicking setup.exe.


RPM-based distros (Red Hat, etc.)

    # List all files from a package
    rpm -ql <package>
    # Where did file come from
    rpm -qf <path>


And just because I have to be THAT guy: with Arch Linux you use pacman -Ql and pacman -Qo to do the same.

Also, man! Do I love the Arch wiki:

https://wiki.archlinux.org/index.php/Pacman/Rosetta


I'll be THAT guy as well! On Gentoo, "equery f <package>" displays a list of files installed by that package, "equery b <path>" says which package owns that file or directory, if any. (f and b are the short versions of "files" and "belongs", respectively)


I prefer `qlist` and `qfile` from app-portage/portage-utils [0] as the q versions tend to be faster.

[0] https://wiki.gentoo.org/wiki/Q_applets


I mostly use RHEL and SLES because that's what's required by my job. Their package databases are helpful sometimes. But usually it's just a symlink nightmare hidden across 5+ different directories. That is if you can even install anything past the license checks that fail to phone home because you are in a private VPC.

I do like Arch based Linux distributions. In my opinion they are the best all around for servers and even containers. Pacman is an excellent package manager.


FHS and distros that follow it aren't perfect, but it does make a big difference. I prefer Fedora-based and Arch-based flavors for that reason. It's usually pretty easy to find stuff.

Backtrack used to do something like this and while it was easier compared to a lot of other debian/ubuntu based things, I still ran into some pain points.

A great command to keep handy is

    ls -la `which <program>`
That will usually find you the binary right away, and if it's a symlink or something you'll see it in the ls output. It's especially useful to me because (generally speaking) I put stuff I build myself in ~/bin rather than `/usr/local/bin` so I can quickly tell if I'm using my own build or the package manager's.

Edit: FHS == Filesystem Hierarchy Standard - https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard


trivia: `which` is not the same which on all distros. On Ubuntu, for example, it's a super-simple shell script named `which` which supports one single param (-a), not the compiled binary that you have on, say, Arch, and the features are different between them: I can't, for example, run `which --skip-tilde which` on Ubuntu; invalid usage.

Many scripters who need wide compatibility of "which" fall back to the builtins `type` and `command`, as they resolve any bash aliases set and give you a more truthful (sic) output than relying on `which`; and bash is bash on all platforms:

    # command -v which
    /usr/bin/which

    # type which
    which is hashed (/usr/bin/which)
$0.02 for your next adventure down the `which` rabbit hole. :)


It can also be a shell builtin:

    akira@akira:~ which which
    which: shell built-in command


Very true, I missed that thanks - to illustrate how that could be important (to the scripter):

    kaneda@tetsuo:~ which type

    kaneda@tetsuo:~ type type
    type is a shell builtin

    kaneda@tetsuo:~ command -v type
    type
This was on Ubuntu; the which script doesn't resolve type or command, failing silently, while on, say, Arch it's verbose ("can't find type in (paths ... ...)"), so even a failure-to-find emits differently for the `which` crowd.


> Many scripters who need wide compatibility of "which" fall back to the builtins `type` and `command` as they resolve any bash aliases set

When wide compatibility is needed it is best to avoid bash specific solutions.


`command -v` is standard POSIX sh [0]

[0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/c...


Interestingly, Arch addresses the multiple-bin-directories mess by symlinking /bin, /sbin, and /usr/sbin to /usr/bin. Stali, an experimental distro (originally a suckless.org project), goes a bit further by symlinking /sbin to /bin and /usr to /.
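The merged layout is easy to see in a scratch directory; `merged/` here stands in for / (a sketch of the resulting layout, not of how a distro actually performs the migration):

```shell
# One real bin directory, with the legacy paths as symlinks (as on Arch)
mkdir -p merged/usr/bin
ln -s usr/bin merged/bin
ln -s usr/bin merged/sbin

# Any binary is now reachable under every legacy path
touch merged/usr/bin/ls
[ merged/bin/ls -ef merged/usr/bin/ls ] && echo "same file"
```

The `-ef` test confirms both paths resolve to the same inode, which is why scripts hardcoding either /bin/ls or /usr/bin/ls keep working.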


It's the same in Ubuntu and CentOS, with symlinks for the bin directories.


Big picture, this is known as "the usr merge", which has been happening for probably a decade; each distro is approaching it at its own speed, as they're not all exactly identical in their FHS layout (some distros add a bit of this and that on top).

You can read about it at a meta level over here; each distro has its own wiki/working page if you Google around: https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...


The FHS really only makes sense when you use Linux as traditionally intended: as a multi-user system.

Most uses of Linux are single-user, running as a server, or even more just a container.


I'm not sure I agree that most uses of Linux are single-user. I know this (likely) isn't what you meant, but at a minimum almost every system, even containers, will have at least two users: root and the non-privileged user account. It may not be two humans, but it is two users. On my current desktop (where I'm the only human user) I have 53 entries in /etc/passwd. Each has different permissions and access to resources.

But even if you consider the above to be a single-user setup (which is reasonable) I still think there's value in consistency. I personally get a ton of value from having CentOS and RHEL systems being laid out the same as my Fedora Laptop and Desktop system, and having all that knowledge transfer both ways. Being able to add a user to a "single user" setup too is very useful. It would suck if adding a second user required completely changing everything about the filesystem.


FHS makes sense from a system developer standpoint — entire system is a project

    /lib for libraries
    /usr/include for C includes
    /etc for configs


> I prefer Fedora-based and Arch-based flavors for that reason. It's usually pretty easy to find stuff.

> Backtrack used to do something like this and while it was easier compared to a lot of other debian/ubuntu based things, I still ran into some pain points.

FTA you linked:

> Most Linux distributions follow the Filesystem Hierarchy Standard and declare it their own policy to maintain FHS compliance. GoboLinux and NixOS provide examples of intentionally non-compliant filesystem implementations.

Debian and Ubuntu adhere to the FHS; they're not more difficult in this regard than Arch or Fedora (or distributions based on these). GoboLinux and NixOS don't, because they serve a higher goal: GoboLinux aims for user-friendliness/explorability while keeping backwards compatibility (hidden via an optional kernel extension); NixOS aims to solve dependency hell (among various other goals). Learning curves are an obvious drawback.

> ls -la `which <program>`

Backticks should be avoided; use `ls -la $(which <program>)` instead (drop the $ in Fish).
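One concrete reason beyond readability: `$(...)` nests without escaping, while backticks need a backslash at every level. A small sketch (the path is just an example, and dirname/basename are pure string operations):

```shell
# With $(...), nesting is straightforward:
outer=$(basename $(dirname /usr/local/bin))
echo "$outer"   # local

# The backtick equivalent needs escaped inner backticks and is easy to misread:
outer=`basename \`dirname /usr/local/bin\``
echo "$outer"   # local
```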


Thank you, that's informative and you and GP have made me feel better about this particular aspect of learning Linux.


My pleasure! It can feel brutal at first to learn Linux, especially with so much choice out there (aka fragmentation?), but honestly it's the best thing I ever did for myself both personal and career-wise. Can't recommend at least basic Linux proficiency enough if you are in a technology sector especially.


Sadly Gobo has few users, so it doesn't enjoy as much support as the mainstream distributions.

You may want to consider NixOS, which is equivalent to Gobo except for the fact that linking across packages is static instead of dynamic. This has very deep consequences.

Nix has tons of developers, it's one of the most active projects on Github. Past the initial difficulties, most common usecases are really easy.


> Sadly Gobo has few users

Honestly, there's nothing sad about this.


In almost any system, finding out where a package's files are, and which files belong to it, is a trivial operation for the package manager to list.

The idea that a package is "installed somewhere" rather than simply "installed", with files that "belong to it", seems to be a Windows-ism in mentality, where it is common to start applications by navigating to the folder they are installed in and launching the executable there by an absolute path.

Many packages consist of nothing more than adding fonts to the system; it seems a bad idea to me to have all that in a per-package folder, rather than simply in `/usr/share/fonts`, with the package manager tracking which file therein belongs to which package.


This runs somewhat counter to my practical experience. On Windows, most programs come with an installer and end up launchable with a shortcut from the start menu. On Linux, on the other hand, unless you're pulling from the package manager, you've probably got a .tar.gz archive to manually extract somewhere. I see things have been changing only recently with the advent of Snap/Flatpak/AppImage.


I find that most Unix software installs via a `make install` target, which keeps track of what files are installed where, and is then uninstalled with `make uninstall`, if one does not rely on a package manager (which does the same internally).


/bin and /usr/bin make less sense these days since the root partition is usually big enough to boot and to hold all the programs the system needs. But it’s important to have somewhere outside the package-manager’s control for manually installed binaries. Most installers default to directories like /opt/foo/bin and I tend to put user-built programs into /usr/local/bin.


/bin and /usr/bin would actually make plenty of sense these days - the new root partition is called initrd and gets hidden by pivot_root.


I ran across it ages ago. I think you could use something like GNU Stow to achieve the same thing, unless I am missing something. You install programs to ~/.local/pkg, then link them into ~/.local/{bin,etc,include,lib,man,share,var}. Unless of course it is an Electron app.
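The Stow idea reduces to a per-package tree plus a symlink farm, which can be sketched in a scratch directory (the `pkg/` and `local/` layout and the package name `hello` are made up, standing in for ~/.local/pkg and ~/.local):

```shell
# Each package gets its own self-contained tree
mkdir -p pkg/hello/bin local/bin
printf '#!/bin/sh\necho hi from hello\n' > pkg/hello/bin/hello
chmod +x pkg/hello/bin/hello

# "Installing" is just linking it into a shared bin/ that would be on PATH
ln -s ../../pkg/hello/bin/hello local/bin/hello

# Runs through the farm; "uninstalling" is removing the link
local/bin/hello
```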

I use zpkg to handle this, it has been made specifically for this use case.


As another solution in that space, equery and other tools are excellent when administering a Gentoo box.

https://wiki.gentoo.org/wiki/Equery#Listing_files_installed_...


`man hier` is usually a helpful reference for things like this.


Exactly. It's always annoying to use, sometimes I just give up. This is why I never feel comfortable using Linux.


> Mainly, finding where the heck it installed that program in the maze of file/folder locations

whereis ?


Security guy here. I had the chance to look over this project and try it out today. I found no fewer than 5 ways to acquire root privileges from an unprivileged user in the 15 minutes I looked at it. Has anyone audited this distro in the past 10 years?


With 5 ways to get root, the attacker will be rendered harmless via decision paralysis.


Do you plan on documenting and contributing an actual critique, or just making an unverifiable internet jab? Because if you're serious, the project would probably welcome knowing the specifics so they can evaluate their system design.


Yes, I mentioned in another comment (at the time of your comment) that I planned on doing a more comprehensive writeup at a later date. If what you're saying is my claims are unverifiable, you're correct. I generally don't publish any of my research unless there's a way to keep my lights on. The good news is, I should be able to soon.


Thanks security guy. For reference, how many ways to break in do you find per 15 minutes when looking into other distros?


Depends on their popularity. For Ubuntu, Debian, RHEL, Mint, MX Linux, etc. it took me 10 years to find a single LPE. Special-case Linux distros, like those for audio editing or signal processing, have about 1-3 LPEs in the first 30 minutes to an hour I look at them. And GoboLinux.. broken permissions, broken trust of library load paths, root SUIDs with 5 LPE issues at a glance of the source. They even gave some regular binaries root privileges, because why not. I didn't look into the running root services or writable paths because I assumed those would be vulnerable too. The question is not whether you can get root, but how many ways there are to get root. I was running GoboLinux in a 15 minute session, and once it expired I didn't start a new one.

I'll come back to it and do a more comprehensive audit. I haven't released any zerodays I've found, ever, but recently I was thinking about making a blog. This might be some good material.


Thanks, good to know that this is not the norm. Yes please do a more comprehensive audit. And start a blog.


Have you audited Tails by any chance?


I have. Only so far as to test if another zeroday I created worked. It did because it shares a lot of similarities with other Debian-based targets. I haven't tested Tails further, because there are no ethical buyers of Tails vulnerabilities.


Any experience with opensuse?


It's the last distro on my bucket list. I have vulnerabilities but nothing to drop a root shell.


just wanted to thank you for the update


How does this compare to other "weird" distros like NixOS or GuixSD?


I spent about an hour on this after your comment. Honestly, I was pretty lost in the filesystem for half of that time. I have never used either distro.

My copy of NixOS had no password on the root user by default, which is not ideal, but I assume most deployments aren't like that (right?). I was able to become other users on GuixSD using the SUIDs the distro ships with, but not root. Not yet. The surface is much larger on both of those distros than on the mainstream OSes. I may be able to pull off a root LPE, but I'd need to look for a full day at least.


> My copy of NixOS had no password on the root user by default

That isn't the default behavior for NixOS. From the docs[1]:

> If set to null (default) this user will not be able to log in using a password (i.e. via login command).

The installer also asked for a root password when I installed NixOS years ago and it still does[2].

[1]: https://search.nixos.org/options?channel=20.09&show=users.ex... [2]: https://github.com/NixOS/nixpkgs/blob/a3a531071598cad0c60485...


Can you send a message to <guix-security@gnu.org>?

https://guix.gnu.org/en/security/


Thanks for the insight. I believe you are expected to set a root password in the installer :) but other than that I wouldn't know.


> My copy of NixOS had no password on the root user by default, which is not ideal but I assume most deployments aren't like that (right?).

By no password, do you mean:

- it is impossible to log in as root with a password (a good thing, I'd think, as people should be using sudo), or

- it's possible to log in as root without any authentication (a bad thing, obviously)?


I recall that by default it is impossible, and you are expected to use 'sudo'. Of course that means you can also use 'sudo su'.


Ugh. Any specific examples, just for the record, for someone who doesn't care enough to try the system?


Ah, I remember trying to decide, 7 years ago, whether I'd switch to GoboLinux or Nix. I chose the latter and don't regret it for a moment, but I'll continue to respect GoboLinux and keep the fond memories.

I think the big difference is:

- Nix: the filesystem is rarely the right data structure (I agree); other things are needed.

- Nix: the store and Nix itself are more freeform; package/distro norms are provided by higher layers (Nixpkgs, NixOS).


I've been using NixOS for 2 years now; my general impression is that even if Nix isn't the popular solution, it'll probably inspire the right way of doing things for the next years. It might not look like much, but representing the whole system's dependencies as a graph database, then using a purely functional language to define that database, has happily prevented me from shooting myself in the foot many times over the past year.


NixOS is to distributions much as Haskell and Scheme are to programming languages: in their lust for theoretical perfection and elegance they do not see widespread adoption, but they often have a lasting influence on other projects that take the useful parts of their elegant ideas and work them into something more practical.


I've never bought this, I use Haskell and Nix all the time for GUIs and other nerd-phobic tasks. And it's definitely the most practical way I know of doing things.

Most people just want to cargo-cult what others do, and most things initially get popular because some way of deploying software requires it / makes it the path of least resistance. None of that is about practicality, just short-sightedness and ecosystem effects.


Good point. Today most popular languages (or at least those with garbage collection) use lexical scoping of variables, which they copied from Scheme.


Have you tried any of the containerized distros? I haven't tried Nix but want to.

Just wondering if you can compare/contrast NixOS with something like Fedora Silverblue or Fedora CoreOS.


I used both and even though their purpose is very similar (have an "immutable" system tree that you create and switch into it), the day to day is very different.

Silverblue is still pretty much imperative: you install/remove RPM packages and that's it, and you use Flatpak for everything else. With NixOS you have to describe your entire system in a programming language. NixOS gives you so much more freedom to do what you want, but you have to work for it: learn a language, learn its constructs, etc.

I enjoy both though, feels like the right direction to go, you just need to choose how you want to interact with your OS.


I haven't tried, but I get the sense that those are more about cleaning up the run time than the build process. At least the flatpak/snap world also is kind of anti-integrative in that everything is vendoring everything else.

I personally think containerization is just too much a PITA to use for software that doesn't live in a bubble, e.g. desktop software. The capabilities revolution (Capsicum or CloudABI) would be more secure and easier to program with, so we just need to force our way there, and then everything will take care of itself.


Nope, but it's not the same thing. NixOS can easily provide me with free rollbacks and is mostly reproducible (https://r13y.com/). The funny part about NixOS is that once someone reports a weird behavior, almost everyone will have it and can verify/fix.


I'd like to try NixOS, but from what I've seen, there are still a lot of issues trying to run it as a daily driver on a laptop.

Has that been your experience?

With Ubuntu, Debian, and even Arch, it's pretty much install and go.

Also, it requires significant disk space, which I usually don't have on my laptops.


I've been using it as the daily driver on my laptop for a few years, and it's been great! Using it in general is a steepish learning curve, but I think it's very worth it. Having rollbacks for every upgrade just makes me sleep better, and it's saved my bacon more than once. Being able to customize the packages running on my system in a safe and reproducible way is great as well. It's also helped me do research, it's easy for me to build different versions of a package with different patches and compare their performance, etc.


Same. Rollbacks are awesome. Not only rollback in the boot menu, but also just keeping your OS configs in Git. It is so nice that I can mess with my configs without concern, then when I am done I can diff them to see what I changed, continue messing to clean them up a bit, then commit (or reset if the experiment didn't work out). When I used Arch I would edit a handful of configs trying to set up something new or tweak a setting, and I could not get back. I would need to take notes or just try my best (I have force-reinstalled packages countless times).

I'm also waiting for the parts of my new desktop to come in and I'm excited that all it takes is cloning my config repo and applying to get to exactly the same setup. With Arch I would have reinstalled to clear the cruft that has accumulated, however with NixOS I can just audit my configs to see an entire list of cruft and easily undo it by removing lines.


I run NixOS on a variety of laptops as my daily driver. While there's initially a steep learning curve, the time is well spent developing a reproducible configuration that you can quickly install on a new machine. I adapt the same configuration to generate a custom ISO that I write to a USB drive so I can boot into a familiar environment in a few minutes on almost any machine (running in RAM if there's enough room). It doubles as an installer (as does any NixOS installation). This approach is also handy for booting ephemeral servers on any hardware (DNS, DHCP, etc.).

One of my hobbies is recycling older machines. I've installed NixOS on Chromebooks with 16GB internal drives. It's possible if you remember to garbage collect after each successful upgrade (and stick to 64-bit machines).

I tend to treat my laptops as disposable and keep my important data on a file server so I don't rely on local disk space.


> I tend to treat my laptops as disposable and keep my important data on a file server so I don't rely on local disk space.

I do the same, except I use an extensive dotfiles, config, and bootstrap script repo[0].

So I like the idea of Nix for that reason. It's going to be hard to let go of Debian/Ubuntu, but I'd really like to have a perfectly reproducible environment like Nix.

Thanks for your reply.

0. However, I also unfortunately have to use Windows sometimes, so I dual boot, and that takes up significant room on my smallish laptop drive.


I also have my dotfiles in portable repositories that must work on a variety of unices (so I don't use the popular-but-nix-specific home-manager package). My custom ISO has no user data, it just creates my immutable account with my public SSH key (in case I need to remote into the machine immediately) and has a global alias named "bootstrap" that fetches and installs my dotfiles. After that, I can pull any updates made on other machines, log in and out of the DM, etc. until I turn off the computer. I've gone weeks on the same session in RAM (waking from sleep works fine on most machines).


You should consider trying home-manager as the other reply mentioned, because it lets you stay in non-NixOS land until you get comfortable. I use my home-manager config both on NixOS and on Ubuntu to not only to handle stereotypical dotfiles like Neovim and Tmux, but also things like setting dconf keys so that GNOME is instantly set up the way I want it on any new installs.


Tbh I'd recommend trying nix on top of another distro or on top of OSX first.

If you just want the ability to install arbitrary versions of packages alongside each other, have painless rollbacks, and bake config files into your install, you can do that there.

Once you get comfy with nix on top of another distro then think about whether you want that approach with the entire stack.


I've been using it for almost 3 years on my laptop with no issues. As far as disk space goes, 50GB is generally enough if you collect the garbage periodically (my workstation is using a bit over 100GB after almost 2 years of running NixOS, but I don't think I've ever done garbage collection on it).


I've had pretty much no problem on my ThinkPad. It always depends on how custom your setup is.


I'd love to use Nix, but I'm using a lot of embedded devices which are tied to specific Linux releases (e.g. NVidia Jetson <-> Ubuntu). Trying to get e.g. a GPU to work on these systems with Nix is probably going to be a bad idea.


Nix also has a hosted mode where it piggybacks off the underlying system: I’ve been using it day to day on macOS and various Debian installations, and it’s working pretty well.

Nix/lorri/direnv, in particular basically obsoletes nvm, rbenv, etc.



I have been waiting to try that one for a long time. I always found it annoying that the Linux filesystem is not self-explanatory. I have been given unsatisfactory explanations about it so many times but could never understand the rationale behind /bin, /usr/bin, /usr/local, etc. Turns out it is just historical cruft: (oops, can't find the link). I am so glad I switched to Linux, but I really miss Windows' "Program Files" directory.


I find the complaints funny; for one, you should really never need to care where certain programs are installed (and I don't remember the last time I had to). Moreover, you can always run `which <executable>` and it tells you the location of the executable.
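For what it's worth, the same lookup that `which` does (scanning the directories on PATH) is a one-liner from Python's standard library, so the "where is it installed" question is mechanically answerable on any layout:

```python
import shutil

# find an executable the same way the shell's `which` does:
# walk the PATH directories in order and return the first hit
path = shutil.which("ls")
print(path)  # the exact directory depends on the distro's layout
```

On a merged-/usr distro this typically prints /usr/bin/ls; on older layouts, /bin/ls. Either way, the caller never had to know in advance.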

It's also funny how Windows is supposedly better: there is at least Program Files and Program Files (x86), some programs still just install into C:\ (I think Python used to do that), and there are the different Windows folders. Let's not forget the myriad of places where programs drop data. And because Windows has no proper package management, you actually need to know where things are.


> I find the complains funny, for one you should really never need to care where certain programs are installed

Well, Linux is not a walled garden with a company that takes care of everything behind the scenes, so it should be easy for a beginner to wrap their head around the system and understand at least its logic. It's good not to have to rely on specialists for simple things.


I really don't understand what you mean. A beginner should be able to sit in front of the linux machine and just understand everything without needing to do any background reading?

And to what level does that apply? Should the user understand everything? Also, why is a directory system of /Firefox /Chrome /Gimp ... easier? Where do libraries sit in this system (is that easy to understand)? And if I build something, how do I determine which library it's going to use?

I think the perception of a certain way being easier is largely based on what one is used to.

I certainly am always lost when I sit in front of a Windows machine. Just the other day I was trying to get ctypes to find a library and ended up just adding lots of things to LIB or PATH (I'm sure there are better ways to do things). BTW, I was asked by a Windows user for help. This is not a dig at Windows, just my argument that filesystem layouts are largely conventions (even though often for good reasons), and yes, one has to invest some time to understand them.

Edit: For someone who wants to understand how Linux works and how many things work together, I highly recommend giving Linux From Scratch a try. Most people fall into the trap of just copy-pasting the commands (I was one of them as well), so you probably want to build a couple of times and try to update your system. Hopefully you run into a couple of roadblocks, which force you to think a bit more about what you are doing, and that is where the learning experience comes from IMO.


Writing a basic python script:

    #!/bin/python
    print("hello world!")

Oh wait, /bin might not exist. It's in /usr/bin.

Wait, what even is /bin? No, it's not the recycling bin - see, it's short for binary. No, there isn't a separate directory for plaintext scripts, why do you ask?

Well surely it's a directory for user-specific programs then, right? Nope, it's read-only stuff.

Well, why not symlink it to /ro? Well, because symlinking and transitioning are too much effort, and because that's very English-language-centric of you; what about other languages where /usr and /ro are random gibberish?

Well, how's other-language support then? For instance, is it possible to reliably use Korean characters in your username without a lot of programs breaking? Nah, see, you're better off just writing your Korean name in English characters. Much more reliable that way.

But wait a sec, let's go back to that symlinking thing: if symlinking and migrating is too much effort, then how did we ever switch from /bin to /usr/bin? Well duh, we just symlinked /bin to /usr/bin.

Great, can we do that for /usr and /ro? Nope.

*****

Can we please just admit that Linux's ease of use is burdened by tons of historical baggage?

Then we'll have to either say "we just don't care that much", or actually fix it.


OK, I'll bite: in which operating system would I just know where Python is installed?

Moreover, most of the things you are saying are wrong. 1. Any Linux distribution that follows the FHS (pretty much all the big ones) has a /bin directory; it doesn't usually contain Python, though. 2. /bin is also not necessarily read-only (I'm not sure where you got that). Maybe you should have a look at the FHS? 3. I'm not sure what you mean by symlinking /usr and /ro; you can certainly symlink /ro to /usr, but I'm not sure what you are trying to achieve. 4. UTF-8 usernames are certainly possible on Linux distributions that have full UTF-8 support (which I imagine are, again, most).

Your proposal of just fixing the "historical baggage" for some claimed ease of use would actually break lots of people's workflows for questionable gain. (I still claim that directory naming is really at the bottom of the list of changes that would improve ease of use; and Linux does have changes that are required to improve ease of use.)

Moreover, if directory naming is important to someone, they can use GoboLinux, so it's not like you have to follow the old "historical baggage".


What I mean is that a system should be discoverable, that is, exposing its logic to the user. A good counterexample is French orthography and grammar: Byzantine rules, exceptions to the rules that you just have to know about, letters and accents you don't pronounce that are there for ideological reasons, etc. It tends to favor those who just know and to put the learner at a disadvantage. I am NOT saying that Windows is a good example of a discoverable system either, and I am not switching back to it anytime soon.

> I think the perception of a certain way being easier is largely based on what one is used to.

I think the perception of things being "not that hard" is a bias one acquires once accustomed to the quirkiness of a system, no longer perceiving what made learning painful or slow. I like to program in C now that I have shot myself in the foot so many times; I am even proud that I have mastered it, and I learned a ton in the process. Nevertheless, I am glad that Zig is approaching 1.0 and would recommend it to any beginner over C.


I used to think it would be a holy mess of symlinks. I spun up a live image in a VM and was pleasantly surprised. For compatibility there is a traditional /bin, but it's a symlink to /System/Index/bin, which contains links to the actual executables in their respective /Programs directories.

I wonder why it's such a niche distro; with time this should have caught on and become more popular!


> For compatibility there are traditional /bin but its a symlink to /System/Index/bin

Then why does the readme say:

    > cd /
    > ls
    Programs
    Users
    System
    Data
    Mount
Are they hiding directories?


Yes, they are.

There is a kernel module that hides them from directory listings, but the files can still be accessed by their paths.

I'm not sure to what level this is an absolute disaster.


For starters, I guess that an rsync of / will not backup the entire system then.


Yes, quite a bit of software is written around the assumption that what does not show up in a listing truly does not exist, including, I would assume, rather security-sensitive software.


The kernel module that hides the standard dirs under / is entirely optional, and is only there for aesthetic purposes.

If you don't want it, don't load it. Absolutely nothing in GoboLinux requires that kernel module to be running. Again, it's purely for aesthetic purposes.


A lot of system designers are insufficiently empathetic to the plight of those who do not yet know as much about the system as they do. This directory-hiding "feature" is kind of a "gotcha" in that a new user could waste hours if he has somehow failed to learn of its existence and he is taking an experimental approach to learning about the system (GoboLinux) as opposed to a "read the documentation" approach.

Sometimes requiring the user to know things that cannot be discovered by experimenting with the system is a necessary evil, but there should be a good reason for the requirement.

Contrary to what you seem to believe, the fact that purely aesthetic reasons motivated the designers of GoboLinux to create this footgun is a bad sign, not a good sign.


Honestly, that's just nonsense ...

Anyone who wants to try out GoboLinux for real would at least do some basic reading first. It's that distinct a distro that you kind of have to, and there's no problem with that. Not everything innovative can be expected to be digested without a modicum of work up front.

I mean, if people install NixOS without any research up front, and throw up their hands in disgust when it doesn't look like Red Hat Enterprise for Big Boys v10, does that mean that NixOS are messing up, and they should change how they do things?

I think not.

Sometimes the simplest alternative implementations require the most up-front understanding, because their simplicity challenges long-held preconceptions about how things should be.

That's Gobo in a nutshell.

At its heart, it's actually a very simple distro, which uses simple tools, and the filesystem, to lay everything bare in front of the user.

Funnily enough, Gobo is one of the few distros where you can refer to /bin/<any-linux-executable>, and as long as you actually have that pkg installed (yes, under /Programs), then /bin/xyz WILL be found. Guaranteed.

That's by design.

This entire subthread is a bikeshed, started by mistaken assumptions ("security is broken", "some pkgs won't work", etc, etc) over their GoboHide kernel module.

None of it is true, and all too often it's a curse which innovative distros have to deal with, somewhat unfairly. I mean, everything is explained in excruciating detail on their website (and was over a decade ago, when I first used it).

And still, years later, we have folks on Hacker News opining at length about how the GoboHide kernel module causes pkgs to break because they can't find /bin/whatever (FALSE), or how gobo doesn't have the ability to build pkgs (someone else brought up this gem on another subthread).

People, if you haven't come across this distro before, then please at least do the basic once-over of their website before jumping onto Hacker News and coming up with X technical reasons why this project is a failure ...

Eg: there's a fantastic essay by Hisham from long ago explaining the motivations behind Gobo:

    https://gobolinux.org/doc/articles/clueless.html


I have a friend that uses GoboLinux. It's not a mainstream distro but my friend did seem able to hook it to the overhead projector without problems, so it works well enough from at least that point of view :)


Is finding applications on the fs a big problem people actually have?


It's not about finding applications, it's more about the ability to have multiple versions of the same package installed and used concurrently. The best you can get with the FHS is something like /usr/bin/python3.8 vs /usr/bin/python3.9, and this requires package maintainers to manually support it. Most packages don't do this, so if you happen to need, just as a random example, an ancient Firefox version installed separately, "traditional" solutions like rpm or dpkg won't really help you.

Plus, a different organization of the filesystem, where everything that belongs to a package is stored under the same base path, rather than being spread across the whole FS. I don't really remember GoboLinux's ideas, but IIRC they had something about leveraging the filesystem itself as part of the package manager.


The better way to do this is to use flatpak. The sandboxing will be a major benefit for your ancient firefox version as well.


Yeah, I don't think that's a problem I ever run into. Also they claim:

> Say goodbye to the old problem of having the package manager complain that libXYZ is not installed even though you can see that it is there.

That is... also a problem I have never had.

I like the idea of GoboLinux just for giving a new take on something more streamlined (because yes, the FHS and de-facto systems before it have accumulated a lot of cruft), but I think it would be more productive for the GoboLinux folks to advocate for their system based on its own merits, not on semi-imagined faults of the more common system.

Being able to have multiple versions of programs and their dependencies (which would help avoid dependency hell) installed is a big thing! That alone is a really nice benefit of GoboLinux's system.


I read the "At a glance" documentation and came upon this:

> /bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place. Amusingly, this makes us even more compatible than some more standard-looking distributions.

This just feels wrong to me. I appreciate that Linux is all about choice, but I really wouldn't want this.


Why?


How does GoboLinux distinguish between packages of the same version compiled under different environments, or compilers, flags etc.? In Nix such changes cause the resulting build to be prefixed in /nix/store with a different hash.


I'll venture a guess: It doesn't need to. All of the "first-class citizen" packages, which are visible to everyone, were built in the same environment, just like in other distros. Alternative versions should just be held separately.

... but come to think of it, that means you can't just put your package in /Programs/MyApp/12.3.4, because if you switch your system to that version your dependencies may well break. Hmm.


It's all compiled on the machine; there's no binary package repository, only a Recipe one.


First time reading about a distro like this. Looks pretty neat! Isn't this what /opt is for though?


Hi! One of the creators of the distro here! (Super surprised it made it to HN, someone just sent me a heads up!)

> Isn't this what /opt is for though?

Pretty much yes; the idea was "/opt all the things!" (well, before the all-the-things meme existed — it was a long time ago!) and use symlinks to make everything appear as if it was installed in the "regular" Unix places (so that we wouldn't have to patch/reconfigure every single program).
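The core of that trick is just directories and symlinks, so it can be sketched in a few lines. This is a throwaway illustration with made-up program names and a temp directory standing in for the real / hierarchy, not Gobo's actual tooling:

```python
import os
import tempfile

root = os.path.realpath(tempfile.mkdtemp())

# a self-contained, versioned install, GoboLinux-style (paths are made up)
prog_bin = os.path.join(root, "Programs", "Hello", "1.0", "bin")
os.makedirs(prog_bin)
open(os.path.join(prog_bin, "hello"), "w").close()

# an index directory standing in for the "regular" Unix place (/usr/bin)
index = os.path.join(root, "System", "Index", "bin")
os.makedirs(index)
os.symlink(os.path.join(prog_bin, "hello"), os.path.join(index, "hello"))

# anything that looks in the regular place still finds the program,
# while the file itself lives in one tidy versioned directory
print(os.path.realpath(os.path.join(index, "hello")))
```

The payoff is that unmodified software keeps working against the index, while install/uninstall becomes "create or delete one directory tree, fix up some links".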

GoboLinux has a long and cool story (I still use it!) but that's the tl;dr version of the motivation!


Sounds a bit like how Homebrew works (or used to, anyway). Based on my experiences there, one of the big issues with just keeping all versions of everything is that you end up with problems when you're trying to build/use packages with a large pool of dependencies, especially when the dependencies are interlinked.

We used to get problems like this when building ROS on homebrew where you'd have a bottled version of Gazebo (a robot simulator) which had been linked against specific versions of boost, python, opencv, whatever. And then a new version of one of those dependencies would come out, and something else in the tree would update ahead of Gazebo; now suddenly you've got Gazebo plugins which crash with ABI issues and you have to go spelunking to find out which binaries are linked to what and where the conflict is.


> Sounds a bit like how Homebrew works (or used to, anyway).

It's not by accident! In its original docs, Homebrew described itself as package management "the GoboLinux way".

Given how Homebrew has become this super-popular tool, its success makes me proud of Gobo's legacy (and a bit vindicated from all the people who told us "this model is crazy, it will never be usable!" :) )

> keeping all versions of everything is that you end up with problems when you're trying to build/use packages with a large pool of dependencies

Yes, it can be a pain! In recent years, we introduced Runner (https://gobolinux.org/runner.html) in GoboLinux as a way to address this kind of issue; a virtualization layer to present the expected dependencies at the right places.


> In recent years, we introduced Runner (https://gobolinux.org/runner.html)

Clever. Seems like it's essentially a way to "pause" the rolling distro at a known-good point, and then use that environment to build whatever high-level stuff it is with your set of frozen dependencies— this would definitely do the thing, and would have been very helpful for the ROS on Homebrew effort.

OTOH, how different is this from Debian Sid as the rolling release that is occasionally broken when a new low level dep comes in, with the supported releases of Debian, Ubuntu, and others as the shared pause-points? I think ultimately I find it easier putting the work of finding a set of interoperable versions of things onto my distro maintainer (and in exchange knowing that if I want the cutting edge I'll have to do my own backport or go to PPAs), rather than taking on that work myself and hoping for the best or risking getting stuck with a bunch of super-old stuff and no clear migration path forward.


Honestly, I have never really grokked dynamic linking. As far as I understand it, it was born when disk space was a huge limiting factor, so it was better to share dependencies between programs. But golang's model of hermetic builds sits much better with me.

I want to produce an executable asset that has everything necessary to run, regardless of the wacky local environment that it's being executed in. I have lost so much time trying to debug environment related issues because (at least in my experience), debugging them requires a deep system knowledge of the dependencies and how Foo tool/program/framework is expecting them to be.


It's not just about disk space, but also:

1. Security patching in one place

2. Memory usage

3. Allowing a program to be improved without it needing to be recompiled

Most obviously without dynamic linking, the concept of an operating system upgrade wouldn't make any sense. But, users quite like clicking "Update" and then their programs get new features.
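Point 3 is visible from any process: a shared library is resolved and mapped at run time, so replacing the one on-disk copy updates every program that loads it. A small sketch using ctypes to load the system math library (the library name assumes a glibc-based Linux):

```python
import ctypes
import ctypes.util

# resolve and load the shared math library at run time; all processes
# map the same on-disk copy, so patching it once benefits every user
name = ctypes.util.find_library("m") or "libm.so.6"  # glibc fallback name
libm = ctypes.CDLL(name)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(2.0))
```

A statically linked program would instead carry its own baked-in copy of sqrt, which is exactly the trade-off the parent comment describes.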


Is it based on the work of any other distro or its own thing ? What init system has it got ?


Hi! It isn't derived from any other distro; it's built from scratch using its own filesystem arrangement. The most recent release comes with GoboLinux's own init system, although we have been running some experiments with systemd as well.


Thanks. What's the name of the init system? Any reason not to use Void's runit? Any reason to switch to systemd? (I am truly unopinionated on init systems, not trying to start a flame war, just curious.)


This seems to be NixOS minus the Nix language, with a traditional approach to package installation, configuration, etc.


The only similarity is the non-standard filesystem hierarchy but even how that comes to be differs conceptually.


Might be a really stupid question, it's late here, be gentle: but why not just dockerize all applications and let them bring their own libraries etc. by default? Wouldn't that address similar issues, with "everything neat and tidy"?


Take a look at Fedora Silverblue, or on the server Fedora CoreOS (FCOS)/RHEL CoreOS (RHCOS) if you want to see serious mainstream effort at it (no shade at the other projects whatsoever btw, I think this is cool).

I don't currently run Silverblue full time but I could see myself doing so in the future.

Regarding Docker containerization specifically (or a different OCI-compliant runtime like Podman), it gets very hard because of things like video and sound (Pipewire should help immensely), but also there are lots of apps that you want to have uncontained access. Like, Vim becomes a lot less useful to me if its filesystem access is contained.


I don't know how well Docker works with X11, but just the fact that it would be nearly impossible to make global settings changes would be frustrating as hell.

Having a full Gnome install on every app, a full KDE install on every app. Having to install a given theme into every single docker container.

Maybe I'm missing something. I'm not even sure how this could make sense.


You wouldn't use Docker; Flatpak would be the choice for containerized GUI applications. I'm not an expert, but it looks like the KDE and GNOME environments get shared between Flatpak applications.


This makes a ton more sense. Though I'm not a big fan of Flatpak either, it's not quite as crazy weird as using Docker for the works.


Well, for starters, GoboLinux is far older than Docker. It's built on similar (sometimes identical) tools to what Docker uses, though.


By the way, if you scroll all the way down in the "Docs" page of the website there are links to older articles about the early history of Gobolinux. Some interesting discussions about the design of the Unix file system hierarchy.


Gobo likes the classic, four-month-old TXR from 2009.

https://github.com/gobolinux/Recipes/blob/master/TXR/025/Rec...

They jumped on it shortly after I made the project public with version 9, picking up version 13. They updated to 18 and then one more time to 25 and that was it.

Guess I just don't make 'em like I used to.


Wow, I'm usually very suspicious about "new takes" on Linux distros but this does look very cool indeed. And it comes with a simple WM as default rather than a DE? Double cool!


It's not a new take. It's been around for more than a decade (18 years, actually).


Yeah, I remember GoboLinux from when OSNews was still a happening place.


Still my overall favorite distro. Just wish it would "take off" and become mainstream -- it really has the right ideas, particularly with regards to package management.


If you are against traditional package management, what is the argument for this approach over something like NixOS? This doesn't seem to do anything for trying to roll back to a previous known working state, which is probably my biggest issue with the way most mainstream linux distributions package management works today.


They are contemporaries: GoboLinux is as old as Nix, and older than NixOS. There were other similar ideas around (GNU Stow), but I think Gobo was the first that went "let's make a full OS out of this".

I would say Gobo and NixOS follow roughly the same philosophy, with Nix adding the functional aspect on top, which is pretty neat. But yes, one of the motivations for Gobo was to make it easy to revert versions of programs, with commands for enabling/disabling their symlinks, and even keeping multiple versions at the same time. We didn't do full-system snapshots, but handled it on a program-by-program basis, which was our main interest at the time (tinkering with the latest window manager!).


https://github.com/NixOS/rfcs/pull/17 proposes lots and lots of symlinks in a way that reminds me of GoboLinux :).


For handling rollbacks you can use a normal layout and an A/B partition scheme (https://source.android.com/devices/tech/ota/ab) or apt-btrfs-snapshot. The problem with NixOS's approach is that you have to rebuild the world on every small change, because the folder hash changes. GoboLinux's layout at least avoids the cascading rebuilds.


We did something like this 20 years ago at a company I worked for. It's effectively pointless, and there are better, more modern methods. Here's why.

First of all, the filesystem isn't a database, it's a hierarchical index of files. It has a couple fancy features specific to managing files. Trying to force it to do more than that will result in pain, and eventually building something else around it. But the filesystem isn't even the biggest limiting factor.

Managing both the build-time and run-time dependencies of software is more complex than a filesystem can handle. To execute an application in an operating system, there must be one "environment", which is a combination of a kernel, an OS's environment and resources, and a set of applications and dependencies. The dependencies are set at build time and run time, and can't be interchanged at will without causing instability (if it works at all). You can juggle symlinks around all you want, but it doesn't result in any useful functionality other than looking at a bunch of differently versioned files that are not, and cannot be, in use.

The applications themselves can be programmed to interact with multiple versions of dependencies, but if they are not already built to do so, you cannot force them to. This underlying limitation is what forces an application environment to depend on a static tree of files. It doesn't matter how many versions of a dependency you have installed, you can only use one at a time, as each application allows you to. That's why you can only install one version of a package, and why we don't have trees of versioned file hierarchies for the dependencies. They'd be pointless. You can only use the files you can use.
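The one-version-at-a-time constraint shows up even at the scripting level. With two copies of a module installed side by side (made-up names, built in a temp directory for illustration), a single process binds to exactly one of them, whichever its search path finds first:

```python
import os
import sys
import tempfile

root = os.path.realpath(tempfile.mkdtemp())
# two "installed" versions of a hypothetical module, side by side
for ver in ("1.0", "2.0"):
    d = os.path.join(root, ver)
    os.makedirs(d)
    with open(os.path.join(d, "mylib.py"), "w") as f:
        f.write(f'VERSION = "{ver}"\n')

# only one version can be active in this process: whichever
# directory appears first on the search path wins
sys.path.insert(0, os.path.join(root, "2.0"))
import mylib
print(mylib.VERSION)  # the 2.0 copy shadows the 1.0 one
```

Keeping both version directories on disk changed nothing about what this process can actually use, which is the parent comment's point about versioned file trees.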

All popular Linux distributions today know this. They have worked along with the Linux Filesystem Hierarchy Standard over the past 26 years, and understand these issues well. Distros may implement it differently, but mostly they conform to it, because they know nothing else really makes a difference. At the end of the day, the current standard (when properly used) is as simple and effective as we can get.

Some software developers (cough Systemd cough) basically don't give a shit about the standard, and generally make the users' and distros' lives harder. This is the one place where the distros could make a difference, by forcing a structure that makes more sense. But enforcement is haphazard, so some apps can force really stupid file structures on distros, and people wonder why the filesystem is so confusing.

So you may ask, what's a better solution? The answer is containers (or really, chroot environments, but that's what a container is). I have a longer post which goes into more detail about why package management literally cannot solve any other problem than it already does, and why containers are the only "advancement" without radically redefining application development: https://gist.github.com/peterwwillis/e96854532f471c739983c0b...


Based on your experience then, do you feel something like Fedora Silverblue will ultimately be more successful than a NixOS approach?


I think silverblue will take off because it seems a lot simpler to use. I haven't tried out either yet but it looks like silverblue simply asks you to use flatpak instead of dnf while nix requires you to learn a complex config programming language.


I mean, both are going to die eventually. But Nix will be more "successful" because its niche will keep die-hards around a lot longer to maintain it. Silverblue will be abandoned much earlier due to lack of interest in maintainership when they realize it's so complex that most people won't want to deal with it.

As a comparison: Slackware is still around. It hasn't had a stable release in 5 years, but it's in active development (the latest sudo 0-day was patched the day it was reported; how's that for long-term support?). Is that "successful"? I dunno, but I'll bet Nix ends up the same way, because it's run for and by 'server people'.

Nix's approach is more for server people, and Fedora's is more for desktop people. But both are unnecessarily complicated and don't provide any significant benefit over a traditional distro's packaging for the average user. Regular-old packaging is "good enough" for 99.95% of use cases, and that which has problems is usually just crap software that's hard to make work in any situation.

Linux has always been, and will always be, a toy rather than a practical tool. But for server administrators and embedded engineers (the drivers of its most common use cases) they'll never use either of those distros, because they aren't KISS. Regular-old package management and FHS isn't sexy, but it's very well understood and fairly easy to work with. Modern industry standard is to treat old-style distros and old-style packaging as cattle, but the above distros are pets. Though Android devs probably love the two distros.

As a side note to all this: I tried a 12-year-old copy of a Linux Desktop on a 12-year-old machine the other day, I think it was Gnome 1 based. Everything on it blew the fuck away everything on my 2018 Linux laptop for speed. I was depressed the rest of the day. Not just because new software is bloated and buggy and slow as shit, but because I knew I couldn't use that old software on the modern web, and all applications are now web applications. No modern web browser will work on such an old machine, and you can't browse the web at all without a modern browser.

So Flatpak on modern distros is supposed to fix this, right? Run any version of any app? But not if new Flatpaks, or the software inside of them, eventually change so much they won't run on old Flatpak-running systems. Or people just don't package their software in a Flatpak. And there is not a single guarantee in the entire project that they will keep compatibility for a long time, because Flatpak has nothing to do with future-proofness. So this is "the future of Apps on Linux": the same BS in a new box.

Additional aside: Why the fuck is Android 10's "system files" taking up 19 fucking gigabytes on my phone? What the fuck is in there, the fucking Library of Congress? An 8K video of Linus Torvalds giving me the finger?


Another solution, which I've happily adopted, is: just use a database!

I put all modules in an SQLite database, so if I want to constrain a module to use a specific version of another module, I just do it.

It doesn't solve Unix's problem. But it does solve mine, and that's good enough for me.
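A minimal sketch of that idea, assuming a simple (name, version) schema — the module names and schema here are hypothetical, using Python's built-in sqlite3:

```python
import sqlite3

# In-memory DB for illustration; a real setup would use a file on disk.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE modules (
    name    TEXT,
    version TEXT,
    payload BLOB,
    PRIMARY KEY (name, version))""")

# Two versions of the same module can live side by side.
db.execute("INSERT INTO modules VALUES (?, ?, ?)",
           ("libfoo", "1.6.9", b"...module contents..."))
db.execute("INSERT INTO modules VALUES (?, ?, ?)",
           ("libfoo", "9.2.0", b"...module contents..."))

# Constraining a consumer to a specific version is just an exact-match query.
row = db.execute("SELECT version FROM modules WHERE name=? AND version=?",
                 ("libfoo", "1.6.9")).fetchone()
print(row[0])  # → 1.6.9
```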


It's interesting and crazy to me that Gobo is still going. I played with it back in highschool in like 2004.


"each program resides in its own directory, such as /Programs/LibX11/1.6.9 and /Programs/GCC/9.2.0."

Brings back some memories from the old days of using windows. Thanks, I'll pass.

FHS exists for many reasons.


How does this compare to Guix/Nix?


Is there a WSL version of this?


Please, a fork of GoboLinux called gobolinux without the silly mixed-case directory names.


It feels really petty and bike-shed-y to focus on this, but this was my first thought too when I saw the sample directory listings.


I agree, but the whole gobolinux idea is the epitome of bike-shedding! You just rename things and the behavior is 100% identical to before.


I think I'd prefer that as well. However, it is easy to configure a modern shell for case-insensitive completion.
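For example, a config sketch for the two most common shells (each line goes in the file named in the comment):

```shell
# bash: add to ~/.inputrc (readline setting)
set completion-ignore-case on

# zsh: add to ~/.zshrc — match typed lowercase against any-case candidates
zstyle ':completion:*' matcher-list 'm:{a-z}={A-Za-z}'
```

With either in place, typing `cd /prog<Tab>` completes to `/Programs/`.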


Completion is not an issue. The problem is that upper case just looks ugly. Like huge fluorescent decals on an otherwise streamlined matte black road bike. They put a lot of effort to create an über-elegant naming scheme, and then they spoil it for no reason.


First thing I thought of too. Mixed case when all the directory names are single words seems completely unnecessary.


End users are used to mixed-case system directory names.

  C:\Windows\System32

  C:\Users\bob\Downloads


You're using Windows examples. Directory names are case insensitive on Windows. On Linux, system directories and command names have always been lowercase only, for good reason.

Why should I have to memorize the fact that the folder for the package sudo starts with a capital S but the command sudo doesn't? It's not like the capital S carries any information, and as others have pointed out, it's ugly.


just use zsh; its autocomplete can ignore case too, no?


the file system as a database for files... what a novel concept


That is funny though because GoboLinux is 18 years old.


Does it need to be novel to be a good idea?


Have a look at the FAQ: https://gobolinux.org/faq.html

It says:

> To use GoboLinux today, the user is expected to be proficient with the command-line and willing to compile and configure their software.

If I need to compile and configure my software myself, then this is not really a distribution IMHO.


So you don't consider Gentoo or Slack distributions?


You don't really "compile stuff yourself" on Gentoo. There's still a package manager, and it does all the building and installing for you. Yes, compilation happens as a result of the user invoking the package manager, but there's a huge usability difference between "emerge foo", and download foo.tar.gz -> unpack -> configure/build/install -> track updates and repeat as needed. I would call the latter "compiling foo yourself", but not the former.
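The manual route being contrasted there looks roughly like this (the package name and URL are hypothetical):

```shell
# "Compiling foo yourself", by hand:
wget https://example.org/foo-1.0.tar.gz   # hypothetical source tarball
tar xzf foo-1.0.tar.gz && cd foo-1.0
./configure --prefix=/usr/local
make
sudo make install   # ...and repeat all of this by hand for every update

# vs. Gentoo, where one command drives the same fetch/build/install:
# emerge foo
```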


It is similar on GoboLinux as well. There is a Recipe repository that contains build scripts that you invoke with Compile, which automatically compiles and installs the package.


What kelnos said...

I respect Gentoo fine (have not tried Slack TBH); and a distribution can very well be compilation-based - but that doesn't mean that I have to sort things out manually. That's what packagers and distribution maintainers are for. Gobo seems to tell me that I need to do that work myself.


One of the "first-class-links" on the top of the homepage explains their build-from-source manager:

https://gobolinux.org/recipes.html

Might have been a good idea to just click on that, first?

FWIW, they used to have a very nice rootless install of their compile framework, allowing you to use gobolinux within your home dir, within another distro.


> Might have been a good idea to just click on that, first?

Not really. That is, they declare that I would need to do build management/configuration manually. So, either the FAQ is incorrect, or the build-from-source manager doesn't really cover everything. If it's the former, then that's good to know, but I wouldn't assume that is the case.


I'm not sure what you're talking about.

It's a build-from-source system, akin to ports or emerge, where they do the fetching of sources, compiling, and installing of global symlinks for you.

No, it doesn't write your config files for you.


From the FAQ

> Is GoboLinux "ready"?

> Yes, it is ready in the sense that you can, today, have a full operating system running 100% on GoboLinux, like many people around the world do.

> Note, however, that it is neither a beginner-oriented distribution, or an end-user binary-oriented distribution like Ubuntu. To use GoboLinux today, the user is expected to be proficient with the command-line and willing to compile and configure their software.

Yeah, nah. I recently watched a streamer try to compile some nvidia stuff for CUDA in order to use her DSLR camera at >18fps. A very nice reminder that it's never "just ./configure && sudo make install".

Especially uninstalling stuff installed that way is a horrible pain. I used to create .deb's from it, but boy am I glad those days are over.

Besides that, the motivation and sane folder names seem great.


> Especially uninstalling stuff installed that way is a horrible pain.

That was indeed one of the motivations for GoboLinux after tinkering with hand-compiled software a lot. Uninstalling is a mere "rm -rf /Programs/ProgramName" :)


I wouldn't say it's never "just ./configure && sudo make install". I compile a fair number of projects myself and (as long as deps are installed properly) it usually is just configure/install, on mainstream distros at least (Fedora, Ubuntu, Arch).

No doubt though a lot of projects could do much better. I find anything needing GTK or Qt is often much harder, usually because the required packages aren't documented anywhere, or if they are, it's for an Ubuntu release that's 4 years old.

Nvidia and other things requiring proprietary drivers are a nightmare. I will never buy another machine with an Nvidia card after the kernel 5.9 hell I've been through.



