Then you go to modify the configuration, and it turns out the configuration file is being copied and generated from a different location. It's endless misdirection that changes with each distribution, depending on the package manager.
I think I may try out Gobo the next chance I get. This is pretty cool.
- /usr/ is read-only data. /usr/bin/ contains OS-supplied binaries, /usr/lib/ contains OS-supplied libraries.
- /var/ is read-write state data and will persist between boots. /var/log/ is for log files.
- /run/ is read-write scratch on a tmpfs, and will be cleaned up between boots. So, UNIX sockets and other temporary files go here.
- /etc/ is per-system configuration overrides. There is a goal to move to "empty /etc/", meaning you should be able to remove all files in /etc/ and still have a functioning system.
- /opt/ is "vendor space". The OS should never touch /opt/.
The names are weird, but that's the intention behind the new scheme. If you want to change configuration, the default config might be shipped read-only in /usr/, but it should be overridable in /etc/. This is why systemd ships read-only unit files in /usr/, which are symlinked into /etc/ to be "enabled".
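The enable/disable mechanism really is just symlinks, which can be sketched with plain files. In this sketch, scratch directories stand in for the real /usr/lib/systemd/system and /etc/systemd/system, and "demo.service" is an invented unit name:

```shell
# Scratch stand-ins for /usr/lib/systemd/system and /etc/systemd/system
dir=$(mktemp -d)
mkdir -p "$dir/usr" "$dir/etc/multi-user.target.wants"

# The read-only unit shipped by the package lives under /usr
printf '[Unit]\nDescription=demo service\n' > "$dir/usr/demo.service"

# "systemctl enable demo" boils down to creating a symlink like this under /etc
ln -s "$dir/usr/demo.service" "$dir/etc/multi-user.target.wants/demo.service"

# The unit is now "enabled": the wants/ directory lists it
ls "$dir/etc/multi-user.target.wants"
```

Disabling is just removing the symlink again; the pristine unit file under /usr is never touched.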
I find this goal to be rather frustrating.
It means that software will hardcode its default configuration when it cannot find a file in `/etc`, and it makes it very hard to deduce what exactly can be configured.
I strongly dislike software that simply applies a default when it does not find a certain option in its configuration file, for this reason.
I would rather software simply refuse to start, with a clear warning as to why, when it cannot find a configuration option in its configuration file.
Applying a default on a missing key is another case of weak typing; I would hope the lesson has been learned by now that guessing what the user wants in case of error is a bad idea.
What is even worse is software that simply ignores malformed keys and applies a default value anyway, giving rise not only to innocent mistakes from typos, but to actual exploits: some were found by putting zero-width spaces into keys to trick administrators via social engineering.
If the system administrator truly intends for a value to be the default, he should explicitly declare it so in the configuration file.
Interpreting the system administrator's act of forgetting something as his intention for it to be a default value seems like a bad idea to me.
As for merging, I find it just as annoying when software does not support configuration directories. `foo.conf` should exist alongside `foo.conf.d/`, and if the latter exists, its contents should be parsed as well.
On an update, the system can simply install a file in `foo.conf.d` containing the defaults for any new keys added since the last version.
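A minimal sketch of that merge (the file names are invented; the last value seen for a key wins):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/foo.conf.d"
printf 'color=blue\nsize=10\n' > "$dir/foo.conf"
# A package update drops in a new default without touching foo.conf:
printf 'color=red\n' > "$dir/foo.conf.d/50-newdefaults.conf"

# Parse foo.conf first, then the drop-ins in lexical order; last key wins.
cat "$dir/foo.conf" "$dir/foo.conf.d"/*.conf \
  | awk -F= '{v[$1]=$2} END {for (k in v) print k "=" v[k]}' | sort
```

Here the drop-in overrides `color` while `size` keeps its base value, so upgrades never have to rewrite the admin's `foo.conf`.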
Isn't a sensible default what makes certain software a lot easier to use than others? Apple seems to be good at this. Ten years ago, Ubuntu brought this to the Linux world to an extent. Personally I like the configurability of Linux, but I am past the days of spending hours tweaking it, and I am quite happy if someone supplies me with sensible defaults.
Of those, only 4) relates to typing. But even then it often bears no relation to real typing, because often there's no schema or typing at all, and usually the application has no idea which version a given configuration relates to.
In addition to achieving the above 4 points, we also need to be able to differentiate between default and modified configuration, so the application can override defaults successfully. You can keep default and modified configs separate in a number of ways. You can hard-code the defaults in the application, which is fine for the application, but a pain in the ass for the user. And as a packager, you cannot expect files in /etc/ to always be "defaults", as the user/administrator expects to be able to modify them at any time. And if you want to upgrade software, there needs to be a way to differentiate the versions of default configuration to compare the modified configuration against.
The best way to do this is to package default configuration in /usr/share/. The user then has the job of upgrading their modified configuration in /etc/, and they can use /usr/share/ as a reference, in combination with some UPGRADING guide.
Distros have long had a convention to package a default distro-specific configuration in /etc/, so that when the app first starts, it can work out of the box. But then when you'd upgrade a package, the distro would have to figure out what to do with the once-default-but-now-modified files in /etc/. This just adds unnecessary burden on the distro. Instead, the user needs to look at their own manual configuration and update it if necessary. But the default configuration for the system should remain virgin and independent, in /usr/share/.
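The lookup order described above boils down to "first match wins". A sketch, with scratch directories standing in for the real /etc and /usr/share, and `app.conf` as an invented file name:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/etc" "$dir/usr/share/app"
printf 'theme=light\n' > "$dir/usr/share/app/app.conf"   # pristine packaged default
printf 'theme=dark\n'  > "$dir/etc/app.conf"             # admin's modified config

# Check /etc first; fall back to the packaged default in /usr/share.
for d in "$dir/etc" "$dir/usr/share/app"; do
    if [ -f "$d/app.conf" ]; then cat "$d/app.conf"; break; fi
done
```

Delete the file in /etc and the application falls back to the default under /usr/share, which package upgrades can replace freely.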
And of course, none of this applies to configuration that can't be default. The application should always die if it doesn't find required configuration.
So maybe I would add that /usr/local is a special case of /usr for locally built* stuff, with sub-nodes imitating the corresponding nodes in /.
* Not always locally built, though. *BSDs put binary packages there too.
However, it's not hard to figure out where files were installed, and which package a file came from. The package manager keeps track of all of that, and you can ask it. Example for Debian or Ubuntu:
# list all files from a package
dpkg-query -L <package>
dpkg-query -L <package> | grep bin/ # see just commands
# find what package a file is from
dpkg-query -S /path/to/some/thing
For stuff not installed as packages, hopefully it's either in a single directory in /opt, or among a small collection of stuff spread in /usr/local, or in a self-contained directory in your home folder.
But almost nothing is just a plain App Folder on macOS anymore. Starting over a decade ago, anything from Microsoft or Adobe or Google, or even Apple themselves, requires you to double-click an installer that puts suspicious bits everywhere, just like on Windows. They want to auto-launch and auto-update and integrate and engage, etc. And if you want to run Photoshop or Office or Chrome, what are you going to do, not let it install all its junk all over? Every regular person will certainly just let it, and all non-trivial companies will take maximum advantage. Skype and Office were automatically installing crappy Firefox extensions and such, so Firefox had to add some annoyingly complex extension-install-signing scheme to try to prevent that ...
This is why Apple and Microsoft went towards sandboxed App Stores. The user won't choose less-abusively-packaged software, but even Big Software Corp can feel pressure to get its app into the store and sandboxed.
Linux distros never had this problem or needed this solution, because they re-package the software they distribute. If xscreensaver or calibre or virtualbox are programs people want to run, but their authors do annoying or dumb things, that can be patched by the distro packager, and the package manager's file database remains in quite good shape.
Reducing complexity is good in and of itself and is always a win as long as you don't lose important functionality.
The notion of packages or installers vomiting files all over the place on systems really needs to die. Mobile OSes mostly got this right, and MacOS has been moving in this direction. Windows and older Linux distributions are the only things left that treat the system like a stir fry of random shit. Of the two this is probably easier to fix on Linux, since Windows is hamstrung by a ton of legacy requirements that are much harder to work around.
# List all files from a package
rpm -ql <package>
# Where did file come from
rpm -qf <path>
Also, man! Do I love the Arch wiki:
I do like Arch based Linux distributions. In my opinion they are the best all around for servers and even containers. Pacman is an excellent package manager.
Backtrack used to do something like this and while it was easier compared to a lot of other debian/ubuntu based things, I still ran into some pain points.
A great command to keep handy is
ls -la `which <program>`
Edit: FHS == Filesystem Hierarchy Standard - https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
Many scripters who need wide compatibility fall back to the builtins `type` and `command -v` instead of `which`: they resolve any bash aliases that are set, give you more truthful output than `which`, and bash is bash on all platforms:
# command -v which
/usr/bin/which
# type which
which is hashed (/usr/bin/which)
akira@akira:~ which which
which: shell built-in command
kaneda@tetsuo:~ which type
kaneda@tetsuo:~ type type
type is a shell builtin
kaneda@tetsuo:~ command -v type
type
When wide compatibility is needed it is best to avoid bash specific solutions.
You can read about it at a meta level over here; each distro has its own wiki/working page if you Google around: https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
Most uses of Linux are single-user: running as a server, or, even more often, as just a container.
But even if you consider the above to be a single-user setup (which is reasonable) I still think there's value in consistency. I personally get a ton of value from having CentOS and RHEL systems being laid out the same as my Fedora Laptop and Desktop system, and having all that knowledge transfer both ways. Being able to add a user to a "single user" setup too is very useful. It would suck if adding a second user required completely changing everything about the filesystem.
/lib for libraries
/usr/include for C includes
/etc for configs
FTA you linked:
> Most Linux distributions follow the Filesystem Hierarchy Standard and declare it their own policy to maintain FHS compliance. GoboLinux and NixOS provide examples of intentionally non-compliant filesystem implementations.
Debian and Ubuntu adhere to the FHS; they're no more difficult in this regard than Arch or Fedora (or distributions based on these). GoboLinux and NixOS don't, because they serve a higher goal: GoboLinux aims for user-friendliness/explorability (with backwards compatibility hidden behind an optional kernel extension), while NixOS aims to solve dependency hell (among various other goals). Learning curves are an obvious drawback.
> ls -la `which <program>`
Backticks should be avoided; use ls -la $(which <program>) instead (avoid the $ in Fish).
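One concrete reason beyond readability: `$(...)` nests cleanly, while backticks require escaping the inner pair. A small illustration (the path here is just example input for the string operations):

```shell
# Nested command substitution is straightforward with $(...):
echo $(basename $(dirname /usr/bin/which))
# prints "bin"
# The backtick equivalent needs escaped inner backticks:
#   echo `basename \`dirname /usr/bin/which\``
```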
You may want to consider NixOS, which is equivalent to Gobo except for the fact that linking across packages is static instead of dynamic. This has very deep consequences.
Nix has tons of developers, it's one of the most active projects on Github. Past the initial difficulties, most common usecases are really easy.
Honestly, there's nothing sad about this.
The idea that a package is “installed somewhere” rather than “installed”, and has files that “belong to it”, seems to be a Windows-ism in mentality, where it is common to start applications by navigating to the folder they are installed in and launching the executable there from an absolute path.
Many packages consist of nothing more than adding fonts to the system; it seems a bad idea to me to put all that in a per-package folder rather than simply in `/usr/share/fonts`, with the package manager tracking which file therein belongs to which package.
I use zpkg to handle this, it has been made specifically for this use case.
I'll come back to it and do a more comprehensive audit. I haven't released any zerodays I've found, ever, but recently I was thinking about making a blog. This might be some good material.
My copy of NixOS had no password on the root user by default, which is not ideal, but I assume most deployments aren't like that (right?). I was able to become other users on GuixSD using the SUID binaries the distro ships with, but not root. Not yet. The surface is much larger on both of those distros than on the mainstream OSes. I may be able to pull off a root LPE, but I'd need to look for a full day at least.
That isn't the default behavior for NixOS. From the docs:
> If set to null (default) this user will not be able to log in using a password (i.e. via login command).
The installer also asked for a root password when I installed NixOS years ago and it still does.
By no password, do you mean:
- it is impossible to log in as root with a password (a good thing, I'd think, as people should be using sudo), or
- it's possible to log in as root without any authentication (a bad thing, obviously)?
I think the big difference is:
- Nix: the filesystem is rarely the right data structure (I agree); other things are needed.
- Nix: the Store, and Nix itself, are more freeform; package/distro norms are provided by higher layers (Nixpkgs, NixOS).
Most people just want to cargo-cult what others do, and most things initially get popular because some way of deploying software requires it / makes it the path of least resistance. None of that is about practicality, just short-sightedness and ecosystem effects.
Just wondering if you can compare/contrast NixOS with something like Fedora Silverblue or Fedora CoreOS.
Silverblue is still pretty much imperative: you install/remove RPM packages and that's it, and you use Flatpak for everything else. With NixOS you have to describe your entire system in a programming language. NixOS gives you so much more freedom to do what you want, but you have to work for it: learn a language, learn its constructs, etc.
I enjoy both though, feels like the right direction to go, you just need to choose how you want to interact with your OS.
I personally think containerization is just too much of a PITA to use for software that doesn't live in a bubble, e.g. desktop software. The capabilities revolution (Capsicum or CloudABI) would be more secure and easier to program with, so we just need to force our way there, and then everything will take care of itself.
Has that been your experience?
With Ubuntu, Debian, and even Arch, it's pretty much install and go.
Also, it requires significant disk space, which I usually don't have on my laptops.
I'm also waiting for the parts of my new desktop to come in, and I'm excited that all it takes is cloning my config repo and applying it to get exactly the same setup. With Arch I would have reinstalled to clear the cruft that had accumulated; with NixOS I can just audit my configs to see an entire list of the cruft and easily undo it by removing lines.
One of my hobbies is recycling older machines. I've installed NixOS on Chromebooks with 16GB internal drives. It's possible if you remember to garbage collect after each successful upgrade (and stick to 64-bit machines).
I tend to treat my laptops as disposable and keep my important data on a file server so I don't rely on local disk space.
I do the same, except I use an extensive dotfiles, config, and bootstrap script repo.
So I like the idea of Nix for that reason. It's going to be hard to let go of Debian/Ubuntu, but I'd really like to have a perfectly reproducible environment like Nix.
Thanks for your reply.
0. However, I also unfortunately have to use Windows sometimes, so I dual boot, and that takes up significant room on my smallish laptop drive.
If you just want the ability to install arbitrary versions of packages alongside each other, have painless rollbacks, or bake config files into your install, you can do that there.
Once you get comfy with nix on top of another distro then think about whether you want that approach with the entire stack.
Nix/lorri/direnv, in particular basically obsoletes nvm, rbenv, etc.
It's also funny how Windows is supposedly better: there are at least Program Files and Program Files (x86), yet some programs still just install into C:\ (I think Python used to do that), and there are the various Windows folders. Let's not forget the myriad of places where programs drop data. And because Windows has no proper package management, you actually need to know where things are.
Well, Linux is not a walled garden with a company that takes care of everything behind the scenes, so it should be easy for a beginner to wrap their head around the system and understand at least its logic. It's good not to have to rely on specialists for simple things.
And to what level does that apply? Should the user understand everything? Also why is a directory system of /Firefox /Chrome /Gimp ... easier? Where do libraries sit in this system (is that easy to understand?)? How do I determine if I build something what library I'm going to use ...
I think the perception of a certain way being easier is largely based on what one is used to.
I certainly am always lost when I sit in front of a Windows machine. Just the other day I was trying to get ctypes to find a library and ended up just adding lots of things to LIB or PATH (I'm sure there are better ways to do it). BTW, I was asked by a Windows user for help. This is not a dig at Windows, just my argument that filesystem layouts are largely conventions (even if often for good reasons), and yes, one has to invest some time to understand them.
For someone who wants to understand how Linux works and how many things work together, I highly recommend giving Linux From Scratch a try. Most people fall into the trap of just copy-pasting the commands (I was one of them as well), so you probably want to build it a couple of times, and try to update your system. Hopefully you run into a couple of roadblocks, which force you to think a bit more about what you are doing, which is where the learning experience comes from, IMO.
Oh wait, /bin might not exist. It's in /usr/bin.
Wait, what even is /bin? No, it's not the recycling bin - see, it's short for binary. No, there isn't a separate directory for plaintext scripts, why do you ask?
Well surely it's a directory for user-specific programs then, right? Nope, it's read-only stuff.
Well, why not symlink it to /ro? Well, because symlinking and transitioning are too much effort, and because that's very English-language-centric of you; what about other languages where /usr and /ro are equally random gibberish?
Well, how's other-language support then? For instance, is it possible to reliably use Korean characters in your username without a lot of programs breaking? Nah, see you're better off just writing your Korean name in english characters. Much more reliable that way.
But wait a sec, let's go back to that symlinking thing: if symlinking and migrating is too much effort, then how did we ever switch from /bin to /usr/bin? Well duh, we just symlinked /bin to /usr/bin.
Great, can we do that for /usr and /ro? Nope.
Can we please just admit that Linux's ease of use is burdened by tons of historical baggage?
Then we'll have to either say "we just don't care that much", or actually fix it.
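For what it's worth, the /bin-to-/usr/bin move mentioned above really is just a symlink on merged-/usr distros, which can be sketched in a scratch directory (the file name `sh` here is just an example):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/usr/bin"
ln -s usr/bin "$dir/bin"   # relative symlink, as usrmerge distros create it
touch "$dir/usr/bin/sh"

readlink "$dir/bin"        # prints: usr/bin
ls "$dir/bin"              # prints: sh  (visible through both paths)
```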
Moreover, most of the things you are saying are wrong.
1. Any Linux distribution that follows the FHS (pretty much all the big ones) has a /bin directory. It usually doesn't contain Python, though.
2. /bin is also not necessarily read-only (I'm not sure where you got that). Maybe you should have a look at the FHS?
3. Not sure what you mean by symlinking /usr and /ro. You can certainly symlink /usr to /ro, but I'm not sure what you are trying to achieve.
4. UTF-8 usernames are certainly possible on Linux distributions that have full UTF-8 support (which I imagine are again most).
Your proposal of just fixing the "historical baggage" for some claimed ease of use would actually break lots of people's workflows for questionable gain. (I still claim that directory naming is really at the bottom of the list of changes that would improve ease of use; and Linux does have changes that are required to improve ease of use.)
Moreover, if directory naming is important to someone, they can use Gobolinux, so it's not like you have to follow the old "historical baggage".
> I think the perception of a certain way being easier is largely based on what one is used to.
I think the perception of things being "not that hard" is a bias one acquires once accustomed to the quirkiness of a system, no longer perceiving what made learning painful or slow. I like to program in C now that I have shot myself in the foot so many times; I am even proud that I have mastered it, and I learned a ton in the process. Nevertheless, I am glad that Zig is approaching 1.0, and I would recommend it to any beginner over C.
I wonder why it's such a niche distro; with time this should have caught on and become more popular!
Then why does the readme say:
> cd /
There is a kernel module that hides them from directory listings, but the files can still be accessed by their paths.
I'm not sure to what level this is an absolute disaster.
Including, I would assume, rather security sensitive software.
If you don't want it, don't load it. Absolutely nothing in GoboLinux requires that kernel module to be running. Again, it's purely for aesthetic purposes.
Sometimes requiring the user to know things that cannot be discovered by experimenting with the system is a necessary evil, but there should be a good reason for the requirement.
Contrary to what you seem to believe, the fact that purely aesthetic reasons motivated the designers of GoboLinux to create this footgun is a bad sign, not a good sign.
Anyone who wants to try out GoboLinux for real would at least do some basic reading first. It's that distinct a distro that you kind of have to, and there's no problem with that. Not everything innovative can be expected to be digested without a modicum of work up front.
I mean, if people install NixOS without any research up front, and throw up their hands in disgust when it doesn't look like Red Hat Enterprise for Big Boys v10, does that mean that NixOS are messing up, and they should change how they do things?
I think not.
Sometimes the simplest alternative implementations require the most up-front understanding, because their simplicity challenges long-held preconceptions about how things should be.
That's Gobo in a nutshell.
At its heart, it's actually a very simple distro, which uses simple tools, and the filesystem, to lay everything bare in front of the user.
Funnily enough, Gobo is one of the few distros where you can refer to /bin/<any-linux-executable>, and as long as you actually have that pkg installed (yes, under /Programs), then /bin/xyz WILL be found. Guaranteed.
That's by design.
This entire subthread is a bikeshed, started by mistaken assumptions ("security is broken", "some pkgs won't work", etc, etc) over their GoboHide kernel module.
None of it is true, and all too often it's a curse which innovative distros have to deal with, somewhat unfairly. I mean, everything is explained in excruciating detail on their website (and was over a decade ago, when I first used it).
And still, years later, we have folks on Hacker News opining at length about how the GoboHide kernel module causes pkgs to break because they can't find /bin/whatever (FALSE), or how gobo doesn't have the ability to build pkgs (someone else brought up this gem on another subthread).
People, if you haven't come across this distro before, then please at least do the basic once-over of their website before jumping onto Hacker News and coming up with X technical reasons why this project is a failure ...
Eg: there's a fantastic essay by Hisham from long ago explaining the motivations behind Gobo:
Plus, a different organization of the filesystem, where everything that belongs to a package is stored under the same base path rather than being spread across the whole FS. I don't really remember GoboLinux's ideas in detail, but IIRC they had something about leveraging the filesystem itself as part of the package manager.
> Say goodbye to the old problem of having the package manager complain that libXYZ is not installed even though you can see that it is there.
That is... also a problem I have never had.
I like the idea of GoboLinux just for giving a new take on something more streamlined (because yes, the FHS and de-facto systems before it have accumulated a lot of cruft), but I think it would be more productive for the GoboLinux folks to advocate for their system based on its own merits, not on semi-imagined faults of the more common system.
Being able to have multiple versions of programs and their dependencies (which would help avoid dependency hell) installed is a big thing! That alone is a really nice benefit of GoboLinux's system.
> /bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place. Amusingly, this makes us even more compatible than some more standard-looking distributions.
This just feels wrong to me. I appreciate that Linux is all about choice, but I really wouldn't want this.
... but come to think of it - that means that you can't just put your package in /programs/MyApp/12.3.4 - because if you switch your system to that version your dependencies may well break. Hmm.
> Isn't this what /opt is for though?
Pretty much yes; the idea was "/opt all the things!" (well, before the all-the-things meme existed — it was a long time ago!) and use symlinks to make everything appear as if it was installed in the "regular" Unix places (so that we wouldn't have to patch/reconfigure every single program).
GoboLinux has a long and cool story (I still use it!) but that's the tl;dr version of the motivation!
We used to get problems like this when building ROS on homebrew where you'd have a bottled version of Gazebo (a robot simulator) which had been linked against specific versions of boost, python, opencv, whatever. And then a new version of one of those dependencies would come out, and something else in the tree would update ahead of Gazebo; now suddenly you've got Gazebo plugins which crash with ABI issues and you have to go spelunking to find out which binaries are linked to what and where the conflict is.
It's not by accident! In its original docs, Homebrew described itself as package management "the GoboLinux way".
Given how Homebrew has become this super-popular tool, its success makes me proud of Gobo's legacy (and a bit vindicated from all the people who told us "this model is crazy, it will never be usable!" :) )
> keeping all versions of everything is that you end up with problems when you're trying to build/use packages with a large pool of dependencies
Yes, it can be a pain! In recent years, we introduced Runner (https://gobolinux.org/runner.html) in GoboLinux as a way to address this kind of issue; a virtualization layer to present the expected dependencies at the right places.
Clever. Seems like it's essentially a way to "pause" the rolling distro at a known-good point, and then use that environment to build whatever high-level stuff it is with your set of frozen dependencies— this would definitely do the thing, and would have been very helpful for the ROS on Homebrew effort.
OTOH, how different is this from Debian Sid as the rolling release that is occasionally broken when a new low level dep comes in, with the supported releases of Debian, Ubuntu, and others as the shared pause-points? I think ultimately I find it easier putting the work of finding a set of interoperable versions of things onto my distro maintainer (and in exchange knowing that if I want the cutting edge I'll have to do my own backport or go to PPAs), rather than taking on that work myself and hoping for the best or risking getting stuck with a bunch of super-old stuff and no clear migration path forward.
I want to produce an executable asset that has everything necessary to run, regardless of the wacky local environment that it's being executed in. I have lost so much time trying to debug environment related issues because (at least in my experience), debugging them requires a deep system knowledge of the dependencies and how Foo tool/program/framework is expecting them to be.
1. Security patching in one place
2. Memory usage
3. Allowing a program to be improved without it needing to be recompiled
Most obviously without dynamic linking, the concept of an operating system upgrade wouldn't make any sense. But, users quite like clicking "Update" and then their programs get new features.
I don't currently run Silverblue full time but I could see myself doing so in the future.
Regarding Docker containerization specifically (or a different OCI-compliant runtime like Podman), it gets very hard because of things like video and sound (Pipewire should help immensely), but also there are lots of apps that you want to have uncontained access. Like, Vim becomes a lot less useful to me if its filesystem is contained.
Having a full Gnome install on every app, a full KDE install on every app. Having to install a given theme into every single docker container.
Maybe I'm missing something. I'm not even sure how this could make sense.
They jumped on it shortly after I made the project public with version 9, picking up version 13. They updated to 18 and then one more time to 25 and that was it.
Guess I just don't make `em like I used to.
I would say Gobo and NixOS follow roughly the same philosophy, with Nix adding the functional aspect on top, which is pretty neat. But yes, one of the motivations for Gobo was to make it easy to revert versions of programs, with commands for enabling/disabling their symlinks, and even keeping multiple versions at the same time. We didn't do full-system snapshots, but handled it on a program-by-program basis, which was our main interest at the time (tinkering with the latest window manager!).
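That program-by-program rollback can be sketched with a versioned tree and a "Current" symlink. The directory names below are illustrative, in the spirit of Gobo's /Programs layout, not its exact tooling:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/Programs/Foo/1.0" "$dir/Programs/Foo/2.0"

ln -sfn 2.0 "$dir/Programs/Foo/Current"   # enable version 2.0
readlink "$dir/Programs/Foo/Current"      # prints: 2.0

ln -sfn 1.0 "$dir/Programs/Foo/Current"   # roll back to 1.0; 2.0 stays on disk
readlink "$dir/Programs/Foo/Current"      # prints: 1.0
```

Both versions coexist on disk the whole time; "reverting" is a single atomic symlink swap.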
First of all, the filesystem isn't a database, it's a hierarchical index of files. It has a couple fancy features specific to managing files. Trying to force it to do more than that will result in pain, and eventually building something else around it. But the filesystem isn't even the biggest limiting factor.
Managing both the build-time and run-time dependencies of software is more complex than a filesystem can handle. To execute an application in an operating system, there must be one "environment", which is a combination of a kernel, an OS's environment and resources, and a set of applications and dependencies. The dependencies are set at build time and run time, and can't be interchanged at will without causing instability (if it works at all). You can juggle symlinks around all you want, but it doesn't result in any useful functionality other than looking at a bunch of differently versioned files that are not, and cannot be, in use.
The applications themselves can be programmed to interact with multiple versions of dependencies, but if they are not already built to do so, you cannot force them to. This underlying limitation is what forces an application environment to depend on a static tree of files. It doesn't matter how many versions of a dependency you have installed, you can only use one at a time, as each application allows you to. That's why you can only install one version of a package, and why we don't have trees of versioned file hierarchies for the dependencies. They'd be pointless. You can only use the files you can use.
All popular Linux distributions today know this. They have worked along with the Linux Filesystem Hierarchy Standard over the past 26 years, and understand these issues well. Distros may implement it differently, but mostly they conform to it, because they know nothing else really makes a difference. At the end of the day, the current standard (when properly used) is as simple and effective as we can get.
Some software developers (cough Systemd cough) basically don't give a shit about the standard, and generally make the users' and distros' lives harder. This is the one place where the distros could make a difference, by forcing a structure that makes more sense. But enforcement is haphazard, so some apps can force really stupid file structures on distros, and people wonder why the filesystem is so confusing.
So you may ask, what's a better solution? The answer is containers (or really, chroot environments, but that's what a container is). I have a longer post which goes into more detail about why package management literally cannot solve any other problem than it already does, and why containers are the only "advancement" without radically redefining application development: https://gist.github.com/peterwwillis/e96854532f471c739983c0b...
As a comparison: Slackware is still around. It hasn't had a stable release in 5 years, but it's in active development (the latest sudo 0-day was patched the day it was reported; how's that for long-term support?). Is that "successful"? I dunno, but I'll bet Nix ends up the same way, because it's run for and by 'server people'.
Nix's approach is more for server people, and Fedora's is more for desktop people. But both are unnecessarily complicated and don't provide any significant benefit over a traditional distro's packaging for the average user. Regular-old packaging is "good enough" for 99.95% of use cases, and that which has problems is usually just crap software that's hard to make work in any situation.
Linux has always been, and will always be, a toy rather than a practical tool. But server administrators and embedded engineers (the drivers of its most common use cases) will never use either of those distros, because they aren't KISS. Regular-old package management and the FHS aren't sexy, but they're very well understood and fairly easy to work with. The modern industry standard is to treat old-style distros and old-style packaging as cattle, but the above distros are pets. Though Android devs probably love the two distros.
As a side note to all this: I tried a 12-year-old copy of a Linux Desktop on a 12-year-old machine the other day, I think it was Gnome 1 based. Everything on it blew the fuck away everything on my 2018 Linux laptop for speed. I was depressed the rest of the day. Not just because new software is bloated and buggy and slow as shit, but because I knew I couldn't use that old software on the modern web, and all applications are now web applications. No modern web browser will work on such an old machine, and you can't browse the web at all without a modern browser.
So Flatpak on modern distros is supposed to fix this, right? Run any version of any app? But not if new Flatpaks, or the software inside them, eventually change so much that they won't run on old Flatpak-running systems. Or if people just don't package their software as a Flatpak. And there is not a single guarantee in the entire project that they will keep compatibility for a long time, because Flatpak has nothing to do with future-proofing. So this is "the future of Apps on Linux": the same BS in a new box.
Additional aside: Why the fuck is Android 10's "system files" taking up 19 fucking gigabytes on my phone? What the fuck is in there, the fucking Library of Congress? An 8K video of Linus Torvalds giving me the finger?
I put all modules in an SQLite database, so if I want to constrain a module to use a specific version of another module, I just do it.
It doesn't solve Unix's problem. But it does solve mine, and that's good enough for me.
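A minimal sketch of that SQLite approach, with a hypothetical schema (the commenter doesn't describe theirs): modules keyed by (name, version), plus a constraints table pinning one module to a specific version of another.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE modules (name TEXT, version TEXT, source BLOB,
                      PRIMARY KEY (name, version));
CREATE TABLE constraints (module TEXT, dep TEXT, dep_version TEXT);
""")

# Two versions of the same module can coexist, unlike files in one /usr/lib:
db.execute("INSERT INTO modules VALUES ('json', '1.0', x'00')")
db.execute("INSERT INTO modules VALUES ('json', '2.0', x'00')")

# Pin the (hypothetical) module 'app' to json 1.0, even though 2.0 exists:
db.execute("INSERT INTO constraints VALUES ('app', 'json', '1.0')")

# Resolving 'app' dependencies picks exactly the pinned version:
row = db.execute("""
    SELECT m.version FROM constraints c
    JOIN modules m ON m.name = c.dep AND m.version = c.dep_version
    WHERE c.module = 'app' AND c.dep = 'json'
""").fetchone()
print(row[0])  # 1.0
```

The database sidesteps the filesystem's one-name-one-file limitation entirely, which is presumably why it solves this commenter's problem even though it doesn't solve Unix's.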
Brings back some memories from the old days of using windows. Thanks, I'll pass.
FHS exists for many reasons.
Why should I have to memorize that the folder for the package sudo starts with a capital S, but the command sudo doesn't? It's not like the capital S carries any information, and as others have pointed out, it's ugly.
> To use GoboLinux today, the user is expected to be proficient with the command-line and willing to compile and configure their software.
If I need to compile and configure my software myself, then this is not really a distribution IMHO.
I respect Gentoo fine (have not tried Slack TBH); and a distribution can very well be compilation-based - but that doesn't mean that I have to sort things out manually. That's what packagers and distribution maintainers are for. Gobo seems to tell you that I need to do that work myself.
Might have been a good idea to just click on that, first?
FWIW, they used to have a very nice rootless install of their compile framework, allowing you to use gobolinux within your home dir, within another distro.
Not really. That is, they state that I would need to do build management/configuration manually. So either the FAQ is incorrect, or the build-from-source manager doesn't really cover everything. If it's the former, then that's good to know, but I wouldn't assume that is the case.
It's a build-from-source system, akin to ports or emerge, where they do the fetching of sources, compiling, and installing of global symlinks for you.
No, it doesn't write your config files for you.
> Is GoboLinux "ready"?
> Yes, it is ready in the sense that you can, today, have a full operating system running 100% on GoboLinux, like many people around the world do.
> Note, however, that it is neither a beginner-oriented distribution, or an end-user binary-oriented distribution like Ubuntu. To use GoboLinux today, the user is expected to be proficient with the command-line and willing to compile and configure their software.
Yeah, nah. I recently watched a streamer try to compile some nvidia stuff for CUDA in order to use her DSLR camera at >18fps. A very nice reminder that it's never "just ./configure && sudo make install".
Uninstalling stuff installed that way is an especially horrible pain. I used to create .deb's from it, but boy am I glad those days are over.
Besides that, the motivation and sane folder names seem great.
That was indeed one of the motivations for GoboLinux after tinkering with hand-compiled software a lot. Uninstalling is a mere "rm -rf /Programs/ProgramName" :)
No doubt though a lot of projects could do much better. I find anything needing GTK or Qt is often much harder, usually because the required packages aren't documented anywhere, or if they are, it's for an Ubuntu release that's 4 years old.
Nvidia and other things requiring proprietary drivers are a nightmare. I will never buy another machine with an Nvidia card after the kernel 5.9 hell I've been through.