Then study for the RHCSA and RHCE v7 (v8 is all Ansible now). Arch Linux and RHCSA and RHCE taught me a ton about Linux and made me super comfortable with my Linux desktops and servers.
Goes even deeper than Arch. The desktop profiles make it almost set-and-forget now.
I think the main sticking point is the documentation. It was certainly the reason I went with Gentoo at first: not the control over the system, but the fact that there was a great wiki (that was often down :D) and great official documentation that led me through every step of the way. I'm of the firm belief that any Linux install can teach you everything, as long as the documentation doesn't skip over things. You won't learn anything from LFS if you just copy-paste commands; at that point you might as well install Ubuntu.
Back in the day we used to say: "when you know Slackware you know Linux; when you know Red Hat, all you know is Red Hat".
When I got into Slackware my computer wasn't 100% working all of the time, but I learned a lot. I then moved on, and years later I got RHCSA certified; that's very worth it too.
Not sure about Arch, I never really used it, but I can vouch for Slackware, Gentoo and the Red Hat/CentOS/RHCSA route.
If you’re looking to just learn about Linux, you can’t go much further than Slackware. It’s as Linux as you can get: you can learn other distros (which admittedly is easier now that many are built on a few common platforms - GNOME, Xfce, systemd, etc.), but Slackware gives you a kernel, coreutils, and a bunch of packages - can’t get much simpler than that.
Also once you track -current and get around SlackBuilds you can install almost anything you want, it’s a full-fledged and modern distro. Without dependency management to be sure, but if you’ve got a well-made distro and you compile most of the software you install, who needs it?
I remember needing Clang for ccls but finding that the included version of CMake didn't support C++17. Then, when I tried upgrading CMake, a library named libarchive was too old to use as a dependency, so I essentially spent hours upgrading things to versions not supported on Slackware just to get to the point where I could use Clang. I'm pretty sure I wouldn't have spent all that time with pacman, where the important packages are all up to date and I could have gotten to work within a few minutes.
Honestly only the distros with a large enough community like Arch or Gentoo would probably have enough people to submit and maintain even the most obscure packages.
Also, there was no easy way to do a system upgrade. In Arch Linux, which is what I'm used to, you can usually run 'pacman -Syu' without too much fuss, but the Slackware documentation for upgrading the kernel is a multi-step process that I couldn't successfully complete. (It even wishes you "good luck".) Third-party packages on SlackBuilds have no automatic dependency resolution either (!), which was very painful every single time I needed a package outside the official repository - which in my case was all the time. I could literally feel my heart sink every time a SlackBuilds package required upgrading a system dependency. Apparently this was a political decision rather than a technical one, which I don't understand, because pacman handles it fine.
I also had integrated graphics issues that caused Firefox to hang on startup or Love2D to freeze, forcing me to kill the X server, for which I found no solution. Arch ran them fine. This was on a ThinkPad, by the way.
It could just be a philosophy thing, or that people with my specific needs are better served by other distros, or maybe I've been hopelessly spoiled by Arch and can't see the benefit Slack brings over it yet. But I have to admit I am still mildly curious about it, just not in the sense of using it as a daily driver.
1. format a disk with a bootable, active ext4 partition
2. download the kernel, then make defconfig && make
3. copy the kernel to the partition, install syslinux's mbr.bin, and write an extlinux.conf (probably copy modules too)
4. copy busybox over
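The extlinux.conf in step 3 is the part most guides gloss over; a minimal one might look like this (the paths and root device are illustrative assumptions, not prescribed by the steps above):

```
# /boot/extlinux/extlinux.conf - minimal sketch; adjust paths and root device
DEFAULT linux
PROMPT 0
TIMEOUT 0
LABEL linux
  LINUX /boot/bzImage
  APPEND root=/dev/sda1 rw init=/bin/sh
```

With a busybox-only rootfs, `init=/bin/sh` drops you straight into a shell; swap it for busybox's init once you have an inittab.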
I’m not saying you won’t learn, but I learnt a heap by avoiding systemd on Gentoo. I eventually gave in when it became clear that avoiding it was no longer viable for various reasons, but the knowledge I gained has mostly stayed relevant.
That being said, every time I try to do something remotely advanced with NetworkManager I fail. I can no longer set static IPs, or customise DNS on a DHCP-controlled connection, without Google.
I miss just editing /etc/network/interfaces and /etc/resolv.conf by hand. I get that that sucks for laptops, but why can't the rest of us just have an exceedingly simple and pleasant setup?
Here's a sample config file from my home desktop using a static IP:
$ cat /etc/systemd/network/lan.network
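A typical static-IP .network file for systemd-networkd looks like this (the interface name and addresses here are placeholders, not the poster's actual values):

```ini
[Match]
Name=enp3s0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1
```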
netplan is one of those Canonical projects that'll probably disappear in a year or two.
I feel like after all these years, Canonical has still not figured out how to make Open Source work.
Nearly everything they do leads to controversy, and them eventually accepting the community alternative with a huff.
Does the 'ip' tool update one of those files? How do I tell systemd to reload the configuration? Is NetworkManager running or not? Does it matter? Does NM update these files, or do something magical to the interface? Are these even the right files? The best answer I could arrive at is "who knows?"
Every time I attempted to change the network interface settings, it only stuck around until the machine rebooted. Something was resetting it back to a static IP, and I had no tools to tell me what.
Why did distro maintainers ever allow things to become this confusing? Why do we have so many tools and config files for the same thing?
In the end I reinstalled a fresh copy of Fedora on the machine and set the DHCP configuration in the setup wizard. It's that bad.
My experience tells me to anticipate a comment along the lines of "Hey, those programs and config files are all actually the same tool, and you must have just been configuring it wrong." If that's the case - it's an even stronger point that this chaos has reached a critical mass, where parts of the same networking configuration tool are so disjoint that they don't visibly interoperate at all.
It tries to be helpful and rewrites network configuration files to align them with its own configuration.
NetworkManager used to be configured to save its config to /etc/sysconfig/network-scripts on Fedora, for backwards compatibility (see /etc/NetworkManager/NetworkManager.conf).
/etc/network/interfaces is the Debian alternative to network-scripts. I don't use Debian based distros much so I don't know too much about that.
There's also systemd's networkd, which has its own config in /etc/systemd/network.
Honestly I'm surprised you've got a Fedora machine with networkd. Usually NetworkManager has been the go-to choice (and is what my Fedora 32 install has, although this install has been upgraded from Fedora 23, so take that for what it's worth).
Most of this mess comes from ifconfig (which again, only operated on runtime state), which people then wrote scripts around to automate the setup of the networks. Then ip came along to access networking features that ifconfig couldn't.
You're best off sticking with NetworkManager (in my opinion) for desktop usage. It has far better GUI integration than networkd (which is fine for server/embedded use).
Red Hat has used /etc/sysconfig/network-scripts.
Then the network stack was changed in the kernel, and traditional tools (ifconfig) carried on working, but didn't expose all the new features, so a new tool to configure your ethernet link was created called "ip" which you had to use despite not running IP on that interface.
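In practice that meant two parallel command sets for the same runtime state; a rough sketch of the equivalents (none of this persists across reboots):

```shell
# iproute2 replacements for the old net-tools commands.
# These read or modify kernel runtime state only; nothing is written to any
# config file, so changes are gone after a reboot.
ip addr show lo          # replaces: ifconfig lo
ip route show            # replaces: route -n
ip link show             # replaces: ifconfig -a (link-layer view)
```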
But things still worked. Those with complex requirements would drop in extra things into post-up hooks - e.g. I add route tagging for my dual homed machines so that traffic coming in on eth0 would go back out of eth0's gateway, and traffic on eth1 would go back out of eth1, and sockets specifically bound to eth0 (or eth1) would use the correct gateway.
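A sketch of what those post-up hooks look like in /etc/network/interfaces, using policy routing so each interface's replies leave via its own gateway (addresses and table numbers are made-up placeholders, not the commenter's actual setup):

```
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    # replies to traffic arriving on eth0 go back out eth0's gateway
    post-up ip route add default via 192.0.2.1 dev eth0 table 100
    post-up ip rule add from 192.0.2.10 table 100

auto eth1
iface eth1 inet static
    address 198.51.100.10/24
    # replies to traffic arriving on eth1 go back out eth1's gateway
    post-up ip route add default via 198.51.100.1 dev eth1 table 101
    post-up ip rule add from 198.51.100.10 table 101
```

Sockets explicitly bound to one interface's address then match the corresponding `ip rule` and use that interface's routing table.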
(Oh yes, up to about 5 years ago network cards were called eth0 through ethX; now they have unpredictable names.)
However as is common across software nowadays, this wasn't good enough, and had to be redone. By dozens of different groups. Because cloud.
I also felt wpa_supplicant was a very dramatic name.
You should probably check out `systemctl list-units` to see what is actually enabled and running on your machine.
What's also interesting is that I've somehow managed to have systemd-networkd-wait-online.service fail, despite having an internet connection.
This is mostly a curiosity at this point, nothing's broken - but it is a really humbling experience to realize how little I know about the modern Linux network stack after "using" desktop Linux for a decade or so.
It's super weird that they didn't just do this with NetworkManager, since all they needed to write was a plugin a la ifcfg-rh to turn YAML into NM settings, and then you're done. And since they have NM as a backend, they clearly are perfectly capable of doing this.
I've tried and failed with LFS more times than I can count - I tend to just run out of time, and by the time I go back to it, I've totally lost where in the process I was.
For example, how much disk space and memory does it need?
How long does compilation take (depends on system resources)?
It was fun booting to a graphical desktop in a few seconds, opening the activity monitor and seeing <70 MB of memory used. This was with XFCE around 2010. Now it is 2020 and Task Manager on my daily Windows 10 machine shows 27 GB of memory used.
My problem with OSes like Gentoo is they give you the illusion of teaching you how OSes work when you're actually just learning the structure of a decoupled Linux system. RAM caching is basic stuff (and there's basically no reason not to do it), if Gentoo was a good educational pipeline users should not be coming out with gaps in their knowledge like that.
IMO OSes like Gentoo mainly teach you how to plug together artificially isolated components (which preclude cross-cutting concerns & optimisations that rely on the structure of large sections of the system being known & predictable) and give you the illusion of understanding, while wasting a huge amount of your time debugging problems that are superficial. It's like installing a bunch of (fragile) pip packages and thinking you've learned Python. I would bank on the average Gentoo user being unable to actually write a significant OS component, and I think that's a much more useful metric than knowing how Linux plugs together.
I think you might be mistaken. Task Manager seems to generally consider cached memory as available. Most Linux monitoring tools I have used also do this.
> Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "available" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.[^1]
So both the Linux/Gentoo and Windows systems in question should most likely be caching files. The Windows system might even be caching less aggressively, just because almost all memory is in use by processes.
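You can check this directly on Linux: MemAvailable in /proc/meminfo includes reclaimable page cache, so it is typically much larger than MemFree:

```shell
# Show free vs. cached vs. available memory (values in kB).
# MemAvailable counts page cache the kernel can reclaim on demand,
# which is why MemFree alone understates usable memory.
awk '/^MemFree:|^Cached:|^MemAvailable:/ {print $1, $2, $3}' /proc/meminfo
```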
It seems like there's some dissonance here and I'm not sure where this comes from.
It's ego at play.
Users of such operating systems (who are essentially sysadmins), and, more generally, a certain type of tinkerer (into experimental/edge technologies, for example), don't see their tools as a means, but as an end.
This is totally fine of course (it's an interest like any other), but it has some perverse side effects; in particular, there are a couple of recurring phenomena.
1. "IT TOTALLY WORKS". Making such systems work is complex, so there's often pride in showing off the success (exacerbated by the communities, who laud such accomplishments). It doesn't matter whether the success is solid or held together with duct tape (as it often is), because what matters is that it works - or, more specifically, that it worked at least once on one system (:rolling_eyes:).
Additionally, the moral burden of breakage shifts to the user. If something in Ubuntu breaks, it's Canonical's fault. If something in Gentoo breaks, it's the user's fault. And it's not fun for users to openly admit that they caused a breakage.
2. "IT REQUIRES NO MAINTENANCE!". Routine operations get so embedded in the workflow of such systems, that users forget that they actually do them. Pride also takes part.
So... "the system doesn't require ANY maintenance", because update burdens (e.g. conflict resolution) are, hmmm, "not maintenance", because they're... forgotten (:rolling_eyes: again).
I've been in a few such communities, and I left - or, to be more specific, I found long-lasting solutions and stuck with them, rather than chasing the latest and greatest for its own sake. I used to get _really_ annoyed when something that was supposed to "totally work" was either just a bunch of commands run by somebody who clearly didn't know what they were doing, or was hiding other things that correspondingly would not work.
Once. Once is usually enough.
I can't see myself using a LFS system in production.
I've been trying to get my own distribution started from scratch. While LFS is a great starting point, I needed to support multiple architectures (for now I'm targeting just x86_64, aarch64 and riscv64), and the current implementation of LFS only applies if your host has the same arch that you're targeting. I've spent maybe 9 to 15 months scripting, cross-compiling, and researching this thing. I even tried looking at other projects with somewhat simple musl-based bootstrapping processes, such as Dragora Linux, Alpine Linux and Ataraxia Linux. However, I couldn't really get the compilation process to a point where I could chroot into it and start building stuff. Might I note that a year or so ago I didn't know much about toolchains, embedded build tools, etc., but I did have a deep love for embedded.
Now, with this LFS news and the changes they are proposing / have implemented, it seems cross-compiling for other archs will be a little easier, since you have a good starting point to branch off of for other architectures (though solely on your own, or via lfs-hints if people are inclined to submit their alterations). As of right now I'm running the first third of the new development book, and was able to write up a quick bootstrap script and bootstrap folder structure to get things going. So I'm pretty excited about these LFS changes and how they will pan out in the coming months.
EDIT: Forgot to add, before anybody makes any assumptions: no, it's not going to be another desktop distro. It's purely focused on server machines (no GUI) with a focus on distributed systems, think CoreOS / Clear Linux.
I have used Yocto quite a bit and it's nice. But embedded systems tend to be pretty simple.
What I would actually like to see is a modern scripting language like lua or Scheme embedded in the make binary.
I like the idea of an embedded OS for running containers. There is actually quite a bit of support in systemd for that. So it might be as simple as Buildroot with systemd init plus a mechanism for managing system updates.
Nerves is another nice approach: https://www.nerves-project.org/
Also, you could look at xmake if you're interested in a make-like tool that has Lua support: https://xmake.io/#/
I've used debootstrap to build a Debian distro for ARM with a custom kernel (for an Olimex board target or Beaglebone.) That's very hands-off and not nearly the same "depth" as LFS. The next step was taking the same kernel and building a busybox-based rootfs. Then I had to make my own PID 1 and do init work to bring up services such as networking on boot which was very educational, yet busybox provides most of the "lego bricks" that you're not left writing too much from scratch.
Building to an embedded target also allows one to punt on the most annoying and complex parts of a desktop linux distro, such as the desktop environment, audio/video drivers, UEFI bootloader, etc. Building a semi-custom (e.g. Arch or Debian/debootstrap, Buildroot or OpenEmbedded) headless distro was what I'd call a "shallow dive" or gentle introduction to more of Linux's inner workings without having to understand every bit in order to get a running system.
Debian or buildroot are fine to bootstrap new or natively unsupported architectures.
Can you elaborate?
Are you using a Macbook or something instead?
However, I found I didn't really learn much and was just blindly executing steps. It's fun for a while to watch the magic, but maybe there is some better way to consume the material? I found that I wasn't really getting enough context around what was happening to appreciate what was being done and why at every stage.
Maybe someone can suggest something?
I salute the fact that he did work on the hint I gave in their "weirdly handled" IRC channel on Freenode... (you may be able to find some logs...). And they succeeded! I'm very happy that the public is now going to have a clean way to produce an operating system with GNU tools & Linux.
While I was very kind in sharing with them, they didn't appreciate that I wasn't giving the solution "directly", and clearly treated me like a noob talking too much about things I don't master... It was very surprising to me to end up in this situation... I even asked bdubbs himself to support me with that... he told me that "he doesn't handle the IRC channel, only book writing...". I was kind of meh, and I just left their channel...
The result of this weird context pushed me into this view:
THEY HAVE BEEN BUILDING GNU/LINUX FOR 20 YEARS! And 20 years later they are still waiting for information to drop from the sky... and then act like pioneers...
This went far beyond the limits! Go to hell, NOOBS!
When will this be ready?
Ironically coincident with this announcement, I ended up going through LFS and managed to get a kernel compiled using an entirely bootstrapped toolchain.
About halfway through I noticed the development version they're discussing, which is available here: http://www.linuxfromscratch.org/lfs/view/development/
I believe this version can likely be run through, but please keep in mind it looks like it's been changing every few days!
To the point of other commenters mentioning it can get a little tedious, especially in chapter 6: I think the new revised edition they're working on could help with that somewhat, as it looks like avoiding the /tools buildout will minimize the number of packages that need to be rebuilt.
"Uncles doing LFS"
This week I've been iteratively rebuilding the kernel and glibc and the vDSO on my Ubuntu Server 20.04 install and learning a ton about the interactions, while also having the Ubuntu apt repositories to fall back on for when I want to switch from "learning mode" to "just get this task done mode."
Tools to empower robbery... AND ACT LIKE PIONEERS :D
I swear what you do... SINCERELY!