Major Proposed Changes to Linux From Scratch (linuxfromscratch.org)
148 points by InitEnabler on July 10, 2020 | 93 comments



If you're not ready for the LFS plunge, install and configure an Arch Linux system with a graphical desktop environment. I learned more about Linux in those 8 (non-consecutive) hours than I had in the couple of years preceding.

Then study for the RHCSA and RHCE v7 (v8 is all Ansible now). Arch Linux and RHCSA and RHCE taught me a ton about Linux and made me super comfortable with my Linux desktops and servers.


https://knowyourmeme.com/memes/install-gentoo

Goes even deeper than arch. The desktop profiles make it almost set-and-forget now.


I learned a lot using Gentoo back in my youth, but I am not sure it is much better than Arch for general Linux knowledge. A lot of the effort involved was with configuring Portage and fixing build problems rather than learning my way around the system or configuring the OS itself.


The gentoo installation process is better documented and you learn a lot more about compilation. I started with gentoo, installed lfs for fun and years later made the switch to arch. I think I learned a lot of things I didn't know when installing lfs, but nothing new when installing arch (I had used gentoo for at least 5 years at that point, though).

I think that the main sticking point is the documentation. It was certainly the reason why I went with gentoo at first: not because of the control over the system, but because there was a great wiki (that was often down :D) and great official documentation that led me through every step of the way. I'm of the firm belief that any linux install can teach you everything as long as the documentation doesn't skimp on things. You won't learn anything from lfs if you just copy-paste commands; at that point you may as well install ubuntu.


I'd say Arch, then Gentoo, then LFS. Arch provides you with usable binary packages and leaves it to you to figure out what to install and how to configure it. Gentoo makes you actually build those packages from source and gives you more control over how the process works. Then LFS takes away even what portage was doing for you and makes you do the build process yourself, by hand, from scratch.


It's kinda sad that no one recommends Slackware Linux anymore.

Back in the day we used to say: "when you know slackware you know Linux. When you know red hat all you know is red hat".

In practice, when I got into slackware my computer wasn't 100% working all of the time, but I learned a lot. I then moved on, and years later I got rhcsa certified; that's very much worth it too.

Not sure about arch, I never really used that, but I can vouch for slackware, Gentoo and the RedHat/centos/rhcsa thing.


Absolutely.

If you’re looking to just learn about Linux, you can’t go much further than Slackware. It’s as Linux as you can get: you can learn distros (which admittedly is easier now that many are built on a few common platforms - GNOME, xfce, systemd, etc.), but Slackware gives you a kernel, coreutils, and a bunch of packages - can’t get much simpler than that.

Also once you track -current and get around SlackBuilds you can install almost anything you want, it’s a full-fledged and modern distro. Without dependency management to be sure, but if you’ve got a well-made distro and you compile most of the software you install, who needs it?
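
The SlackBuilds workflow is roughly this, by the way (a sketch from memory; "foo" is a made-up package, run the last step as root):

  # fetch the SlackBuild tarball for "foo" from slackbuilds.org, then:
  tar xzf foo.tar.gz && cd foo
  source ./foo.info && wget $DOWNLOAD     # the .info file lists the upstream source URL(s)
  sh foo.SlackBuild                       # builds a package, by default under /tmp
  installpkg /tmp/foo-*_SBo.tgz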


I started with Slackware (~18 years ago) but it was just pain with no standard package manager. Sure, later I found slapt-get and I turned to dropline-gnome at some point. But moving to any apt distro, Mandrake, Arch or even Gentoo just felt like a warm bath to me back in the day.


I tried using Slackware since everyone said it was the most Unixy distro, but then I ran across a bunch of issues. I don't want to seem like I'm berating Slackware or anything, since I might not understand how to use it coming from Arch, but this was just my experience.

I remember needing to use Clang for ccls but finding out that the included version of CMake didn't support C++17. When I tried upgrading CMake, I found that a library named libarchive was too old to use as a dependency, so I essentially had to spend hours upgrading things to versions not supported on Slackware just to get to the point where I could use Clang. I'm pretty sure I wouldn't have had to spend all that time if I had been using pacman instead, where the important packages are all up to date and I could have gotten to work within a few minutes.

Honestly only the distros with a large enough community like Arch or Gentoo would probably have enough people to submit and maintain even the most obscure packages.

Also, there was no easy way to do a system upgrade. In Arch Linux, which is what I'm used to, you can usually do 'pacman -Syu' without too much fuss, but the Slackware document for upgrading the kernel is a multistep process that I couldn't successfully complete. (It even wishes you "good luck".) Third-party packages on slackbuilds have no automatic dependency resolution either (!), which was very painful every single time I needed a package not in the official repository - which, in my case, was all the time. I could literally feel my heart sink every time I needed a slackbuilds package and it required upgrading a system dependency. Apparently this was a political decision rather than a technical one[1], which I don't understand, because pacman handles it fine.

I also had integrated graphics issues that caused Firefox to hang on startup or Love2D to freeze, forcing me to kill the X server, for which I found no solution. Arch ran them fine. This was on a ThinkPad, by the way.

It could just be a philosophy thing, or that people with my specific needs are better served by other distros, or maybe I've been hopelessly spoiled by Arch and can't see the benefit Slack brings over it yet. But I have to admit I am still mildly curious about it, just not in the sense of using it as a daily driver.

[1] https://slackbuilds.org/faq/#deps



I learned a lot from Gentoo, and then from Arch; my stormy relationships with both took place over 10 years ago, but to this day I use the knowledge I gained. Linux is just "familiar" to me now (things like fstab, the root folder structure, popular partitioning schemes). Now that I'm 38 I don't have the time for Gentoo anymore, but I'm glad we got to know each other back when skipping a night of sleep didn't cause me any problems :)


I can't recommend Linux From Scratch enough. If you want to understand the composition of the Linux user space, this is one of the best exercises to follow.


i feel like linux from scratch is kind of overkill

1. format a disk with a bootable, active ext4 partition

2. download the kernel, make defconfig && make

3. copy the kernel to the partition along with syslinux's mbr.bin and probably an extlinux.conf (probably copy modules too)

4. copy busybox over

5. voila
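
Roughly, as a sketch (run as root from an unpacked kernel tree; /dev/sdX and the mbr.bin path are assumptions):

  # 1. one bootable, active ext4 partition
  fdisk /dev/sdX                          # create /dev/sdX1, toggle the active flag
  mkfs.ext4 /dev/sdX1 && mount /dev/sdX1 /mnt
  mkdir -p /mnt/bin /mnt/dev /mnt/proc /mnt/sys

  # 2. kernel (and modules)
  make defconfig && make -j"$(nproc)"
  make INSTALL_MOD_PATH=/mnt modules_install
  cp arch/x86/boot/bzImage /mnt/vmlinuz

  # 3. syslinux MBR + extlinux on the partition
  dd if=/usr/share/syslinux/mbr.bin of=/dev/sdX bs=440 count=1 conv=notrunc
  extlinux --install /mnt
  printf 'DEFAULT linux\nLABEL linux\n  KERNEL /vmlinuz\n  APPEND root=/dev/sdX1 rw init=/bin/sh\n' > /mnt/extlinux.conf

  # 4. busybox as the whole userspace (static build assumed)
  cp busybox /mnt/bin/ && /mnt/bin/busybox --install /mnt/bin

  # 5. voila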


Yeah but that doesn’t teach much about core components in real systems, such as systemd, the init process, how dependencies work etc.

I’m not saying you won’t learn, but I learnt a heap by avoiding systemd on gentoo. I eventually gave in when it became clear that avoiding it was no longer a viable option for various reasons, but the knowledge I gained has mostly stayed relevant.

That being said, every time I try to do something remotely advanced with NetworkManager I fail. I can no longer set static IPs, or customise DNS on a DHCP-controlled connection, without Google.


Yeah, the basic use case of simple wired network connectivity has gotten extremely complicated, especially in Ubuntu 18+, where you configure Netplan with YAML to render templates or... something... that configures NetworkManager (or not NetworkManager) to do whatever finally runs `ip` to set the address.

I miss just editing /etc/network/interfaces and /etc/resolv.conf by hand. I get that that sucks for laptops, but why can't the rest of us just have an exceedingly simple and pleasant setup?
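
For reference, the kind of thing I mean - from memory, with made-up addresses:

  # /etc/network/interfaces
  auto eth0
  iface eth0 inet static
      address 192.0.2.10
      netmask 255.255.255.0
      gateway 192.0.2.1

  # /etc/resolv.conf
  nameserver 1.1.1.1
  nameserver 9.9.9.9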


My current cross-distribution method of choice is systemd-networkd. It supports a lot of configuration options and can set up VPN tunnels (like WireGuard).

https://wiki.archlinux.org/index.php/Systemd-networkd

Here's a sample config file from my home desktop using a static IP:

  $ cat /etc/systemd/network/lan.network

  [Match]
  Name=enp3s0
  
  [Network]
  Address=192.168.100.200/24
  Gateway=192.168.100.1
  DNS=1.1.1.1
  DNS=1.0.0.1
This'll work on any systemd distribution.
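
If it isn't enabled yet, it's roughly this (details, especially the resolv.conf handling, vary a bit by distro):

  systemctl enable --now systemd-networkd
  systemctl enable --now systemd-resolved
  ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf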

netplan is one of those Canonical projects that'll probably disappear in a year or two.


Are there any Canonical projects, outside of Ubuntu itself, that don't just eventually fizzle out?

I feel like after all these years, Canonical has still not figured out how to make Open Source work.

Nearly everything they do leads to controversy, and them eventually accepting the community alternative with a huff.


I feel like they do stuff in a "whatever" way, but that signals to other developers what matters, and leads to someone working on it properly. Their solutions don't get used, but if they weren't attempted then no good solution would come up.


Ugh, this resonates with me so much. I recently tried to fix some kind of mundane network issue (had a Fedora machine configured for a static IP and wanted to use DHCP instead), and the absolute mess of tooling that is the Linux networking landscape baffled me. How could it get this bad? As far as I could tell, there were at least 4 competing ecosystems (NetworkManager, the 'ip' tool, /etc/network/interfaces, /etc/sysconfig/network-scripts) all extant at the same time.

Does 'ip' tool update one of those files? How do I tell systemd to re-load the configuration? Is NetworkManager running or not? Does it matter? Does NM update these files, or do something magical to the interface? Are these even the right files? The best answer at which I could arrive is "who knows?"

Every time I attempted to change the network interface settings, it only stuck around until the machine rebooted. Something was resetting it back to a static IP, and I had no tools to tell me what.

Why did distro maintainers ever allow things to become this confusing? Why do we have so many tools and config files for the same thing?

In the end I reinstalled a fresh copy of Fedora on the machine and set the DHCP configuration in the setup wizard. It's that bad.

My experience tells me to anticipate a comment along the lines of "Hey, those programs and config files are all actually the same tool, and you must have just been configuring it wrong." If that's the case - it's an even stronger point that this chaos has reached a critical mass, where parts of the same networking configuration tool are so disjoint that they don't visibly interoperate at all.


I'm almost positive your problems were caused by NetworkManager. I usually disable and blacklist it everywhere (or remove it entirely if the distribution's dependency tree allows it), and my systems never behave in this unpredictable way.

It tries to be helpful and rewrites network configuration files to align them with its own configuration.


The 'ip' command only operates on runtime state (it doesn't save anything to disk).
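
For example (rough sketch; interface name and addresses made up):

  # survives only until reboot - nothing under /etc is touched
  ip addr add 192.0.2.10/24 dev eth0
  ip route add default via 192.0.2.1
  # persistence has to come from whatever daemon owns the interface (NM, networkd, ...)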

NetworkManager used to be configured to save its config to /etc/sysconfig/network-scripts on Fedora, for backwards compatibility (see /etc/NetworkManager/NetworkManager.conf).

/etc/network/interfaces is the Debian alternative to network-scripts. I don't use Debian based distros much so I don't know too much about that.

There's also systemd's networkd, which has its own config in /etc/systemd/network.

Honestly I'm surprised you've got a Fedora machine with networkd. Usually NetworkManager has been the go-to choice (and is what my Fedora 32 install has, although this install has been upgraded from Fedora 23, so take that for what it's worth).

Most of this mess comes from ifconfig (which, again, only operated on runtime state), which people then wrote scripts around to automate network setup. Then ip came along to expose networking features that ifconfig couldn't.

You're best off sticking with NetworkManager (in my opinion) for desktop usage. It has far better GUI integration than networkd (which is fine for server/embedded use).


Debian used /etc/network/interfaces to configure networking from the '90s until about 5 years ago.

Red Hat has used /etc/sysconfig/network-scripts.

Then the network stack was changed in the kernel, and the traditional tools (ifconfig) carried on working but didn't expose all the new features, so a new tool called "ip" was created to configure your ethernet link - which you had to use despite not running IP on that interface.

But things still worked. Those with complex requirements would drop extra things into post-up hooks - e.g. I add route tagging for my dual-homed machines so that traffic coming in on eth0 goes back out of eth0's gateway, traffic on eth1 goes back out of eth1's, and sockets specifically bound to eth0 (or eth1) use the correct gateway.
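
The hooks look something like this (illustrative sketch; addresses and table numbers made up):

  auto eth0
  iface eth0 inet static
      address 192.0.2.10/24
      gateway 192.0.2.1
      post-up ip route add default via 192.0.2.1 dev eth0 table 100
      post-up ip rule add from 192.0.2.10 table 100

  auto eth1
  iface eth1 inet static
      address 198.51.100.10/24
      post-up ip route add default via 198.51.100.1 dev eth1 table 200
      post-up ip rule add from 198.51.100.10 table 200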

(oh yes, up to about 5 years ago network cards were called eth0 through ethX; now they have unpredictable names)

However as is common across software nowadays, this wasn't good enough, and had to be redone. By dozens of different groups. Because cloud.


In the eth0 days it was a lot more fun to script things.

I also felt wpa_supplicant was a very dramatic name.


/etc/network/interfaces & /etc/sysconfig/network-scripts aren't part of NetworkManager (the directories don't even exist on my machine).

You should probably check out `systemctl list-units` to see what is actually enabled and running on your machine.
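
Something like this narrows it down quickly (a sketch; exact unit names vary by distro):

  systemctl list-units --type=service --state=running | grep -i network
  systemctl is-enabled NetworkManager systemd-networkd 2>/dev/null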


Thanks, that's a pretty nifty systemd command[1]. Sadly my confusion is only furthered, since I'm seeing both target files and daemons (although maybe they're both considered 'units' in systemd) and now a new name ("networkd") joins the fray. No sign of NetworkManager at least.

What's also interesting is I've somehow managed to have the networkd-wait-online.service fail, despite having an internet connection.

This is mostly a curiosity at this point, nothing's broken - but it is a really humbling experience to realize how little I know about the modern Linux network stack after "using" desktop Linux for a decade or so.

[1]: https://imgur.com/a/VYX7lmu


I really don't understand the motivation for netplan existing. It's such a primitive tool: it provides very little abstraction over its backends, which aren't pluggable or really configurable at all.

It's super weird that they didn't just do the thing with NetworkManager, since all they needed to write was a plugin a la ifcfg-rh to turn YAML into NM settings, and then you're done. And since they have NM as a backend, they're clearly perfectly capable of doing this.
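
All it really does is take a YAML file along these lines (illustrative sketch) and render it into networkd or NM settings when you run `netplan apply`:

  # /etc/netplan/01-static.yaml
  network:
    version: 2
    renderer: networkd
    ethernets:
      enp3s0:
        addresses: [192.168.100.200/24]
        gateway4: 192.168.100.1
        nameservers:
          addresses: [1.1.1.1, 1.0.0.1]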


I’ve always been surprised that RHEL/CentOS 7 doesn't have a simplified server mode for installing a basic, headless system with a static IP address on an Ethernet connection. The default configuration and prescribed usage is to configure the networking with NetworkManager - even though its added complexity exists to solve the problems faced by portable computers using Wi-Fi. I’d wager the majority of RHEL/CentOS 7 installations are on static hosts (virtual or physical) rather than mobile devices.


Try dropping netplan and using systemd-networkd.


That doesn't teach you how to handle libraries and the magical pain of dependencies. It's an excellent start, and I would happily encourage doing that first before LFS, but it builds an extremely restricted system to the point of not really resembling any real system that you're going to find in the wild. In particular, a pure busybox system can't be self-hosting.


I think the purpose of LFS is to serve as an educational tool. In that way having more verbose steps teaches more.


Is this documented properly somewhere, so that I can give it a go?

I've tried and failed with LFS so many times I can't count - I tend to just run out of time, and by the time I go back to it, I've totally lost where in the process I was.


I don't like giving my real name out online, but I have a GitHub repo that does this automatically as a CI/CD job and a single shell script. It's really neat. Reach out to me (my username @ gmail.com) and I'll share it with you.


Yes it does - I can confirm at least the toolchain (major step 1), and gcc for major step 2. I'm actually working on getting aarch64 working right now. If it's successful I'd probably like to submit an LFS hint.


What sort of system and resources do you need to compile the kernel?

For example, "disk" space and memory?

How long does compilation take (depends on system resources)?


Agreed. This got me started on a love affair that is still going strong, around 20 years later.


Agreed. I learnt a lot doing this in the 2000s.


This brings me back to my years of running Gentoo as my daily driver. All fun and games until you update your system over the weekend and spend the rest of the weekend fixing broken networking. But it is all worth it if your goal is learning the ins and outs of an OS and how it all fits together.

It was fun booting to a graphical desktop in a few seconds, opening activity monitor and seeing < 70mb memory used. This was with XFCE around 2010. Now it is 2020 and Task Manager on my daily Windows 10 machine shows 27gb memory used.


Memory used on Windows doesn't mean the same as on Gentoo. Most of that is cached files, and it will be discarded when a program needs that memory directly. Empty memory is wasted memory; using 70mb of memory on a 32GB machine is an anti-feature. You should be caching.

My problem with OSes like Gentoo is they give you the illusion of teaching you how OSes work when you're actually just learning the structure of a decoupled Linux system. RAM caching is basic stuff (and there's basically no reason not to do it); if Gentoo were a good educational pipeline, users shouldn't be coming out with gaps in their knowledge like that.

IMO OSes like Gentoo mainly teach you how to plug together artificially isolated components (which preclude cross-cutting concerns & optimisations that rely on the structure of large sections of the system being known & predictable) and give you the illusion of understanding, while wasting a huge amount of your time debugging problems that are superficial. It's like installing a bunch of (fragile) pip packages and thinking you've learned Python. I would bank on the average Gentoo user being unable to actually write a significant OS component, and I think that's a much more useful metric than knowing how Linux plugs together.


> Memory used on Windows doesn't mean the same as on Gentoo. Most of that is cached files and it will be discarded when a program needs that memory directly.

I think you might be mistaken. Task Manager seems to generally consider cached memory as available. Most Linux monitoring tools I have used also do this.

> Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "available" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.[^1]

So both the Linux/Gentoo and Windows systems in question should most likely be caching files. The Windows system might even be caching less aggressively, just because almost all memory is in use by processes.

[^1]: https://en.m.wikipedia.org/wiki/Page_cache


My understanding is that it's more complex than that. A portion is silent, but disable SysMain (SuperFetch) and reported RAM usage will also drop considerably.


I don't know how Gentoo displays RAM usage, but I am pretty sure neither macOS nor Windows shows the RAM used for file caching as "used".


Some is hidden, some isn't. Compressed memory swells your system process (on Windows) as far as I understand it.


Every time I read comments about Gentoo it's either about systems unexpectedly getting broken after an update, which supposedly requires considerable amount of effort to fix, or I read about how extremely stable Gentoo systems are and require almost no maintenance.

It seems like there's some dissonance here and I'm not sure where this comes from.


I think 90% of the difference is explained by package load. The more moving parts, the less stable the machine. If you run 10 packages from stable repos, Gentoo will be stable. If you run 900 packages from wherever they were available (more realistic IMO) then it's much more likely to break.


> It seems like there's some dissonance here and I'm not sure where this comes from.

It's because of ego at play.

Users of such operating systems (which are essentially sysadmins), and, more generally, a certain type of tinkerer (for example: experimental/edge technologies), don't see their tools as a means, but as an end.

This is totally fine of course (it's an interest like any other), but it has some perverse side-effects; in particular, there are a couple of phenomena.

1. "IT TOTALLY WORKS". Making such systems work is complex, so there's often pride in showing off the success (exacerbated by the communities, who laud such accomplishments). It doesn't matter if such success is solid, or it's held on with duct tape (like it often is), because what matters is that it works, or, more specifically, it worked at least once on one system (:rolling_eyes:).

Additionally, the moral burden of breakages moves to the user. If something in Ubuntu breaks, it's Canonical's fault. If something in Gentoo breaks, it's the user's fault. And it's not fun for users to openly admit that they caused a breakage.

2. "IT REQUIRES NO MAINTENANCE!". Routine operations get so embedded in the workflow of such systems, that users forget that they actually do them. Pride also takes part.

So... "the system doesn't require ANY maintenance", because update burdens (eg. conflict solving) are, hmmm, "not maintenance", because they're... forgotten (:rolling_eyes: again).

I've been in a few such communities, and I left - or, to be more specific, I found long-lasting solutions and stuck with them, rather than chasing the latest and greatest for the sake of it. I used to get _really_ annoyed when something that was supposed to "totally work" was either just a bunch of commands run by somebody who clearly didn't know what they were doing, or was hiding other things that correspondingly would not work.


I was a Gentoo user about 5 years ago. With Gentoo you can have both: if you only use stable packages you shouldn't break your system, but if you are using bleeding-edge packages things can break.


Current Gentoo user. I find the opposite. Using unstable, things just work. Sometimes package updates fail to compile but your system is still fine, and the package will get fixed eventually. Never had an update break my system on unstable. Stable often had issues, which I presume is due to devs mostly working on unstable.


27GB? Or 2.7GB?


Probably not after boot, but during active use.


My browser alone can eat up 2.7 GB of RAM.


LFS, or gentoo from a stage 1 tarball, is something that is extremely valuable to do.

Once. Once is usually enough.


Once is generally enough. I did it a while back, and have tried to use LFS as a reference for major changes that have been adopted since I last did it, like EFI and LUKS. I'm glad to see it getting updates.


I ran Gentoo in college for a couple of years. Oh god the compilation.


I ran Gentoo for about a week a long time ago. On a 400 MHz Powermac G4. By which I mean, I installed Gentoo once and then decided I didn't like it.


It still works for PowerPC Macs :)


As someone who tried running Gentoo in the past, I could never find a decent resource for helping me figure out what make flags I wanted to set. It was far more confusing than FreeBSD, which at least provided rudimentary descriptions for each flag before compiling. I may have been missing something obvious though.
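
For what it's worth, most of it ends up in /etc/portage/make.conf, and the flag descriptions do exist - just not shown inline the way FreeBSD does it. Roughly (a sketch; values illustrative, paths depend on your repo layout):

  # /etc/portage/make.conf
  COMMON_FLAGS="-O2 -pipe -march=native"
  CFLAGS="${COMMON_FLAGS}"
  CXXFLAGS="${COMMON_FLAGS}"
  MAKEOPTS="-j8"
  USE="X alsa -systemd"

  # where the descriptions live:
  less /var/db/repos/gentoo/profiles/use.desc   # global USE flags (older layout: /usr/portage/profiles/use.desc)
  equery uses app-editors/vim                   # per-package flags, from app-portage/gentoolkit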


gentoo is too addictive, unless you want `emerge` as part of your routine


i already use emacs, what’s one more thing?


I find Linux From Scratch a bit masochistic. Embedded Linux development is equally educational while actually being useful. There are good frameworks like Buildroot: https://buildroot.org/


I think a bit of masochism is the point. I did it once to fill in all of the gaps and cement all the dependencies involved in putting together a system. That kind of knowledge helps when you're trying to do server maintenance while keeping the system up, or doing crazier things while staying fairly confident that certain parts won't be affected.

I can't see myself using a LFS system in production.


I ran Gentoo for a few years, and it was a great balance of being low level and customizable but practical for day to day use. When you have set things up from scratch, you can fix it when it breaks. The Gentoo install disk helped me recover many a server. I got tired of the endless compilation, though.


Exactly. I've been through a similar exercise of assembling a system more-or-less from scratch, and I found it quite useful as a learning experience. It's not about using the resulting system long-term, it's about seeing how one is put together.


I think quite the opposite. While Buildroot is really nice for professional use, for someone who doesn't know kconfig and advanced makefile usage, figuring out how Buildroot builds stuff is somewhat difficult. PTXdist is another great tool and is a bit easier to understand, as it only uses kconfig, a few bash scripts and simple makefiles. Then there is Yocto / bitbake, which is basically based on Gentoo's ebuilds... But the reason I've spent a lot of time looking at these projects is a bit of a story...

I've been trying to get my own distribution started from scratch. While LFS is a great starting point, I needed to support multiple architectures (for now I'm targeting just x86_64, aarch64 and riscv64), and the current LFS book is only really applicable if your host has the same arch that you're targeting. I've spent maybe 9 to 15 months scripting, cross compiling, and researching this thing. I even tried looking at other projects that have relatively simple, musl-based bootstrapping processes - Dragora Linux, Alpine Linux and Ataraxia Linux - to bootstrap everything. However, I couldn't really get to a point in the compilation process where I could chroot into it and start building stuff. Might I note that a year or so ago I didn't know much about toolchains, embedded build tools, etc., but I did have a deep love for embedded.

Now, with this LFS news and the changes they are proposing / have implemented, it seems that cross compiling for other archs will be a little easier, since you have a good starting point to branch off of for other architectures (though only on your own, or via lfs-hints if people are inclined to submit their alterations). As of right now I'm running through the first third of the new development book, and was able to write up a quick bootstrap script and folder structure to get things going. So I'm pretty excited about these LFS changes and how they will pan out in the coming months.
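
The gist of what I'm experimenting with, roughly (a sketch, not the book's exact commands - the target triplet is the main change):

  export LFS=/mnt/lfs
  export LFS_TGT=aarch64-lfs-linux-gnu     # the book derives this from $(uname -m)-lfs-linux-gnu
  # cross binutils, pass 1, from a build dir inside the unpacked source
  ../configure --prefix=$LFS/tools --with-sysroot=$LFS \
               --target=$LFS_TGT --disable-nls --disable-werror
  make && make install
  # then cross gcc pass 1, the Linux API headers (ARCH=arm64), glibc, and so on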

EDIT: Forgot to add, before anybody makes any assumptions: no, it's not going to be another desktop distro. It's purely focused on server machines (no GUI) with a focus on distributed systems - think CoreOS / Clear Linux.


I have mixed feelings about make. On one hand, it's horrible. On the other hand, it's outlived all the competition and will certainly still be around in decades. And if you are a C programmer, you need to understand make anyway.

I have used Yocto quite a bit and it's nice. But embedded systems tend to be pretty simple.

What I would actually like to see is a modern scripting language like lua or Scheme embedded in the make binary.

I like the idea of an embedded OS for running containers. There is actually quite a bit of support in systemd for that. So it might be as simple as Buildroot with systemd init plus a mechanism for managing system updates.

Nerves is another nice approach: https://www.nerves-project.org/


I believe make does have Guile support (you just have to compile it with that feature enabled); Guile is basically a dialect of Scheme.
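
A tiny sketch of what that looks like (assumes your make really was built with Guile; "guile" then shows up in $(.FEATURES)):

  # GNUmakefile
  $(info guile says 1 + 2 = $(guile (+ 1 2)))
  all: ; @true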

Also, you could look at xmake if you're interested in a make-like tool with lua support: https://xmake.io/#/


I agree, for a couple of reasons. One, while you usually have to deal with a cross toolchain (usually Intel -> ARM), it's easier to build and debug on a target board than to keep booting and rebooting your PC to see if you did everything right. Also the

I've used debootstrap to build a Debian distro for ARM with a custom kernel (for an Olimex board target or a Beaglebone). That's very hands-off and not nearly the same "depth" as LFS. The next step was taking the same kernel and building a busybox-based rootfs. Then I had to make my own PID 1 and do the init work to bring up services such as networking on boot, which was very educational - yet busybox provides most of the "lego bricks", so you're not left writing too much from scratch.
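
For the curious, the busybox flavour of PID 1 can be as small as an /etc/inittab along these lines (a sketch; console device and helper paths assumed):

  ::sysinit:/bin/mount -t proc proc /proc
  ::sysinit:/bin/mount -t sysfs sysfs /sys
  ::sysinit:/sbin/ifup -a                  # whatever /etc/network/interfaces describes
  ::respawn:/sbin/getty 115200 ttyS0
  ::ctrlaltdel:/sbin/reboot
  ::shutdown:/bin/umount -a -r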

Building to an embedded target also allows one to punt on the most annoying and complex parts of a desktop linux distro, such as the desktop environment, audio/video drivers, UEFI bootloader, etc. Building a semi-custom (e.g. Arch or Debian/debootstrap, Buildroot or OpenEmbedded) headless distro was what I'd call a "shallow dive" or gentle introduction to more of Linux's inner workings without having to understand every bit in order to get a running system.


Exactly. LFS does not support multilib or cross builds properly; neither does Fedora (missing the glibc headers).

Debian or buildroot are fine to bootstrap new or natively unsupported architectures.


Can you recommend a good SBC?


It really depends on what architecture you want... I'm always raving about the ODroid H2s (love 'em). Arduinos and Pis are really popular. Then there's an Nvidia one out now too, if you want something with AI potential.


After seeing this headline I was just thinking it would be a lot of fun to dive back into Linux 15 years after I was last a desktop Linux user. It would be fun to try Linux From Scratch, or Gentoo if that’s still a thing. Then I realized I don’t even own any normal PC hardware that I could do that with without some wrangling (I could probably technically get it working on my old Intel iMac or my Synology NAS, but that’s not really the challenge I’m looking for). It’s wild to think that 15-20 years ago I was obsessed with Linux and spending my free time trying to get graphics drivers and dual monitors to work, and now I barely even have any traditional PC hardware.


Couldn't you try it in a virtual machine? Do you have any hardware that can run Virtualbox?


> I barely even have any traditional PC hardware

Can you elaborate?

Are you using a Macbook or something instead?


Probably tablets, phone, or something similar..?


I have tried LFS before because I thought it would teach me more about Linux (been a desktop Linux user since 2006, having tried a number of distributions, including Arch, and more recently back on Ubuntu, but never dug deep into internals).

However, I found I didn't really learn much and was just blindly executing steps. It's fun for a while to watch the magic, but maybe there is some better way to consume the material? I found that I wasn't really getting enough context around what was happening to appreciate what was being done and why at every stage.

Maybe someone can suggest something?


Interesting - my experience was that I learned a lot from it not so much by blindly executing the steps, but by things not working the way they were supposed to. This forced me to dive into the details and understand what was going on. (This is probably 15 years ago though - maybe the instructions have gotten better :) )


What I retain from all of this...

I salute the fact that he did work on the hint I gave in their "weirdly handled" IRC chat on Freenode... (you may be able to find some logs...). And they succeeded! I'm very happy that the public is now going to have a clean way to produce an operating system with GNU tools & Linux.

While I was very kind in sharing with them, they didn't appreciate that I wasn't giving the solution "directly", and they clearly threw stuff at me as if I were a noob talking too much about things I don't master... well, it was very surprising to me to end up in this situation... I even asked bdubbs himself to support me with that... he threw back that "he doesn't handle the IRC chat, only book writing...". I was kind of meh and just left their chat...

The result of this weird context pushed me to this view:

THEY HAVE BEEN BUILDING GNU/LINUX FOR 20 YEARS! And 20 years later they are still waiting for information to drop from the sky... and then they act like pioneers...

This went far past the limits! Go to hell, NOOBS!


I'm 30 and I can't believe I haven't done this pilgrimage yet.

When will this be ready?


I'm almost 31 and I said the same thing to myself this weekend.

Ironically coincident with this announcement, I ended up going through LFS and managed to get a kernel compiled using an entirely bootstrapped toolchain.

About halfway through I noticed the development version they're discussing, which is available here: http://www.linuxfromscratch.org/lfs/view/development/

I believe this version can likely be run through, but please keep in mind it looks like it's been changing every few days!

To the point other commenters made about it getting a little tedious, especially in chapter 6: I think the revised edition they're working on could help with that somewhat, as it looks like avoiding the /tools buildout will help minimize the number of packages that need to get rebuilt.


We can start a study group.

"Uncles doing LFS"


Lots of comments so far about using Linux From Scratch to learn about the userspace/kernelspace interaction, and while that's all feasible, you can do this on basically any distro.

This week I've been iteratively rebuilding the kernel and glibc and the vDSO on my Ubuntu Server 20.04 install and learning a ton about the interactions, while also having the Ubuntu apt repositories to fall back on for when I want to switch from "learning mode" to "just get this task done mode."
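
If anyone wants to try the kernel half of that, the loop on Ubuntu looks roughly like this (a sketch; assumes a kernel tree is unpacked and the usual build deps are installed, package names shift between releases):

  sudo apt install build-essential flex bison bc libssl-dev libelf-dev
  cp /boot/config-$(uname -r) .config      # start from the running kernel's config
  make olddefconfig
  make -j"$(nproc)" bindeb-pkg             # produces installable .deb packages one level up
  sudo dpkg -i ../linux-image-*.deb
  sudo reboot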


We really need to fix GCC so that the libraries and compiler can be built separately (like Clang).


I'm glad to have taught them something before they threw me out \o/

Tools to empower robbery... AND THEY ACT LIKE PIONEERS :D

I swear what you do... SINCERELY!


Is there a reputable startup / company that employs Linux kernel developers?


You could look at the commit logs and see if there are any email domains that pique your interest.


Depends a bit on your ideas of "reputable"; IIRC, hardware OEMs tend to employ devs to implement hardware support for their boards, but a rather lot of those have a poor reputation in terms of code quality - i.e., "We've successfully shipped the device with kernel 5.1.0 (current when we started); time to forget about that and implement the next board! What do you mean, upstreaming? That sounds like extra work. Bug fixes? Don't be absurd!".



Intel is one of the top contributors AFAIK...


Intel?


Amazon


Red Hat?


Google?


Samsung


SuSE



