Hacker News
Linux from Scratch (linuxfromscratch.org)
258 points by letientai299 on Aug 21, 2020 | 86 comments



I had a lot of fun doing this. You really get a feel for the evolution of build systems -- from older software that uses automake/make to newer programs that use meson/ninja/cmake etc. It was also cool to learn how to bootstrap a bespoke set of development tools tuned for your hardware.

It took me a solid weekend to get everything built. I was able to get a basic LFS system built on a Saturday, and on Sunday I did the "Beyond Linux From Scratch" edition. At one point I got stuck trying to debug a weird interaction between systemd and PAM that took me a while to unravel. That was humbling, I thought I knew just about everything about Linux, but turns out there are large areas where I just don't have a clue.

The docs are well written and maintained, so there wasn't a lot of frustration there. Even if you're not an old hand at Linux you can likely get pretty far by just diligently following the instructions.

I struggled a lot more trying to make a decent desktop environment than I did getting the OS set up. I spent so much time trying to get a nice-looking toolbar (polybar), and basic stuff (like how patched fonts work) took me an embarrassingly long time to sort out. I also didn't know what a compositor was, or why you might want one. I enjoyed figuring out the basics of compton, which allowed me to get cool transparent backgrounds on windows[0], although I never did quite figure out how to get rounded corners.

[0]: https://muppetlabs.com/~mikeh/spudlyo.png


I made a "Linux from scratch" years ago and sometimes I consider doing it again (this time I will try to write some scripts to automate things so it's easier to update stuff)... but the last time I did it was many years ago, and I don't remember needing much more than automake. The modern proliferation of build tools makes me think that I'll end up wasting more space on auxiliary tools than on the actual software and libraries I want to build myself.

How well-founded are my fears? :-P


How do I get started too? Where should I begin?


I don't know if you're referring to the Linux From Scratch part or the desktop customization part of the comment you're replying to, but if the latter:

Visit here for inspiration: https://www.reddit.com/r/unixporn/ (safe for work, despite the name).

Pick a window manager, learn how to use it, and go from there. Popular options are i3 (i3-gaps if you want some space between windows), sway (i3, but for Wayland), bspwm, awesome wm. Usually they're used in combination with polybar (highly customizable status bar) and rofi as an application launcher.


That's right, /r/unixporn influenced me quite a lot. I read a lot of folks' dotfiles that were posted there and was inspired by the screenshots. Because I had built a lot of software with LFS, I wasn't at all afraid of tackling building custom versions of i3 (i3-gaps) and compton (compton-tyrone) to get visual bling.


Unixporn is often too flashy for my tastes. Something like DWM with a CPU and RAM meter up top is often enough. Why we gotta mix compositors with 3D file system explorers and animated desktops is beyond my needs, but I'm at least guilty of the animated wallpaper trend.


Begin by installing, e.g., VirtualBox, and obtaining a distro with a toolchain.

Then add a, say, 10 GB volume to your guest instance and party like you're on the road to mad skillz.


I tried to go through this a few years ago. The time commitment is huge, especially if you really want to understand all the steps in detail. In the end I lost interest because it turned into a mindless command-copying exercise for me. My only real takeaway was a much greater appreciation of the complexity of an operating system.

Has anyone followed this from start to finish and can now comfortably compile their own linux kernel from scratch?


The point of doing it is to understand how Linux hangs together, to remove the mystery in the wiring.

I don't think it's worth pursuing to a GUI, unless you have a burning desire to customize or fix brokenness in that experience you've had on other Linuxes.

The point is definitely not to continuously manage your own distro you built from scratch, unless that's your job or hobby.

I use the knowledge I gained from the experience to debug many random instances of brokenness on Linux, both personally and professionally. It also made compiling from source a fairly comfy option when necessary (e.g. you need to fork, or backport a piece of software).


I did as a teenager, probably close to 16 years ago. The process was glacial because I lived in a rural area with rudimentary internet access. Figure from start to finish it took me a year? Gave me a lot of time to consider the different pieces of software being compiled, which I probably would not have done with a fast internet connection. Really helped jump start my interest in systems programming and is surely one of the experiences that pushed me toward the career I have now.

I still feel pretty comfortable in compiling a linux from source. Userland specialization is also a neat topic.


Me too, although it was maybe 26 years ago. Learning autoconf/automake served me well, as I ended up working on embedded firmware on ISDN cards using the MC68302 and i960 for a couple of years, and the experience meant that building and setting up a cross compiler and build roots from scratch were a relatively easy step forward.


I followed this during winter break my first year of college and loved it! Helped me learn Linux and co. pretty quickly, and I used the machine as my main computer for the next 3.5 years of college.

Once you get X and Google Chrome it becomes pretty self-sustaining. Updating the kernel is easy, but upgrading the compiler / C library is a nightmare.

A lot of programs you might want to use provide binaries that make things a bit easier, but there were some annoying days of dependency chasing when I had to install from source. I ended up writing some custom scripts to ./configure, make, and make install a .tar.gz into a deb.
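For the curious, one way such scripts can work is to install into a staging tree with DESTDIR and hand it to dpkg-deb. This is a rough sketch, not the commenter's actual scripts; the package name `foo`, version, and paths are hypothetical, and the real build steps are shown commented out:

```shell
# Sketch: turn a "./configure && make && make install" build into a .deb.
# foo/1.0 and all paths here are hypothetical.
staging=/tmp/foo-staging
mkdir -p "$staging/DEBIAN"

# tar xf foo-1.0.tar.gz && cd foo-1.0
# ./configure --prefix=/usr && make
# make DESTDIR="$staging" install    # installs under the staging tree, not /

# minimal control file so dpkg can track the package
cat > "$staging/DEBIAN/control" <<'EOF'
Package: foo
Version: 1.0
Architecture: amd64
Maintainer: builder <builder@localhost>
Description: foo, built from source
EOF

# build the package (install later with: sudo dpkg -i /tmp/foo_1.0_amd64.deb)
command -v dpkg-deb >/dev/null &&
  dpkg-deb --build "$staging" /tmp/foo_1.0_amd64.deb || true
```

The payoff is that `dpkg -r foo` can later remove the files cleanly, which a bare `make install` never gives you.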

About the initial LFS process, I had messed up a few steps pretty bad, and had to restart / go back one or two times. The worst mistake was embarking on the journey first by installing everything into a USB flash drive. Eventually data corruption / loss ruined the yacc binary (or similar? hard to remember) and I spent days trying to figure out what went wrong. After moving to an SSD it worked better :)

Definitely one of my favorite and most rewarding projects I've done.


I haven’t done LFS, but I have built minimal Linux systems from scratch as part of my day job. FWIW, building a _basic_ usable system is fairly trivial, and requires nowhere near the amount of work described in LFS.

I could see value in building a basic Linux from scratch. Beyond that you are just copy pasting commands from READMEs for hundreds of projects.


Yes I also did this years ago. During that time I was actually also fascinated by the idea of building my own OS - which I never did though. But if you do this, you really understand how "GNU Linux" works end-to-end and stumble over a lot of interesting things with the upside of having a system tailored to your setup.

I started using Linux when it was still necessary to compile your own kernel to support literally anything beyond the absolute base setup.

The most difficult parts were bootstrapping the system and getting the glibc stuff right. The rest is indeed rather robotic, although I think there is some kind of package manager now included.


I followed through the whole process as an assignment for my operating systems class in college. Overall I had the same experience as you -- it was an exercise in copy and paste.


I was wondering if it's automatable, and the site already does that...

I also did the whole book (manually!) as a slacker student. I guess if you do it/let it run automatically as you read the book, you'll know your installed OS inside and out...


Automating LFS is where Gentoo came from.


What came first, ~~the chicken~~ Gentoo or ~~the egg~~ LFS?


Technically, Gentoo is a penguin.

https://en.wikipedia.org/wiki/Gentoo_penguin



I did this 15 years ago, before I gave gentoo a spin. I compiled my kernel for many years (make menuconfig).

Bottom line: It's not worth it.

I use debian now.


I put off doing LFS for several years: Ubuntu, then Debian, then OSX. I discovered NixOS a couple of years ago and never looked back.


I moved to Ubuntu for political reasons that were no fault of Gentoo's. I really miss the ability to easily patch core libs with custom patches, and portage (when it worked) was a lot easier to deal with than apt.


Care to elaborate on the "political reasons"? I've never heard of someone moving _to_ Ubuntu for political reasons.


If I get breached as a result of using gentoo, it looks bad. If I get breached using Ubuntu it’s at least along the lines of ‘well everyone else is using it anyway’. Semi safety in numbers, semi ‘nobody got fired for buying an IBM’.

It wasn’t my call, and it’s a defense move rather than an opinion on the state of Gentoos security. I know they take things very seriously, but my assertions aren’t enough to make my employer feel comfortable.


Amusing, in this context, that Google decided to base ChromeOS on Gentoo.


Sure, but I’m sure they leverage portage due to its sheer power and additional scrutiny over any ebuilds they source from upstream. Which, to be clear, the threat model assessed internally is intentionally malicious packages which are unintentionally merged into the portage tree by legitimate and well meaning developers. There’s no immediate issue with the portage or gentoo developers and package maintainers themselves.


Wait, compiling the kernel and coreutils is the easy part. It’s when you get to X11/gtk/Firefox that it starts to get bad.


Between 2000 and 2005. "Comfortably" compiling one's "own" kernel got more difficult though, because of the sheer growth of options. But for that https://kernelnewbies.org/ exists. Especially for tracking https://kernelnewbies.org/LinuxChanges if you don't want to just blindly apply your proven config to newer kernels, ignoring new features.



Thanks Dang!

It's interesting that in 2016 the top comment was about how out-of-date the documentation was, which made it very challenging for the novice to approach. When I did it in 2019, my experience was very different -- I marveled at the quality of the instructions and documentation, as well as how up-to-date it was.

In fact, just this week I was googling for some information about building GPG without having to deal with generating PDF documents, and I stumbled upon an LFS GPG page[0] that had been updated that day. There seems to be a fairly active community who is maintaining the guides.

[0]: http://www.linuxfromscratch.org/blfs/view/svn/postlfs/gnupg....


I think this is something every computer/tech/future programmer should have done at least once, even if you give up half way.

I did this when I was in early high school. I had just gotten into Linux, and this was super fun to do; it really showed me how an operating system worked, all the moving pieces, and how they all fit. I did it all on a 1 GHz machine with 256 MB of RAM and some slow DSL. It took me 4-5 days back then, but it was a blast.

I haven't done it since. But today I am a senior devops engineer, and that knowledge is still so valuable to me. It is so much easier for me to reason about systems and Linux in general.

Honestly, I am really tempted to go through this again as a weekend project in a VM or something.


Maybe. I did it and while it's certainly educational, most of it feels like a waste of time. The bulk of the entire process is running 'make' on random things and waiting. It's fine as a teenager (when I did it lol) but as an adult with a job? There are better/more productive ways to spend your time if you want to learn more about Linux probably.


What would you suggest as an alternative? I've never done LFS. I'm intrigued by it.

We use Debian in an IoT product similar in footprint to a Pi. I know Linux OK, but some areas are still pretty opaque to me (or have evolved: once upon a time I knew init pretty well; systemd I can manipulate, but I'm no guru).

So would LFS be a good fit for me? Or better some other educational adventure?


If your hardware is supported by NetBSD (to whichever necessary extent and use case), it offers the most advanced build system in my experience.

Think of it like the complete LFS, with integrated basic X11 (optional), automated. Either self-hosted from installation, or cross-compiled even from Microsoft Windows to whatever else is currently supported by it. One single way to operate on diverse systems. Even more advanced than the build system of FreeBSD, by which Gentoo's Portage was inspired. Similar thing for its pkgsrc, which is the package manager for anything not contained in that base system.

( http://pkgsrc.org / https://pkgsrc.se )

If you ever experienced that, you realize how big the mindfuck of any Linux is, and wish for being in a parallel universe where something like this would be the basic standard to build upon.

Skim the http://netbsd.org/docs/guide/en/

edit: To be clear, this is not the normal mode of operation, you don't have to do this to use it, but it is integrated, so that you can generate install images from which to bootstrap other systems.

http://netbsd.org/docs/guide/en/part-compile.html


Why wait? Maybe for the very first time doing it, afterwards do it in some VM, let it run in the background, suspend/resume as needed, deploy from there, and so on.

OK, impractical when you have only one system and are new to this, but otherwise?


I also did this once in early high school, but I personally didn't feel it was worth the time. I had already used Gentoo for a year when I did LFS though, so that probably affected the learning potential. (I do believe that most of what I know about Linux, I know from the year I used Gentoo.)


On http://www.linuxfromscratch.org/lfs/view/stable/chapter06/pk...:

> Symlink Style Package Management

For this I highly recommend https://zolk3ri.name/cgit/zpkg/ which I have been using for years now. It works wonderfully.

Environment variables and their defaults:

    ZPKG_SRC = ~/.local/pkg
    ZPKG_DST = ~/.local
    ZPKG_DB = ~/.db
It means that if you install anything from scratch, you have to `make install` (or the like, depending on the build system) it to, say, `~/.local/pkg/foo-1.0` and then run `zpkg link foo:1.0` to install (i.e. link) the "package". After that you just have to make sure you have added `~/.local/bin` to your `PATH` environment variable, and `~/.local/man` to your `MANPATH` environment variable (in your `~/.bash_profile` file). Seems to do the job. It does lack a README file for which I may contact the author. In any case, `zpkg --help` should be of tremendous help.
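The workflow above -- install each package into its own prefix, then link it into a shared one -- can be sketched with plain `ln -s`. This is a toy illustration of the symlink-style idea (in the spirit of zpkg or GNU Stow), not zpkg itself; the `/tmp/demo/*` directories are hypothetical stand-ins for the `ZPKG_SRC`/`ZPKG_DST` defaults:

```shell
# Symlink-style package management in miniature.
PKG_SRC=/tmp/demo/pkg     # each package gets its own install prefix here
PKG_DST=/tmp/demo/local   # the merged prefix you put on PATH

# step 1: "make install" a (fake) package into its own directory
mkdir -p "$PKG_SRC/foo-1.0/bin" "$PKG_DST/bin"
printf '#!/bin/sh\necho foo 1.0\n' > "$PKG_SRC/foo-1.0/bin/foo"
chmod +x "$PKG_SRC/foo-1.0/bin/foo"

# step 2: "link" the package into the merged prefix
# (roughly what `zpkg link foo:1.0` automates for every file in the package)
ln -sf "$PKG_SRC/foo-1.0/bin/foo" "$PKG_DST/bin/foo"

# step 3: with the merged bin dir on PATH, the tool just works;
# uninstalling is removing the symlinks, the real files stay in PKG_SRC
export PATH="$PKG_DST/bin:$PATH"
foo   # prints "foo 1.0"
```

The appeal for an LFS-style system is that every installed file is traceable back to its package directory with `readlink`, and removing a package can never clobber another one's files.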

By the way, I have noticed that someone created a package manager with the same name, but its initial commit was in 2019, while this one's was in 2017.


Oh, and yeah, it is written in OCaml, so you do need to have the OCaml compiler installed. I recommend installing it via `opam`, but your Linux distribution's package manager will suffice (simply `ocaml` on Arch Linux, for example). With regard to `zpkg`: if you run into any issues, bugs, or missing features, contact the author or me. You will not be able to reply to this message after some days, and I do not have contact details here, so you will have to contact the author then. :D


Whoops, correction: `ZPKG_DB` actually defaults to `~/.local/pkg/.db`, not `~/.db`.


The first time I did this in 2001 it took me a week. It's still up there as one of the most educational things I've ever done. A perfect graduate level class would be "here's a box full of computer parts, at the end of the semester I'll give you the grade you display on a web page served by the computer you build and compile from scratch".


So... basically pass/fail? Who's gonna get that far but type 'F' instead of 'A' into the HTML they author?


Puts skin in the game. No kidding in 1987 I actually did convince a professor to agree to an independent study course for me and a friend to build a voice synthesizer on an IBM PC proto board using the TI chipset available at the time and the only terms of the class would be that he gave us the grade my computer verbally asked him for at the end of the semester. We actually built two of them, one for each of us. It was wire-wrapped. We had no backup plan whatsoever and it was a brave/stupid thing to do, but we both got A's and the prof told us his only regret was that he didn't have us make one for him too. Even more skin was that it went into my own IBM PC, so if I blew that up not only wouldn't I have had enough credits to graduate but PCs were very expensive at the time. These days you can blow up a Raspberry Pi and who cares.


> A perfect graduate level class would be "here's a box full of computer parts, at the end of the semester I'll give you the grade you display on a web page served by the computer you build and compile from scratch".

I actually had a class not too different in spirit from that once, but it wasn't even close to graduate level. It was a class I took at Brunswick Community College while working on an A.S. degree in Computer Programming.

Here's how it went: I had been at BCC before, when I got an Associate in General Education degree. I had taken a few computer related classes, and had gotten to know the main instructor who taught most of the computer programming stuff. Maybe it helped that we were both named "Phil" but we struck up a mild acquaintanceship if not an outright friendship.

Fast forward a few years, after I had transferred to UNC-W, then dropped out, then returned to BCC to get another associate degree. I signed up for a class titled "Operating Systems", which turned out to be more of a "Survey of Operating Systems", not any kind of "OS internals" (keep in mind, we were at a community college here). Anyway, I walk into class the first day and my old friend Phil is the instructor. He looks at me, smiles, and says "I have a deal for you."

I don't know what the rest of the class did all semester, but he pointed me to a room off to the side of the main room we were in, that held:

1. An ancient IBM AS/400 minicomputer

2. The biggest stack of those big-ass reel tapes I've ever seen

3. A ginormous stack of manuals

The deal he offers me? Get an operating system installed on this machine, get it booting (except they don't call it "boot" in AS/400 land, they call it "IPL" for Initial Program Load or something to that effect), put it on the Internet, and be able to log into it remotely. If I do that, no matter how long it takes, I get an "A" for the class, and don't have to do any of the other work the others are doing.

Apparently this monstrosity had been donated to the college by a local company that decommissioned it. And I guess the college had nobody on staff (besides instructor Phil) who knew anything about the things. Not sure how long it had been sitting there before the job fell into my lap, but I digress...

As it were, it took a little while (at the time I'd never touched an AS/400 before), but I managed to get it going. Then I spent the rest of the semester coming in and just playing around on that thing. I got my "A" and even more, Phil helped me land my first real IT job as an AS/400 operator. Which all led, indirectly, through fits and starts, to where I am today.

I don't work specifically with AS/400 (iSeries now, I think) machines any more, but I have a sort of sentimental attachment to them I guess. Not so much for RPG/IV and SEU though.


So the only two grades possible are A and NULL?


Pretty much. But back in those days the central computer for the college, where they kept student records and stuff, was a Prime mainframe running PRIMOS, and I'm not sure how well it handled NULL. So who knows what (apparent) grade I would have wound up with in that case...


I wonder if it would be less work to write an OS with the sole purpose of pushing out packets for the hardcoded webpage…


Be careful with this! It's a slippery slope from LFS to Gentoo. I made the mistake of trying LFS once in college, and then I spent the next 10 years compiling everything from source! Beware!!


Heh - going on 17 years on Gentoo for me! And yes, I got there via LFS. As in "Wow, this LFS/BLFS stuff is a nightmare to maintain. I wonder if anyone has automated it..."

Gentoo has its problems, but I can't imagine life without it. The day I can't use Gentoo is the day I quit tech.


> I can't imagine life without it

The people who I know that use Gentoo are addicted to constant updates (they apply them more than once a day).


> The people who I know that use Gentoo are addicted to constant updates (they apply them more than once a day).

Quite the opposite for me. I struggle to do it once a month. I would often go months without it, but that turned out to be a really bad idea.

No - it's the USE flags that make it awesome. And the ability to apply custom patches, etc as needed.


Probably more addicted to all the compiling and stuff.

There is something about compiling non-trivial software, like the kernel, with all that text flashing by on the screen, that is purely satisfying to watch.


Case in point: I just got the warning from portage yesterday telling me it had been 33 days since my last sync.


Better to use a purely functional build system, like in NixOS, where you can download binaries and have the guarantee that they are the same as if you had built them yourself.


It's much closer to that than any alternative I am aware of, but that's one guarantee you don't have. It's important to limit your substituters (binary caches) to trusted servers, because Nix does not and cannot verify that the binary it receives corresponds to the source, only that the remote server claims it does. Even if the binary does correspond to the source, the build process may have picked up small details about the build system, meaning a local build would produce a different binary. Even if the build process does not pick up any such details, many packages do not have deterministic builds, meaning they might produce different binaries even if you build them multiple times in succession on your own computer.
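Concretely, that limiting is done in `nix.conf`. A minimal sketch follows; the key shown for cache.nixos.org is the widely published official one, but verify it against the Nix documentation before relying on it:

```ini
# /etc/nix/nix.conf -- accept substitutes only from servers you trust,
# and only when signed by a key you have explicitly listed
substituters = https://cache.nixos.org
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
```

Note this only pins *who* you trust; as the comment explains, it cannot prove the binary actually corresponds to the source.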


This is true. I am a NixOS fan, but when a piece of software asks me to use a Nix binary cache it can mask problems like a private git repo or other issues that would prevent a full build without me knowing. Worse yet, a Hydra box eating its own dogfood ...


Of course you can also use Nix or Guix in a more Gentoo-like fashion and forbid binary downloads, requiring everything to be built locally.


Funny, I use Gentoo, and thought LFS was too far...


Or you end up on a BSD :)


The horror! ;)


Been there, done that. Used LFS as my main system for a while... Now I can't be bothered to even compile the kernel or, for that matter, any piece of open-source software. Not that I have nothing more to learn, or that I abandoned coding or even being interested in this stuff. Problem is, software today is too complex and bloated, with too much of that complexity and bloat being accidental, and therefore it is not very interesting or worth one's time to learn and tinker with unless you are paid for it. In terms of learning, xv6 seems to be a much better proposition.


I think the two are a bit orthogonal--but do have some overlap. I recently had a job where I helped out with a bunch of sysadmin stuff. A bunch of the computers had local users and dns entries set locally. The rest of the network used NIS. They also periodically rented computers and didn't have a formal install process or OS image. I also moved while working there and had to do a lot of this remotely.

Knowing how all the pieces fit together from doing LFS and how to bootstrap them was really helpful even though we never compiled anything. It also helped when things get wedged (won't boot or a GUI freezes) and you can fix it without having to power cycle the machine, losing work.

I haven't taken an OS class or messed around with xv6 (I'd like to), but I think that would help with a different set of things.


LFS is great. Do it once, all the way through.

Then go back and automate it yourself. Make your own build process, scripts, auto download sources and automatically apply sets of patches. Make your own fakeroot system and assemble all three pieces into a working image file, auto tar the whole thing and make it easily deployable.

That... is how you come to fully understand everything that goes into a basic bare-bones yet fully functional, <50MB Linux Distribution.

Then do it again, but with a cross compilation toolchain (CLFS but using the build tooling you've just created).

I did this years ago for the OG Raspberry Pi. I was frustrated by the lack of a minimal distro that just had the basics for serving websites and nothing else.

Fast forward 6 months, all the above completed, amazed by how far I'd come... only to realize CentOS had been released for the RPi in the meantime.

I'll never regret that time spent, though. It really does unlock a "next level" understanding of what a distro is. There is no magic, just a lot of hard work.


I never really got into LFS; by the time I finished my build I was probably exhausted by all the sed'ing and patchin' and didn't have a single clue about where the pieces fit.

I did end up with a flawed system with funny bugs. I couldn't access the network with the usual tools but elinks managed to bypass something and get a TCP stack somewhere. But it failed to render properly on tty. No surprise I had no clue what I did wrong but it made me realize how much plumbing there was.


Wow, this brings back memories from the days LFS was just starting. I think I still have my home-made LFS CDs with source and binary tarballs from '99 and 2000. I got tired of building everything and chasing dependencies and packages by 2001, so I moved to, ahem, Slackware. I only switched to a reasonable distribution when Ubuntu came out, and it tracked GNOME, which I contributed to.

But LFS was a wonderful learning opportunity for me: it allows you to build an understanding of the nitty-gritty bits of the system, but the hurdles one has to overcome are very painful for a daily system. I think things improved a lot in the early 2000s when better build scripts were introduced and you actually had links to the packages you needed: back before that, there was no way anyone would build an LFS system over a weekend, as I see people reporting here (not least because most Internet access was dial-up).

So first up: have time and desire to learn. I would probably appreciate re-doing this with systemd to really internalize how things have changed from the inittab days.


I recommend this project to everyone. The experience points are worth three Linux-fu levels, easily.


Did this many years ago, it's fun.

Just checked it again quickly; the basic steps remain the same.

It still requires a separate partition. I would add a chapter about how to use a loop device to mount a file as a block device, then how to build LFS on that loop file and run it inside VirtualBox or KVM, to make it non-intrusive and safe to play with for beginners.

Many beginners will have installed their Linux on a single partition and probably don't have a spare partition to try LFS on, so a file-based approach, with a virtual machine in mind, could attract more users?
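A rough sketch of that loop-file approach follows. The image path and 10 GB size are hypothetical; the attach/mount/VM steps need root (or KVM access), so they're shown commented:

```shell
# File-backed LFS target: no spare partition needed.
truncate -s 10G /tmp/lfs.img                  # sparse file: allocates on write
command -v mkfs.ext4 >/dev/null &&
  mkfs.ext4 -F -q /tmp/lfs.img || true        # mkfs can format a regular file

# as root: attach a loop device and mount it as the $LFS partition
# losetup --find --show /tmp/lfs.img          # prints e.g. /dev/loop0
# mount /dev/loop0 /mnt/lfs
# ...then build LFS under /mnt/lfs as usual; the finished image can later
# boot in a VM without touching the host's real disks:
# qemu-system-x86_64 -m 2G -drive file=/tmp/lfs.img,format=raw
```

Because the file is sparse, it costs almost no disk space until the build actually writes to it, and a botched build is discarded with a single `rm`.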


I remember doing this all the way back in 2000 on an old Toshiba Tecra 500CDT with a 90Mhz Pentium. Not sure I ever got as far as being able to run startx, because every time I'd come up with more and more weird filesystem ideas (what if every package had its own separate hierarchy and we just linked things centrally? etc etc).

I do think this is a good education, although realistically I think you will learn as much (and probably more) on Arch these days, especially if you have to create your own AUR packages for some reason.


I remember running Slackware on a Toshiba Tecra 500CDT back in 1997. Man, it was like heaven back then compared to Windows 95. I forget how thick they were.


I built a basic X-Window Linux from Scratch many years ago. Then I tried to include one of the popular desktops and ran into circular dependency hell.

Apart from the kudos of saying "I did it" and learning a bit about the construction of GNU/Linux, there is no benefit in Linux from Scratch except as something crossed off your bucket list.

These days I 'buy built' and download a distro that many experts have put lots of time into, producing something that I alone could never compete with.


Do people actually run LFS systems as a daily driver (desktop or server or whatever)? I always thought it was a great way to see what it takes to put together a functioning distribution, but the LFS book's take on updates is basically that it's on you to follow mailing lists for security advisories for literally everything on your system, and update as needed. That sounds like it would take up way too much time and effort to be practical.


Some of my fondest computer memories are working through the LFS guide on a home-built computer in high school. Wish I still had the time to do stuff like that!


I’m surprised it’s still alive. I did it 14 years ago and used that system for a year. It was a great way to learn a lot about Linux fundamentals. If I did it today, I would try to use some layered FS to capture changes from the install script and archive those changes into an rpm package (or at least a tar archive), so everything’s under control. I don’t feel good just running make install, which could do anything.


Nice! I know the project maintainer and he is really methodical about everything he does. Nice work Gerard.


I worked through this in 2001(?); I printed it all out -- on paper! -- into a huge binder. Back then, monitors were small.

Was a fantastic way to learn how to grow a system. Horrible gateway drug to Gentoo Linux, of course.


I remember coming back from school 20 years ago to see if Linux or glibc had finished compiling.

I think it's mandatory if you really want to understand how a base Linux OS works.


Something I’ve been wondering: what’s the difference between this and buildroot?


"...the goal of LFS is to build a complete and usable foundation-level system. This includes all packages needed to replicate itself."

http://linuxfromscratch.org/lfs/view/stable/prologue/package...

Buildroot's target systems don't include a compiler - it's more for real-world embedded linux system generation.

https://buildroot.org/downloads/manual/manual.html#faq-no-co...


buildroot is used to cross-compile Linux systems. It's a tool. Linux from scratch is a guide that users are supposed to follow to learn more about Linux. It's mostly an educational tool.


That doesn't sound like a very big blocker; don't you "just" need someone to "package" a compiler chain?


Is this something that will allow you to create a new distro?


It can be a first step to creating a new distribution, but most people just do it as a learning experience.

Here's one example though of someone who went on to create a new distribution: http://www.nutyx.org/en/

To create a distribution, you need a package manager of some sort to manage software updates, and while LFS discusses some possible package managers that's not really its focus.


Yes, because a distro is a collection of software on top of the kernel.


Yes you can scratch that itch using LFS. (Hey, here's an idea for such distribution's name: "Linux From the Itch.")


Sure, Arch Linux was bootstrapped with LFS by Judd around 2001/2002. Back then pacman was written in bash, I think?


No, you build your own GNU/Linux... from scratch.



