This page does make multiple mentions of it, but in a very offhanded way.
I would encourage the people behind this URL to make a case upfront about why someone should use this site as opposed to going directly to LFS, and what they offer that LFS doesn't.
I first used LFS to build my own distro back in college, and that experience taught me a lot about the components of a Linux system, which is helpful to me even now. I used it again years later to build an extremely lightweight base for Docker containers. It's very versatile, and I hope this project helps people as much as LFS helped me.
In terms of organizational structure, I see that buildyourownlinux.com is brought to us by Linux Academy, while linuxfromscratch.org is not affiliated with any other organization (as far as I can tell).
From a technical standpoint, I'm curious what differs. From what I recall, there are several reboots required during LFS - but at first glance I don't see any mention of them in BYOL.
Some of the provided tarballs are riddled with old vulnerabilities (and served over insecure channels (!!!) with no integrity checks), so this should be treated as educational only and should not be used for building a secure, general-purpose system.
Does it support SATA and UEFI boot disks yet?
You can build LFS on any system that meets the requirements.
> Does it support SATA and UEFI boot disks yet?
The Linux kernel has supported SATA disks since forever. UEFI is also supported by most bootloaders. The distro you build by following LFS is what defines support for things like UEFI.
I think you're confusing LFS with maybe a livecd image they used to put out?
At a minimum, disk layouts are drastically different in UEFI and BIOS.
LFS is so woefully out of date that it isn't even funny.
The book refers to a separate text document for more info on UEFI, including the requirement of an EFI partition on a GPT formatted disk. Dated 2017. (http://www.linuxfromscratch.org/hints/downloads/files/lfs-ue...)
The links they give are to upstream. Some are already HTTPS, some redirect from HTTP. I found one case where upstream provides both HTTP and HTTPS and they link to HTTP. They should fix that.
They also provide md5sums of the tarballs, which get checked automatically in their automated build. They should switch to sha256, but at least an integrity check exists.
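For what it's worth, checking a digest yourself is cheap. Here's a minimal sketch of computing a SHA-256 over a downloaded tarball, assuming OpenSSL's libcrypto is available (compile with -lcrypto); in practice you'd just run sha256sum and compare against a published value, but this is all that boils down to:

    /* sha256_file.c -- print the SHA-256 digest of a file (sketch) */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <tarball>\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);      /* hash the file in chunks */

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        EVP_DigestFinal_ex(ctx, md, &len);
        EVP_MD_CTX_free(ctx);
        fclose(f);

        for (unsigned int i = 0; i < len; i++)
            printf("%02x", md[i]);
        printf("  %s\n", argv[1]);
        return 0;
    }

Compare the printed digest against whatever the project publishes and refuse to build on a mismatch.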
I agree it's much more an educational project than a production one. But that would be true regardless; any system you're putting together by hand like this shouldn't be one that's going into production anywhere.
Are you sure you're referencing the current stable version of the LFS book (8.0)? As noted elsewhere in this thread, there was a Live CD version of LFS which hasn't been maintained for several years, but the main book project and the ancillary BLFS (Beyond LFS) and ALFS (Automated LFS) projects are very much alive and well. LFS 8.0 gives you glibc 2.25, gcc 6.3.0, and a 4.9 kernel. If you want bleeding edge, the development version of the book is updated when upstream source packages are released.
LFS has supported SATA ever since it was available in the kernel! There's also nothing preventing you from building a UEFI-capable LFS install, provided you set up the firmware and GRUB properly.
> should not be used for building a secure, general purpose system.
Why not? LFS is more than capable of providing secure, general purpose systems -- desktop and server.
Looking over BLFS now, I haven't had a lot of use cases for building more userland-ish things. I've tried to avoid compiling X or display managers and stick to package managers if I want to try a new desktop environment. I've found the Arch docs (or man pages) really helpful in configuring these kinds of things when I do need to. I can see this being more useful if I wanted to be more of a sysadmin. I remember trying to customize PAM and set up Kerberos years ago and feeling like there wasn't much help putting it all together or troubleshooting (it may have been prior to BLFS).
When you said "weirdly useless to understand linux" do you mean the linux kernel in the literal sense? Or the low level system stuff of how things are put together?
LFS is great, no complaints, no offense intended; I know it must take a lot of work to keep it updated.
Then again, lower down on the buildyourownlinux site the wget links point to an LFS URL, so it must be the same people, or LFS rebranded?
However LFS and BYOL aren't really about making it easy to build a "distro". They really are about learning how all the pieces fit together.
I switched an old Mac to run Linux sometime ago and have documented issues I faced and the limitations I still live with. Most of it is related to decent trackpad support, GUI options for apps/functions I need, and the customization of keyboard shortcuts system wide (coming from OS X/macOS). A lot of information as well as applications I found were simply outdated and not maintained. I've also spent quite a bit of time just to get the system in a shape that's a bit more amenable to me (as opposed to adjusting myself completely to how a specific distribution of desktop Linux works by default). It's nowhere close to perfect for me, but it's the best choice with constant OS/security updates for old hardware that's unsupported by the manufacturer (Apple, in this case).
My advice to anyone wanting to use Linux would be to look for hardware that has been built with Linux in mind. That will save you time, which you can then use to do more productive work/learning (unless your area of interest is in learning about hardware compatibility, developing drivers yourself, tweaking various config files, etc.).
This boils down to a simple thing to check: are drivers for this piece of hardware included in the vanilla kernel?
If the answer is yes, it's a good sign, even more so when the driver comes from the company building the hardware.
This is probably one of the few use cases where building your own linux distro is a sensible idea.
Other projects like LFS encourage desktop users to build their own Linux distro, which is beyond insanity in my opinion. Keeping an LFS system up to date with security patches is virtually impossible unless you set aside a decent portion of your daily free time for the task, which probably nobody is willing to do. I can see how building your own Linux distro is educational and fun, but in terms of security and maintenance it is an absolute nightmare.
It's really not that hard once you know what software is actually on your system. LFS enforces discipline because you're less likely to blindly install hundreds of useless packages just because your distro bundles them. On a nice svelte system, all you need to do is monitor a few mailing lists and decide if something is worth upgrading.
More often than not, you'll be able to patch your systems faster than a distro will be able to release an update, because the source is in your hands at the same time.
This link gave me flashbacks to that manual.
Anyone familiar enough with both to know the difference? I used the LFS docs recently when trying to build a custom version of GCC.
A lot of people don't think about the effect of design decisions on cache performance. Die area devoted to cache can't be used for other things, and filling the cache with larger entities eats into memory bandwidth. So 64-bit ints, when you mostly only need 32 bits, can turn out to be a loser. This is also why the pendulum has swung back to CISC -- more functionality crammed into a smaller cache footprint, and the compilers are much better at targeting CISC instruction sets than they were at the height of RISC.
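To put a rough number on the cache-footprint point, a quick sketch, assuming a typical 64-byte cache line:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const size_t line = 64;  /* typical L1 line size -- an assumption, varies by CPU */
        printf("32-bit ints per line: %zu\n", line / sizeof(int32_t));  /* 16 */
        printf("64-bit ints per line: %zu\n", line / sizeof(int64_t));  /*  8 */
        return 0;
    }

So a scan over an array of 64-bit values touches twice as many cache lines, and uses twice the memory bandwidth, as the same scan over 32-bit values.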
Even defining the address space is not trivial, with things like virtual vs. physical ("32-bit" x86 since the Pentium Pro actually has 36-bit physical addresses) and bankswitching (common on MCUs).
Likewise, data width is equally complex: the Z80, commonly known as 8-bit, actually has operations on 16-bit "register pairs". 32-bit x86 with MMX has 64-bit registers, and with x87 has 80-bit-wide quantities manipulable in a single instruction.
It just so happens that we hit a sweet spot for many years with "32-bit" x86 which had 32-bit integers and pointers, but the subtleties of classifying a system as being n-bit remain.
That said, some systems (mostly mainframes/supercomputers) are "true 64/64-bit" with 64-bit registers and 64-bit address space.
"The AMD specification requires that the most significant 16 bits of any virtual address, bits 48 through 63, must be copies of bit 47 (in a manner akin to sign extension). If this requirement is not met, the processor will raise an exception."
Not at all a safe assumption in general; architectures evolve over time, and in addition, that address space is split such that kernel addresses have those bits set, not cleared.
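The "copies of bit 47" rule is just sign extension of the low 48 bits, which also makes the kernel-half point concrete: the upper (kernel) half of the canonical range begins at 0xffff800000000000. A small sketch, assuming the two's-complement conversion and arithmetic right shift that gcc/clang provide:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* An x86-64 virtual address is canonical when bits 63:48 are copies of
     * bit 47, i.e. the value equals the sign extension of its low 48 bits. */
    static bool is_canonical(uint64_t va)
    {
        int64_t extended = (int64_t)(va << 16) >> 16;  /* sign-extend bit 47 upward */
        return (uint64_t)extended == va;
    }

    int main(void)
    {
        printf("%d\n", is_canonical(0x00007fffffffffffULL)); /* 1: top of the user half     */
        printf("%d\n", is_canonical(0xffff800000000000ULL)); /* 1: start of the kernel half */
        printf("%d\n", is_canonical(0x0000800000000000ULL)); /* 0: inside the hole, faults  */
        return 0;
    }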
48-bit byte-addressed: 256 terabytes
48-bit word-addressed: 2 petabytes
56-bit byte-addressed: 64 petabytes
56-bit word-addressed: 512 petabytes
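Those figures are easy to sanity-check; a quick sketch, assuming 8-byte words for the word-addressed cases:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t tib = 1ULL << 40, pib = 1ULL << 50;

        printf("48-bit byte-addressed: %llu TiB\n",
               (unsigned long long)((1ULL << 48) / tib));        /* 256 */
        printf("48-bit word-addressed: %llu PiB\n",
               (unsigned long long)((1ULL << 48) * 8 / pib));    /*   2 */
        printf("56-bit byte-addressed: %llu PiB\n",
               (unsigned long long)((1ULL << 56) / pib));        /*  64 */
        printf("56-bit word-addressed: %llu PiB\n",
               (unsigned long long)((1ULL << 56) * 8 / pib));    /* 512 */
        return 0;
    }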
It may just be a bit of hacking in some GCC platform configuration files (I'm guessing), plus getting possibly tens to hundreds of packages to work with that change (not all software is written completely portably!).
However, there is no good reason to do that. It would literally make things use more memory, and slower as a consequence. Any program that actually needs 64-bit integers just uses int64_t etc.
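A small sketch of that last point -- keep plain int at 32 bits and reach for int64_t only where the value can actually overflow:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t small_counter = 42;          /* plenty for a loop counter          */
        int64_t big_value = 10000000000LL;   /* ~1e10, does not fit in 32 bits     */

        printf("big_value = %" PRId64 "\n", big_value);
        printf("sizeof(int32_t) = %zu, sizeof(int64_t) = %zu\n",
               sizeof small_counter, sizeof big_value);
        printf("1M elements: %zu KiB as int32_t vs %zu KiB as int64_t\n",
               1000000 * sizeof(int32_t) / 1024,
               1000000 * sizeof(int64_t) / 1024);
        return 0;
    }

Making int itself 64-bit would pay that doubled footprint for every integer in every program, needed or not.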