Build your own Linux (buildyourownlinux.com)
187 points by ashitlerferad on Aug 16, 2017 | 48 comments



I was about to go hard on this because I came across a similar link a few days ago that had absolutely no mention of the excellent existing project on this topic, the LFS project [1].

This page does make multiple mentions of it, but in a very offhanded way.

I would encourage the people behind this URL to make a case up front about why someone should use this site as opposed to going directly to LFS, and what it has that LFS doesn't.

[1] http://www.linuxfromscratch.org


Yeah it seems very similar to LFS, including the verbiage on the pages. Wondering if the same people are behind this.

I first used LFS to build my own distro back in college, and that experience taught me a lot about the components of a Linux system, which is helpful to me even now. I used it again years later to build an extremely lightweight base for Docker containers. It's very versatile and I hope this project helps people as much as LFS helped me.


Exactly. What are the differences?

From an organizational standpoint, I see that buildyourownlinux.com is brought to us by Linux Academy, while linuxfromscratch.org is not affiliated with any other organization (as far as I can tell).

From a technical standpoint, I'm curious what differs. From what I recall, there are several reboots required during LFS, but at first glance I don't see any mention of them in BYOL.


One works and one isn't maintained.


Are you saying LFS isn't maintained? The last stable version was released in February 2017. RC1 for the next release was made today.

http://www.linuxfromscratch.org/lfs/news.html


LFS feels like a dead project. The project presents itself as a "book."

Some of the provided tarballs are riddled with old vulnerabilities (and served over insecure channels (!!!) with no integrity checks), so the project is relegated to educational purposes only and should not be used for building a secure, general-purpose system.

Does it support SATA and UEFI boot disks yet?


It is a book.

You can build LFS on any system that meets the requirements[1].

> Does it support SATA and UEFI boot disks yet?

The Linux kernel has supported SATA disks since forever. UEFI is also supported by most bootloaders. The distro you build by following LFS is what defines support for things like UEFI.

I think you're confusing LFS with maybe a livecd image they used to put out?

[1] http://www.linuxfromscratch.org/lfs/view/stable/chapter02/ho...


In the context of a Linux distribution, there's a fair bit more to supporting UEFI than the bootloader supporting it.

At a minimum, disk layouts are drastically different in UEFI and BIOS.

LFS is so woefully out of date that it isn't even funny.


from the book: "If your host hardware is using UEFI, then the 'make defconfig' above should automatically add in some EFI-related kernel options."

The book refers to a separate text document for more info on UEFI, including the requirement of an EFI partition on a GPT formatted disk. Dated 2017. (http://www.linuxfromscratch.org/hints/downloads/files/lfs-ue...)


LFS's current stable release is from February 2017, and an RC1 for the next version was released today. It's hardly dead, archaic styling aside. (Anyone else remember when books on specific software suites were relevant...?)

The links they give are to upstream. Some are already HTTPS, some redirect from HTTP. I found one case where upstream provides both HTTP and HTTPS and they link to HTTP. They should fix that.

They also provide md5sums of the tarballs, which get checked automatically in their automated build. They should switch to sha256, but there at least exists an integrity check.

I agree it's much more an educational project than a production one. But that would be true regardless; any system you're putting together by hand like this shouldn't be one that's going into production anywhere.


So sad to see something described as 'archaic styling' just because it's not 'flat design' garbage.


> Some of the provided tarballs are riddled with old vulnerabilities

Are you sure you're referencing the current stable version of the LFS book (8.0)? As noted elsewhere in this thread, there was a Live CD version of LFS which hasn't been maintained for several years, but the main book project, along with the ancillary BLFS (Beyond LFS) and ALFS (Automated LFS) are very much alive and well. LFS 8.0 gives you glibc 2.25, gcc 6.3.0, and a 4.9 kernel. If you want bleeding edge, the development version of the book is updated when upstream source packages are released.

> Does it support SATA and UEFI boot disks yet?

LFS has supported SATA ever since it was available in the kernel! There's also nothing preventing you from building a UEFI-capable LFS install, provided you set up the BIOS and grub properly.

> should not be used for building a secure, general purpose system.

Why not? LFS is more than capable of providing secure, general purpose systems -- desktop and server.


Besides that, LFS on its own was weirdly useless for understanding Linux. It's more like a Linux toolchain / sed tutorial. I made a bootable LFS twice, but it was strangely broken. BLFS is where the Linux knowledge is, IMO.


I guess it depends where you want the knowledge. When LFS first got started (and a few years before), I found that as a home user at that time I often had to build the kernel myself (maybe I was trying too hard to be cutting edge), know more about booting (LILO and GRUB), and on occasion I messed up bootable partitions. Recently, I was trying to build a custom version of GCC and its dependencies and found the LFS docs helpful.

From looking over the BLFS now, I haven't had a lot of use-cases for building more userland-ish things. I've tried to avoid compiling X or Display Managers and stick to package managers if I wanted to try a new Desktop Environment. I've found Arch docs (or man pages) really helpful in configuring these kinds of things when I do need it. I can see this being more useful if I wanted to be more of a sysadmin. I remember trying to customize PAM and setup Kerberos years ago and feeling like there wasn't much help putting this all together or troubleshooting (it may have been prior to BLFS).

When you said "weirdly useless for understanding Linux", do you mean the Linux kernel in the literal sense? Or the low-level system stuff of how things are put together?


I meant the whole Linux system, not the kernel. I guess I'm wrong since it's not GNU/Linux From Scratch. But the two are entangled.


I have to wonder: is LFS maintained? I know a year or two ago it seemed like the documentation and the CD/DVD were quite old.

LFS is great, no complaints and no offense intended; I know it must take a lot of work to keep it updated.

Then again, lower down on the buildyourownlinux site, the wget links point to an LFS URL, so it must be the same people, or LFS rebranded?


You can always check out the latest[1] version in your browser. Build is from Aug 15th at the time of my writing.

[1] http://www.linuxfromscratch.org/lfs/view/development/


Seriously though, is this a fork of LFS or what?


Yes, Linux From Scratch has been a product for a very long time.


These days, for the situations in which I want to make my own distro, I go to [buildroot]. I recently (well, 2 years ago) built an image to replace DOS on a Pentium 75 system. Built a small distro containing nothing more than a couple of users and minicom, with minicom attaching to the serial port, and ssh access. Sure, I could have done this with a Raspberry Pi and a USB serial adapter, but this was more of an educational side project. I was already re-using the PC power supply and case to power a standalone APRS setup, and figured I should be able to re-use the computer and serial port to provide remote access.

[buildroot]: https://buildroot.org/


+1 for buildroot. I've used it both professionally and on personal projects. All of my rpi projects run on tiny buildroot images (usually with a single go binary). I love that it produces an image I can write straight to an SD card that is ready to go without me having to plug the RPI into a monitor to configure it.

However LFS and BYOL aren't really about making it easy to build a "distro". They really are about learning how all the pieces fit together.


The harder part of building a Linux distribution/system is getting it to use the hardware you have and exploit its full potential, especially hardware that was not built with Linux in mind.

I switched an old Mac to run Linux some time ago and have documented the issues I faced and the limitations I still live with. Most of it is related to decent trackpad support, GUI options for apps/functions I need, and the customization of keyboard shortcuts system-wide (coming from OS X/macOS). A lot of the information, as well as the applications I found, was simply outdated and not maintained. I've also spent quite a bit of time just to get the system into a shape that's a bit more amenable to me (as opposed to adjusting myself completely to how a specific distribution of desktop Linux works by default). It's nowhere close to perfect for me, but it's the best choice with constant OS/security updates for old hardware that's unsupported by the manufacturer (Apple, in this case).

My advice to anyone wanting to use Linux would be to look for hardware that has been built with Linux in mind. That will save you time, which you can then use to do more productive work/learning (unless your area of interest is in learning about hardware compatibility, developing drivers yourself, tweaking various config files, etc.).


> look for hardware that has been built with Linux in mind

This boils down to a simple thing to check: are drivers for this piece of hardware included in the vanilla kernel?

If the answer is yes, it's a good sign, even more so when the driver comes from the company building the hardware.


> Our goal is to produce a small, sleek system well-suited for hosting containers or being employed as a virtual machine.

This is probably one of the few use cases where building your own linux distro is a sensible idea.

Other projects like LFS encourage desktop users to build their own linux-distro, which is beyond insanity in my opinion. Keeping an LFS system up to date with security patches should be virtually impossible unless you set aside a decent portion of your daily free time for this task, which probably nobody is willing to do. I can see how building your own linux distro is educational and fun, but in terms of security and maintenance it is an absolute nightmare.


> Keeping an LFS system up to date with security patches should be virtually impossible unless you set aside a decent portion of your daily free time for this task

It's really not that hard once you know what software is actually on your system. LFS enforces discipline because you're less likely to blindly install hundreds of useless packages just because your distro bundles them. On a nice svelte system, all you need to do is monitor a few mailing lists and decide if something is worth upgrading.

More often than not, you'll be able to patch your systems faster than a distro will be able to release an update, because the source is in your hands at the same time.


Tangent: when I was 18, I wanted to be hardcore. I thought I was above Fedora and a (then up-and-coming) Ubuntu. So I printed out the compile-from-scratch manual for Gentoo. It was a solid 20 pages, front and back. I made it four pages in and gave up.

This link gave me flashbacks to that manual.


4 pages? That's the table of contents and the introduction :p


Funny how I was just looking for something like this. Just finished a whirlwind summer semester using Gentoo in a course on Linux admin, so I've been wanting to dig in a bit deeper.


Back in my day it was http://www.linuxfromscratch.org/

Anyone familiar enough with both to know the difference? I used the LFS docs recently when trying to build a custom version of GCC.


Looks like this site may go into more detail than the LFS series does about how the specific packages interact to create a "standard" Linux distribution, but I'm not sure how much has changed regarding that in recent times... Even the faded BLFS book I was gifted in 6th grade seems more focused on just getting the packages built and installed into the LFS root than on going in depth about their internals/function. Even though tools like buildroot/OE/Yocto offer much better ways to make a Linux from scratch, I think both of these offer a great way to quickly gain at least some insight into Linux + userspace.


Honestly there are loads of projects where people don't analyse the existing solutions and just start, you know, from scratch.


How feasible would it be to build a true 64-bit system? (i.e., one where the ints are also 64 bits)


Feasible? Very. Detrimental to performance on average, though. I remember sitting through many LP64 versus ILP64 meetings almost 20 years ago when the transition to 64-bit was upon us. (At least upon us CPU designers...) Turns out 64-bit ints cause memory usage and cache pressure that outweigh the benefits.

A lot of people don't think about the effect of design decisions on cache performance. Die area devoted to cache can't be used for other things, and the process of filling the cache with larger entities impacts memory bandwidth utilization. So 64-bit ints, when you mostly only need 32 bits, can turn out to be a loser. This is also why the pendulum has swung back to CISC -- more functionality crammed into a smaller cache footprint, and the compilers are much better at targeting CISC instruction sets than they were at the height of RISC.
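
A rough sketch of the footprint argument (assuming a typical LP64 toolchain on 64-bit Linux; the struct and its fields are made up purely for illustration):

  #include <stdio.h>

  /* Four counters that never need more than 32 bits each. */
  struct counters {
      int hits;
      int misses;
      int retries;
      int errors;
  };

  int main(void)
  {
      /* LP64 (typical 64-bit Linux) prints: int=4 long=8 ptr=8 struct=16.
         Under an ILP64 model every plain int becomes 8 bytes, so the same
         struct grows to 32 bytes: twice the cache footprint for the same
         information. */
      printf("int=%zu long=%zu ptr=%zu struct=%zu\n",
             sizeof(int), sizeof(long), sizeof(void *),
             sizeof(struct counters));
      return 0;
  }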


I disagree with the last part of your comment: many RISC architectures added a 16-bit extension to their ISA. So you can have RISC and code density similar to CISC, with no need to keep a bloated instruction decoder on the CPU.


The instruction decoder is tiny compared to the rest of the components on the chip, and it can also be (partially) autogenerated. I still wouldn't want to be on the team doing that, though...


That's not what "64-bit system" means. "64-bit system" means that addresses (i.e. pointers) are 64 bits wide, not ints.


n-bit system has always been a bit of a vague term --- think of all the CPUs out there described as "8-bit" --- the 8080/8085/Z80 family, 6502, 8051, etc. None of them have an 8-bit address space, since otherwise they would only be able to address 256 bytes.

Even defining the address space is not trivial, with things like virtual vs. physical ("32-bit" x86 since the Pentium Pro actually has 36-bit physical addresses) and bankswitching (common on MCUs).

Likewise, data width is equally as complex: the Z80, commonly known as 8-bit, actually has operations on 16-bit "register pairs". 32-bit x86 with MMX has 64-bit registers, and with x87 has 80-bit wide quantities manipulable in a single instruction.

It just so happens that we hit a sweet spot for many years with "32-bit" x86 which had 32-bit integers and pointers, but the subtleties of classifying a system as being n-bit remain.

That said, some systems (mostly mainframes/supercomputers) are "true 64/64-bit" with 64-bit registers and 64-bit address space.


And I believe all the major 64-bit instruction sets actually use 48-bit addresses at present, so software/compilers can use the top 16 as tag bits. (In addition to the bottom 3, assuming alignment on 8-byte boundaries).


The address space in x86-64 is specifically designed to prevent the use of unused address bits for tag bits.

"The AMD specification requires that the most significant 16 bits of any virtual address, bits 48 through 63, must be copies of bit 47 (in a manner akin to sign extension). If this requirement is not met, the processor will raise an exception."

https://en.wikipedia.org/wiki/X86-64#Virtual_address_space_d...


That applies only when you are actually using such an address for an access; you are free to store anything in those upper 16 bits as long as they are masked off when it's actually used to access memory.
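
Something like this, as a minimal sketch (the helper names are made up; it assumes x86-64, a user-space pointer with bit 47 clear, and the usual two's-complement arithmetic right shift):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define TAG_SHIFT 48

  /* Pack a 16-bit tag into the (currently unused) top bits of a pointer. */
  static uintptr_t tag_ptr(void *p, uint16_t tag)
  {
      return ((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1))
           | ((uintptr_t)tag << TAG_SHIFT);
  }

  /* Strip the tag, restoring a canonical address by sign-extending from
     bit 47 (the form x86-64 requires before the pointer is dereferenced). */
  static void *untag_ptr(uintptr_t tagged)
  {
      return (void *)(uintptr_t)((intptr_t)(tagged << 16) >> 16);
  }

  static uint16_t get_tag(uintptr_t tagged)
  {
      return (uint16_t)(tagged >> TAG_SHIFT);
  }

  int main(void)
  {
      int *x = malloc(sizeof *x);
      *x = 42;

      uintptr_t t = tag_ptr(x, 0xBEEF);
      printf("tag=%#x value=%d\n", (unsigned)get_tag(t), *(int *)untag_ptr(t));

      free(x);
      return 0;
  }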


Decades ago I read an article that pointed out that a corollary to Moore's law is that address bits will be consumed at the rate of 1.5 address bits per year, so it is easy to predict when an architecture will run out of address bits.


> And I believe all the major 64-bit instruction sets actually use 48-bit addresses at present, so software/compilers can use the top 16 as tag bits.

Not at all a safe assumption in general; architectures evolve over time, and in addition, that address space is split such that kernel addresses have those bits set, not cleared.


Didn't mean to imply that was a good idea, just that it's possible. It looks like the top supercomputers are getting close to surpassing 64 petabytes (56 bits byte-addressed) of virtual memory space, so 16-bit tags are right out. A 48-bit byte-addressed memory space is 256 terabytes, which is getting snug for high-end servers but still 15±5 years out for consumer hardware.

  48-bit byte-addressed: 256 terabytes
  48-bit word-addressed:   2 petabytes
  56-bit byte-addressed:  64 petabytes
  56-bit word-addressed: 512 petabytes
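
As a quick sanity check of those figures (assuming 8-byte words and binary TiB/PiB):

  #include <stdio.h>

  int main(void)
  {
      /* 1 TiB = 2^40 bytes, 1 PiB = 2^50 bytes. */
      unsigned long long tib = 1ULL << 40, pib = 1ULL << 50;
      printf("48-bit byte-addressed: %llu TiB\n", (1ULL << 48) / tib);     /* 256 */
      printf("48-bit word-addressed: %llu PiB\n", (1ULL << 48) * 8 / pib); /*   2 */
      printf("56-bit byte-addressed: %llu PiB\n", (1ULL << 56) / pib);     /*  64 */
      printf("56-bit word-addressed: %llu PiB\n", (1ULL << 56) * 8 / pib); /* 512 */
      return 0;
  }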


NVDIMMs are about to change this: they're addressed like memory but hold around 10 times as much data, so address space is about to jump.


> ... addresses (i.e. pointers) are 64 bits wide...

I guess what you mean is that the address bus is 64 bits wide...


(as already mentioned, ints being 64-bit does not make it any more 64-bit)

It may be just a bit of hacking in some GCC platform configuration files (I'm guessing), and getting possibly tens to hundreds of packages to work with that change (not all software is written completely portably!).

However, there is no good reason to do that. It would literally make things use more memory and run slower (as a consequence of the former). Any program that actually needs 64-bit integers just uses int64_t etc.
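
For example (a minimal sketch; the variable names are arbitrary):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int32_t small_counter = 0;            /* 4 bytes is plenty here   */
      int64_t bytes_served  = 5000000000LL; /* needs more than 32 bits  */

      small_counter++;
      bytes_served += 123;

      /* The PRId32/PRId64 macros expand to the right printf specifiers
         regardless of the platform's data model. */
      printf("counter=%" PRId32 " bytes=%" PRId64 "\n",
             small_counter, bytes_served);
      return 0;
  }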


You shouldn't be using 'int' anyway. Use int32_t or int64_t, etc.


Ah yes, now to update a few million lines of code. Be right back.



