I don't understand the appeal of trying to strip the Linux kernel down, then introducing additional specialized code like nolibc, in order to obtain a hyper-minimal binary that can run a single program on bare metal.
Wouldn't it be easier to just use a simpler POSIX-like OS to begin with? Or better yet, a so-called "library kernel" that effectively turns a userspace program into its own OS?
What's the upside of using the (relatively large and complex) Linux kernel for this? I doubt it's reliability, since the kernel in this particular configuration is essentially untested.
The article goes into some detail about different reasons you might want a minimal userland -- test harnesses, recovery environments, and pseudo-embedded environments. I do something similar in the test framework for https://github.com/jmillikin/rust-fuse -- to perform an integration test, I boot Linux in QEMU with a tiny custom /init.
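To make that concrete, here is a rough sketch of what such a PID-1 /init can look like. This is not the actual rust-fuse harness; the /test path and the power-off step are assumptions for illustration.

    /* Rough sketch of a tiny PID-1 /init for a QEMU test VM -- not the actual
       rust-fuse harness; the /test path and power-off step are assumptions. */
    #include <sys/mount.h>
    #include <sys/reboot.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* PID 1 starts with almost nothing mounted; set up the basics. */
        mount("proc",  "/proc", "proc",     0, NULL);
        mount("sysfs", "/sys",  "sysfs",    0, NULL);
        mount("dev",   "/dev",  "devtmpfs", 0, NULL);

        /* Run the test binary baked into the initramfs and collect its status. */
        pid_t pid = fork();
        if (pid == 0) {
            execl("/test", "/test", (char *)NULL);
            _exit(127);
        }
        int status = 0;
        waitpid(pid, &status, 0);

        /* Power the VM off so QEMU exits; PID 1 has CAP_SYS_BOOT. */
        sync();
        reboot(RB_POWER_OFF);
        return 0;
    }

Statically linked, that binary plus the test executable is the whole initramfs; QEMU boots the kernel, the test runs, and the VM powers itself off.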
More generally, the Linux kernel has a lot of code in it that is of generally reasonable quality and has received a lot of benchmarking/testing from well-resourced users. Sure, I could run some unikernel with a third-party network stack and SCSI drivers and ext4 implementation, but Linux already has all that stuff and it's ubiquitous. Why would I care about the extra ~30 MiB of RAM or whatever that it takes?
And that's before we even get to the topic of sharing code between environments. I can run the same unmodified binary on my desktop and on a minimal kernel-only Linux, which is not generally true of most alternative kernels.
> Wouldn't it be easier to just use a simpler POSIX-like OS to begin with?
The OS would be simpler, but the job would be harder.
The point of the Linux kernel is that it's familiar. Many developers know it from the inside, many have written device drivers or file systems for it, and its network stack is well understood. There's plenty of reference material and expertise.
POSIX is an ancient standard which, while still useful, does not give you enough to just switch to another compliant OS. io_uring is not POSIX, eBPF is not POSIX, and both can be hugely important for your embedded system.
Even with that highly minimal nolibc setup you can still, AFAICT, run multiple threads (no pthreads, but clone() is available), so a fairly complex architecture is possible; see the sketch below. I didn't notice whether cgroups are surfaced; with them, you could add defense in depth.
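As a sketch of spawning a thread-like child without pthreads -- shown here with the glibc clone() wrapper for readability, since under nolibc you would set up the child stack and issue the syscall yourself:

    /* Sketch: a thread-like child via clone(), no pthreads involved. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int worker(void *arg)
    {
        write(1, arg, 7);            /* raw syscalls only, no stdio needed */
        return 0;
    }

    int main(void)
    {
        size_t sz = 64 * 1024;
        char *stack = malloc(sz);
        if (!stack)
            return 1;

        /* clone() takes the top of the downward-growing stack. Sharing the
           address space (CLONE_VM) makes the child behave like a thread. */
        pid_t pid = clone(worker, stack + sz,
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD,
                          "worker\n");
        if (pid < 0)
            return 1;

        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }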
All of this comes on top of Linux's superior tooling: compilers, drivers for nearly everything, and other creature comforts, usually FOSS.
> I don't understand the appeal of trying to strip the Linux kernel down, then introducing additional specialized code like nolibc
The appeal is that you can eliminate every single dependency. Linux is the only kernel with a stable userspace interface; it's the only system where you can do this.
With a single six-argument system call function, you can do anything you want on Linux. There's no need for anything else.
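For illustration, here is a sketch of such a wrapper on x86-64; nolibc's real per-architecture wrappers differ in detail, but the idea is the same.

    /* Sketch of a six-argument syscall wrapper for x86-64; register
       assignments differ on other architectures. */
    #include <asm/unistd.h>          /* __NR_write, __NR_exit */

    static long sys6(long nr, long a1, long a2, long a3,
                     long a4, long a5, long a6)
    {
        register long r10 __asm__("r10") = a4;
        register long r8  __asm__("r8")  = a5;
        register long r9  __asm__("r9")  = a6;
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "0"(nr), "D"(a1), "S"(a2), "d"(a3),
                            "r"(r10), "r"(r8), "r"(r9)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        sys6(__NR_write, 1, (long)"hello\n", 6, 0, 0, 0);
        sys6(__NR_exit, 0, 0, 0, 0, 0, 0);
    }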
> What's the upside of using the (relatively large and complex) Linux kernel for this?
The kernel's massive set of features and drivers, for one.
> I doubt it's reliability, since the kernel in this particular configuration is essentially untested.
This is not a kernel configuration, though. nolibc is purely user space code, and user space is completely separate from the Linux kernel: the binary talks to the same syscall interface every distribution exercises.