It has certain peculiarities like defaulting to logging in and running everything other than web browsers as root, so I’d like to switch to Tinycore if it’s different in that respect. My main concern is package availability — Puppy is compatible with the repos of major distros like Ubuntu.
The file system.
I'm pretty sure you can't boot into an Ubuntu live disk, detach the usb, and keep using the system without any errors or loss of functionality.
I used to have a flash drive with GRUB installed and several "Live CD" ISOs. It had two boot entries per ISO: the first booted it normally, and the second copied the squashfs to a ramdisk before booting.
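Roughly, the grub.cfg entries looked like the fragment below (the ISO path and file names are placeholders, and the exact casper options vary by Ubuntu release):

```
# Hypothetical grub.cfg fragment: two entries for one ISO, where the second
# adds "toram" so casper copies the squashfs into RAM and the stick can be pulled.
menuentry "Ubuntu Live (run from USB)" {
    set isofile=/isos/ubuntu-desktop.iso      # placeholder path on the stick
    loopback loop $isofile
    linux  (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash
    initrd (loop)/casper/initrd.lz
}

menuentry "Ubuntu Live (copy to RAM first)" {
    set isofile=/isos/ubuntu-desktop.iso
    loopback loop $isofile
    linux  (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile toram quiet splash
    initrd (loop)/casper/initrd.lz
}
```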
Tinycore is especially great when you don't have USB3 available, because you don't have to wait minutes for >1GB of data to be copied to RAM.
I ended up bringing a flash drive with Tinycore and a floppy with Das U-Boot (the BIOS didn't have a USB boot option). I was able to get a snappy Linux system with Chromium running, all from a ramdisk, while everyone else spent most of their time waiting around for Windows XP on slow platter drives. It was great.
I couldn't tell you how complete or solid it was; I was in middle school back then and I doubt I ever did any "work" beyond toy programs in vim, but it was my first contact with Linux and a major factor that pushed me toward getting into CS.
Glad to see the project is still going strong after all these years.
I was already a Linux geek by then, but there are plenty of others who found that class to be an eye opening experience.
Thankfully I got off with a stern warning and a "whatever you did, I can't see your screen any more, so you better fix it".
Their only really effective way to "spy on" us was to either log DNS requests or literally look over our shoulders.
The screen recorder was the primary burner of CPU time.
The plus side was that after this episode I was allowed to bring my own laptop. I had a guest account on the wifi, which I knew logged all my DNS traffic.
OpenOffice and Firefox on an Ubuntu 10.04 LiveUSB ran laps around the stock software.
Given the low VM specifications, I couldn't put time into learning Tinycore's quirks or building packages from scratch, so I went with Lubuntu instead. Lubuntu now comes with LXQt and is surprisingly robust.
The default applications on Lubuntu are all Qt apps, fast and memory-efficient. The Falkon web browser is great unless a website uses some weird JS, and since several websites do weird JS stuff, I had to install Firefox too and bump the memory by another 1GB to prevent hang-ups (which still happen with Firefox occasionally).
I wanted to set up a minimalistic pocket environment on a USB flash stick, but eventually ran into the need to re-package some additional applications myself.
For example, I needed to add Maxima, but it was not in their prepackaged repo. Trying to recompile it manually threw some dependency-related errors; a familiar story...
But the idea is still great, and it works well on older boxes. I'll probably revisit it at some point.
The 2008 date is likely when the page was initially authored rather than last updated.
- SLAX was Slackware-based, but it switched to a Debian base (apt)
- The first 'linux-live-kit' was from Slax's author.
- It used a layered filesystem (overlayfs) long before Docker & containers were a thing.
- It had 'persistence' while running off the USB, via a 'file' or a partition. (Ubuntu later did something similar with casper-rw, its name for squashfs+overlay persistence; see the sketch below.)
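For reference, that kind of persistence file is just a fixed-size file with an ext filesystem inside it. A rough sketch for Ubuntu's casper-rw (the size, mount point, and boot-flag handling are assumptions and vary by release):

```
# Create a 512 MB persistence file named casper-rw on the USB stick;
# casper looks for it when the kernel is booted with the "persistent" option.
dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=512
mkfs.ext4 -F /media/usb/casper-rw
```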
Why is the kernel so fat?
The kernel itself is about 1MB.
There were also BeOS and Amithlon, AmigaOS running on x86-32 (IIRC it booted via a patched Linux kernel). Archive.org supposedly preserved this.
20 years ago, one could run a Linux firewall on a floppy. One good thing about that is that a floppy could be physically switched to read-only mode (SD cards also have this feature).
The original Oberon system was something like 12,000 lines of code, something you could imagine reading in its entirety.
IIRC most of the Linux kernel source code is device drivers; the Linux kernel also has a huge amount of functionality, much more than you need just to boot a PC into a GUI. Much of Linux userspace code is bloated libraries with a good deal of duplicated functionality.
Many GUI-based systems from the 1980s and 1990s were very compact by modern standards. (Even a full Smalltalk-80 system image is less than 2MB including the source code.) You can still run RISC OS on the Raspberry Pi to try out a 1980s/90s-style PC OS. Classic text-based UNIX OSes were obviously also dramatically smaller than modern Linux distributions.
Regarding device drivers, I wonder if the BIOS approach is better: assume the hardware vendor supplies the device drivers, and let the OS use those drivers rather than supplying its own. Someone will probably argue "oh, legacy BIOS is 16-bit, doesn't support [caching/acceleration/scatter-gather/async/polling/feature x], or hardware vendors don't know how to write drivers, or it's closed source, or blah blah blah", but that would be missing the point that separating the driver subsystem from the OS might not be such a bad idea.
It's not so much fat as it is feature support and what's enabled by default. Most people want the "default" configuration to work out of the box on whatever computer they have. E.g., if you take a look at "x86_64_defconfig", nearly every Ethernet card that Linux supports is enabled. Odds are you only have one of those in your (desktop) system. However, unless you shopped around for specific NICs, odds are you don't own two systems that use the same Ethernet driver.
For those of us who want to only compile the support for exactly what we use, it is still possible to compile the kernel small enough to boot off of a 1.44 MiB floppy with a minimal initial ramdisk, assuming the system you want to use it on is old. Modern systems require so much more code to use the hardware in them, and as Linux has become the single most used operating system kernel on the planet, the amount of code for device drivers has grown astronomically.
The kernel contains a minimal configuration called "tinyconfig" which turns off literally everything that can be turned off while still producing a functioning kernel. A tinyconfig kernel plus TTY support compiles to about 500 KiB. Enabling all of the drivers to support hardware via the BIOS (AT-style disk control, floppy, display out, etc.) and ELF support yields a kernel of ~790 KiB, a little more than half of your 1.44 MiB floppy. Assuming you can fit useful userspace tools in the remaining space, you could still boot from a floppy.
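If you want to try it, the workflow is roughly the sketch below; which options you re-enable in menuconfig depends entirely on the hardware you're targeting:

```
# Start from the smallest possible configuration, then add back only what you need.
make tinyconfig
# Interactively re-enable TTY, ELF binary support, printk, and whatever
# BIOS-era drivers you actually want.
make menuconfig
# Build and check the compressed image size against the 1.44 MiB budget.
make -j"$(nproc)" bzImage
ls -lh arch/x86/boot/bzImage
```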
Um, perhaps support for all of the hardware, filesystems, network protocols, etc., that did not exist?
If you actually want to boot that on real hardware, you can enable the drivers that are stubs into the BIOS calls for classic devices (disks, serial ports, SVGA graphics, etc.) for another ~100 KiB. It's extremely limited though - no specific PCIe support, ACPI, networking, etc.
Text-only, however, unlike the QNX demo.
Note that there are ports listed on this page, but it doesn't seem to be kept up to date. The Raspberry Pi links lead to the ARMv6 ("piCore") ports and not the AArch64 ("piCore64") ports:
I found the first link from their Raspberry Pi forum:
- A program to build a bootable Linux image from a config file, like NixOS
- The image is tiny, like Alpine Linux
- Support for x86, Raspberry Pi, DigitalOcean, and EC2
There is a project to build bootable NixOS images for Raspberry Pi, but it produces 2GB+ images. The iteration time to build + copy + boot is about 10 minutes, and the images are too large to easily distribute.
Packer has some of these features, but it has to run on an actual EC2/DigitalOcean VM to produce images for them. That's a lot of extra config, potential for cost overruns from failed VM shutdowns, more secrets to manage, and more ACLs to maintain.
Reviewing a generated config file has downsides. The file does not show the context associated with changes. Comments are either unsupported or easily destroyed by the editor. The file format may be confusing or may destroy blame info by putting many options on one line. The file format and how to use the graphical tool are extra chunks of knowledge that engineers must load and maintain.
Buildroot is designed for folks making custom kernels. I don't need that. I want everything related to the hardware to just work. This includes automatically mounting attached network block devices on DigitalOcean & EC2 and configuring wifi on Raspberry Pi.
423kB bzImage, 85s build time
It's easy to set up a weekly cron job and a Samba server on it for some NTFS drives.
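As a rough sketch of what that involves (the schedule, script path, share name, and mount point below are all made up, and smb.conf's location depends on how Samba was installed):

```
# Hypothetical weekly cron entry (Sundays at 03:00); backup.sh is a placeholder script.
0 3 * * 0  /home/tc/backup.sh

# Minimal Samba share for an NTFS disk mounted at /mnt/sda1 (placeholder paths/names).
[ntfs-backup]
   path = /mnt/sda1
   read only = no
   guest ok = yes
```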
We mailed customers USB drives that, when plugged in, would mount and start Tinycore Linux, write the new system to a parallel partition, reboot the machine, and then run a few health checks (roughly sketched below).
Some might say it's hacky (it is/was) but it worked pretty well and allowed for recovery of totally borked systems.
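The update step was conceptually something like this; device names, the image path, and the bootloader handling are placeholders rather than the actual script:

```
#!/bin/sh
# Sketch of the reimage step: write the new system alongside the old one,
# so a failed update still leaves a bootable system on the original partition.
set -e
IMAGE=/mnt/usbstick/new-system.img   # placeholder image shipped on the USB drive
TARGET=/dev/sda3                     # placeholder: the spare "parallel" root partition

dd if="$IMAGE" of="$TARGET" bs=4M
sync

# Point the bootloader at the new partition (mechanism depends on the bootloader),
# then reboot into it; the health checks run on the next boot.
# update_bootloader_default "$TARGET"   # hypothetical helper
reboot
```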
This would be a non-starter for any security-conscious client.
Several of these companies also had strict email attachment filters that we were instructed to get around by just appending `.allow` to the filenames. There was a ton of this kind of stuff that we encountered, which really highlighted that corporate security at many companies is just theatre.
I have no idea if they did that, but mailing USB keys should be fine with the right encryption or validation.
Just a bad idea all around. There should not even be USB ports IMO.
I read it as Integrity and Confidentiality being the only factors here, so USBKill-style stuff (Availability) would just mean the machine gets replaced; Integrity should be handled by the boot process, and Confidentiality should be handled by the embedded OS itself.
>> CorePlus ofers a simple way...
There's absolutely nothing HN can do about it.