Back in the day, some old computers wouldn't boot from a CD drive. I recall that the solution back then was very simple: within the ISO image there was a bootable floppy image, which you could write to a normal floppy disk (actually, IIRC there were usually two such images, one for 1.44 MB floppy drives and another for the older 1.2 MB drives). You would boot and load the kernel from that floppy, but the installer would look for the rest of the files on the CD drive.
The same idea could be used here: for those old computers whose BIOS does not understand how to boot from a USB drive, a small bootable CD image containing only the bootloader and the kernel could be made available, either within the USB image or as a separate download.
I can't help but imagine it as working similar to the Floppotron...
But of course, ugly hacks like this were only ever a peecee problem. Macs could boot CD-ROMs like any other drive for as long as they've had CD-ROMs.
This is a bit ridiculous, since in order to be bootable, a Mac requires the CD-ROM to contain an HFS filesystem.
PCs could also boot "like any other drive" from CD-ROM if they forced the CD-ROMs to contain an MBR.
Actually, that is exactly what El Torito is: putting a boot record on a CD-ROM so that it can boot "like any other drive". The fact that the boot record is most commonly found in an image of a floppy disk is an implementation detail (you can have whatever you want, it's just that a floppy is easier to work with), and hardly "a hack".
"A hack" would be to force the CD-ROM to contain a hybridized HFS filesystem, despite the entire raison d'être for ISO being to avoid having to put multiple filesystems in CDs.
El Torito specifies a bunch of different possible image types, among them floppy images (used by Win9x boot CDs) and native images (used by most Linux bootloaders and by later WinNT versions), and IIRC there's a separate type for UEFI bootloaders.
There's still space left at the front of the CD allowing for boot-sector tricks, which is also how hybrid HFS+ discs work at all (there's space to fit the necessary volume header, and Mac booting uses the HFS+ "blessed file" pointer to find the boot image).
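To make the "boot record on a CD" idea concrete, here's a hedged sketch of how a hybrid BIOS+UEFI El Torito layout is commonly mastered with xorriso. The directory and image filenames are assumptions; the bootloader files would have to actually exist in the tree.

```shell
# Sketch, not a drop-in command: isodir/ is assumed to contain the tree,
# including isolinux/isolinux.bin (BIOS loader) and EFI/efiboot.img
# (a small FAT image holding the UEFI bootloader).
xorriso -as mkisofs -o out.iso \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -eltorito-alt-boot \
  -e EFI/efiboot.img -no-emul-boot \
  isodir/
```

Here `-no-emul-boot` selects the native (no-emulation) entry type; pointing `-b` at a 1.44 MB image without it would give you the floppy-emulation type instead.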
Why, you may ask? Simply because the ISO contains a file larger than 4 GB (install.wim), so you cannot use a FAT32-formatted USB drive (FAT32 is needed for EFI boot). Instead, tools like Rufus provide their own EFI NTFS bootloader.
Interestingly, Microsoft's own USB creator doesn't include install.wim but instead install.esd, which is less than 4 GB, so it can use FAT32. I have no idea why the ISO uses a .wim file whereas the Windows USB creator uses an .esd file. I do know the .esd version is a little slower to install (only a minute or two), I guess because of its higher compression.
This is a pain because making a bootable Windows installer on Linux or macOS is now more complicated than it needs to be: you cannot simply put the ISO onto the USB drive with dd or gdisk as you normally would.
Does anyone know why this is the case? As a workaround you can use Microsoft's USB media creator to build an ISO with the .esd file, then use dd as usual. But of course this requires a Windows system to run the Windows-only USB creation tool, and it's a slower process than just downloading an ISO directly. Annoying.
As for why, my guess is the .iso came from the standard distribution image used on CDs and handed to OEMs and whatnot. ESD doesn't allow direct image modification/slipstreaming, and the official DVDs don't care if the file is >4 GB as long as it fits on a disc.
The Windows USB tool actually creates an ISO from the current update files on the Windows Update servers (for the major release; I don't think it includes minor updates), it's not pulling down a precompiled .iso. As a result, you'll get a different hash each time you make this ISO. Similarly, if you dump to a USB drive, it just pulls the files to put there; it's not extracting a .iso containing an .esd.
As to why they haven't implemented a better way for Linux/Mac users without access to a Windows machine to create a bootable Windows USB? It probably isn't a common enough problem for them to care.
The documented way to work around the 4GB limitation is to use the dism.exe /Split-Image command. This will turn the one large .wim file into multiple smaller files.
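A hedged sketch of that command, run from an elevated prompt on Windows; the drive letters and size threshold here are illustrative, not canonical:

```bat
:: Split install.wim into .swm pieces under ~3800 MB so each fits on FAT32;
:: Windows Setup picks up install.swm, install2.swm, ... automatically.
dism /Split-Image /ImageFile:D:\sources\install.wim /SWMFile:E:\sources\install.swm /FileSize:3800
```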
I ran into this when I was working on SimpleDiskImager. It is my disk image writer that tries to find a balance between dd's literal-minded copying of bytes (which does not correctly handle GPT partitioning) and the opinionated and invasive modifications that Rufus makes (not yet compatible with Secure Boot). I have not yet integrated support for writing Windows installer ISOs, but you can see the start of it in the ConvertEfiIsoToFat directory.
Using dd for Windows images has been a pain for years, even before Windows 10; in the Win8.1 era, manually copying the image contents with rsync to a correctly formatted external drive was the most reliable way for me.
Got a 128GB SanDisk USB flash drive. Used diskpart to clean the disk then created a single NTFS GPT partition. Extracted the current Windows 11 ISO they offer for download onto the drive using 7-Zip.
Tried to boot on a Surface Studio Laptop, a Lenovo ThinkPad X1 Carbon from 2019 and an Alienware M15 from this year. Failed on all three.
Just to be on the safe side I used bootsect to ensure the USB drive was bootable, but it again failed to boot on all three.
I then used the MS Media Creation Tool which happily creates a FAT32 formatted USB drive. This naturally boots fine on all systems.
So while technically my claim that FAT32 is needed for EFI is not correct, the reality is that FAT32 is what you can be [almost?] certain is supported on any system using EFI.
Not even Microsoft's brand-new flagship laptop or a brand-new, high-end gaming laptop from Dell appears to have NTFS support in its EFI.
I wish it were as simple as just extracting the ISO to an NTFS USB drive and booting.
For example, the new Microsoft Surface Studio Laptop I was using a few weeks ago does not support NTFS, and the support documentation gives you instructions to create a FAT32 partition. This is why it's fresh in my mind.
Not a big deal, just annoying.
EasyOS might be targeted elsewhere, compared to, say, Debian/Ubuntu/RHEL?
Even more interesting is EasyOS's official install instructions seem to tell you to download a live CD ISO for an old version, and move from there to the latest version. I guess they're not completely free of needing ISOs just yet.
 Obviously excluding things like virt-install. I'm talking about live booting an interactive OS or installer.
 https://easyos.org/install/easy-frugal-installation.html, search for "Easy live-CD"
I can't recall seeing any generic disk image alternatives to ISOs. It's generally: download a virtual disk image (e.g. Vagrant), and if the image is not in the right format, convert it before attaching.
The only other workflow I use is to netboot with iPXE. All it needs is an http(s) endpoint for the initrd/kernel and away you go, but there is a bit of investment in setting up the initial boot environment.
Forensics still needs the stick, though, although I am sure one of my grandkids knows some super secret trick that does the same thing in half the time/money.
Also, add me to the queue of people who were jazzed about an article detailing someone walking away from ISO specifications. I've lost count of the ISO "specifications" that turned out to be nothing more than freeways into the garage of a sole-source vendor who "maintains" some archaic file specification that apparently I can't do without. And who runs the Working Group. Fancy that! What a co-inky-dink.
And AS9100 governance has gone from "middling poor" to "cannonball dive into the thunder pot" during the course of my professional career, now flirting with "outright extortion" . Now, I do realize AS9100 is SAE and not ISO proper, but a ton of the bad stuff is coming straight from old ISO hands either in their original chairs or having moved around to SAE or IAQG.
Come to find out it was the fact that it was a usb 3.0 flash drive. Grabbed a 2.0 drive and everything worked fine.
Had I had a DVD burner (ironically, the computer in question did have one, but it was not usable till I fixed it) I would have saved myself significant amounts of time. (Or randomly grabbed a crappier flash drive.)
And when creating images, you have to take extra care to zero out unused space, otherwise your dd image will not be compressible.
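A small demonstration of why, using an ordinary file in place of a real device (the filenames are illustrative): random leftovers barely compress, while the same region zeroed out compresses to almost nothing.

```shell
# Stand-in for a partition image full of deleted-file leftovers:
head -c 1048576 /dev/urandom > disk.img
gzip -kf disk.img
stat -c %s disk.img.gz     # roughly the full 1 MiB: random data doesn't compress

# "Zero the unused space" (on a real mounted filesystem you'd do something
# like: dd if=/dev/zero of=/mnt/zerofill; rm /mnt/zerofill; umount /mnt):
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc 2>/dev/null
gzip -kf disk.img
stat -c %s disk.img.gz     # around a kilobyte: zeros compress away
```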
Frankly I just want a small file that I can right-click or drag into Rufus and burn to my USB stick or SD card ASAP. I used to know exactly what was going on when making a bootable floppy: the file containing the bootloader had to be in certain sectors, so you had to create it with FORMAT, but all other files could just be copied on. I don't know if any special sector layout is necessary nowadays; I think EFI boot just requires the EFI directory. And tools like Rufus probably just copy file by file to the USB drive, not sector by sector.
I feel like I'm already getting there with some things and I'm half that age. If something as disruptive as systemd or GNOME 3 came out again, I think I wouldn't even listen to the rationales and just follow the old fork this time around. Things work as they are for me.
Sometimes people complain about the time to learn when actually they just don't wanna.
In our ML data center we used this to iterate on our NixOS images, meaning we could push out updates to the underlying OS and then just trigger a restart via IPMI and make each machine download the new images.
Note that we didn't end up using netboot in the end; we just use iPXE directly.
You can do it with BIOS or UEFI styles, but I've only had success with the BIOS version. I set up my PXE boot environment over a long time, so I don't have a guide, but here's my idea of the steps the boot process takes.
a) (if enabled and selected for booting) the system firmware jumps to the Option ROM on your NIC to boot. These days, almost all cards have a PXE ROM, but if you have an ancient card, it probably has a socket for a boot ROM. Getting that set up is possible but outside my scope.
b) the boot ROM includes a driver for the NIC and usually Intel's PXE software; either way it will do a DHCP (or sometimes BOOTP) request and ask for a 'next server', indicating the platform (x86 BIOS, x86 UEFI, amd64 BIOS, amd64 UEFI, SPARC variations, etc.). The DHCP server will assign an IP and all that junk, but also provide a pointer to the next server (and a path), used to fetch more code to run.
c) the next server is traditionally a TFTP server, and the path may be client- or platform-specific. More modern PXE stacks can do http(s), usually at higher performance.
d) I find it easiest to have the next server point at pxelinux, part of syslinux, which offers a nice menu system and http(s) support that's usually a lot faster than TFTP.
e) from the menu, you can often select options that will load a kernel and modules and boot with a command line; in the boot process, usually all the PXE stuff is terminated or discarded.
f) You can also load disk images (floppy, hard drive, ISO) and boot from them using MEMDISK from syslinux. This comes with some caveats: MEMDISK is BIOS-only; it hooks the (de facto) standard BIOS disk calls to provide access and has no UEFI equivalent (AFAIK, corrections welcome). Also, if you use an OS that doesn't use BIOS disk access (most protected-mode OSes), the OS needs to support MEMDISK or you can't access the images. This means only some Linux images work, although recent images often do. I've made a patch for FreeBSD, but it's been in review for a long time. Older FreeBSD setup images worked because the setup filesystem was a disk image loaded from the disc by the bootloader; you'd need enough memory to have the CD image loaded by MEMDISK, the setup filesystem loaded by the bootloader, and enough left to run the installer, but that's not too bad on today's machines.
So, TL;DR: you need to fiddle with your DHCP server, set up a TFTP server, and copy some files for a boot menu and/or something to boot. And finally, tweak the boot settings on your netbooting device.
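The DHCP/TFTP fiddling above can be sketched as a minimal dnsmasq fragment. This is a sketch assuming proxy-DHCP alongside an existing DHCP server; the subnet, tag names, and loader filenames are illustrative, not a drop-in config.

```
# Answer PXE clients only; leave address leases to the existing DHCP server
dhcp-range=192.168.1.0,proxy
enable-tftp
tftp-root=/srv/tftp
# Hand BIOS clients (client-arch 0) and x64 UEFI clients (client-arch 7)
# different boot loaders, per the platform indication from step b)
dhcp-match=set:bios,option:client-arch,0
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:bios,pxelinux.0
dhcp-boot=tag:efi64,syslinux.efi
```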
Useful things I've done with this include booting Debian installers, using weird 3rd-party software to install Windows XP, booting FreeBSD installers with my patch, running disk firmware updates packaged as Linux ISOs (Intel), and, most fun, running my hobby OS without having to set up a boot disk (it's Multiboot-compatible, so pxelinux can boot it easily).
a) recent versions support HTTPS paths for bootfiles
b) iPXE.efi creates a virtual ramdisk to which it can download files and execute them in the UEFI environment; this can be the UEFI shell or other tools that run under UEFI. Unfortunately, this ramdisk isn't available once UEFI Boot Services are exited (i.e. past the bootloader stage).
Worse, if the setup expected to mount the image once the kernel booted, that's not simple either.
However, booting Linux using ipxe.efi involves downloading the kernel and initrd image(s) to the temporary ramdisk, then booting the kernel as a normal UEFI executable, passing the initrd image(s) as parameters. The UEFI "wrapper" used by the kernel uses UEFI calls to copy the contents of the initrd into kernel-accessible memory before early boot code removes the temporary ramdisk.
But so long as you do not hit ExitBootServices(), ipxe.efi installs a simple filesystem driver that stores everything in memory and mounts an instance of it as "current directory" for all actions in your boot script.
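That flow, as a minimal iPXE script sketch; the server URL, kernel, and initrd filenames are placeholders.

```
#!ipxe
dhcp
# fetch kernel and initrd into the temporary ramdisk over HTTP
kernel http://boot.example.com/vmlinuz console=tty0
initrd http://boot.example.com/initrd.img
boot
```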
If your machines only PXE boot when doing system setup, pwning the image only pwns on next system setup. Which maybe is never for existing systems; when I ran prod systems, most of my prod servers never got resetup unless their root drive gave out, but that was rare, too. So only new machines would get pwnd in a setup like that.
If your machines always PXE boot, pwning the image pwns machines on next boot. Most of my prod machines got rebooted once or twice after they were setup, but if I had an always PXE boot setup, maybe I would run things differently and rebooting might happen more often. Then again, if everything is PXE booted, maybe we're running with a network filesystem for the root drive, and then you'd want to pwn that filesystem/file server to take over. Up to you to figure out if you can update the binary for something that's already running as root to take over, or if you have to wait for something new to spawn as root. On FreeBSD, you can edit a binary on disk while it's running and if you do it wrong (or maybe right), also edit the binary in memory, but if it's a networked filesystem, I dunno if that's still possible. Other systems may not be so capable.
A rogue DHCP server that's faster than the real one can also do nasty things to a PXE client.
There's some mentions of SecureBoot and PXE, so there's probably a way to get things aligned in a secure way, but somehow you've got to enroll your keys (and probably disenroll the default keys). Anyway, that's way outside my expertise.
There's also secure boot etc like the parallel comment said.
But yes, if you can pwn the bootstrap process of anything, you have a lot of power. That's been true since computers were invented.
If the author had taken a bit of their own advice and gone out of their comfort zone, they might not have had so many issues finding where to put the save file years ago! Overall, though, I get it, and image distribution is easier for 99% of cases. I have found it more difficult in 2 cases though; both are cases of "the tooling knows ISO".
The first is IP KVMs: even when they claim to support non-ISO-based virtual drives, you'll often be wishing they actually did. The last one I remember trying to futz with was a Lantronix Spider; it wanted to pass through an entire drive letter, but that wasn't working for whatever reason, and I eventually just made an ISO with the files in it and mounted the file with 2 clicks.
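For reference, wrapping a directory of files into a plain data ISO for virtual media like that is a one-liner. A sketch with genisoimage; the directory name and volume label are illustrative:

```shell
# -J/-r add Joliet and Rock Ridge extensions so long filenames
# survive on both Windows and Unix systems
genisoimage -o virtualmedia.iso -V RESCUE -J -r files/
```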
The second is virtual machines, with a similar story, but this time it's usually possible to qemu-convert or extract into a virtual disk of the right format and you're good to mount the disk. Or maybe you get lucky and the packaged format is just a click or config line away from natively mounting, like assigning an ISO to a virtual CD drive is. A good example of the latter: just today I wanted to boot a live Linux environment to run gparted on an existing guest VM in Parallels on an M1 Mac. I had zero knowledge of (or desire to investigate) how to get an ARM64 image of my preferred distro into whatever format Parallels expects, just to use it for 5 minutes and then never touch that virtual disk again. So when I found that Fedora ships ARM64 ISOs, I was able to easily mount one with a few clicks in the GUI and do my work.
I don't think either of these is at the forefront of Puppy's use cases, but they're worth mentioning in a universal call to retire ISO. It may be an old hodge-podge at this point, but most everything already understands how to work with it as boot media, versus whatever disk image format you choose instead.
Again with this "some tool exists so the point is moot". It's the same kind of excuse manufacturers use to ship a crippled product. Ventoy is a very new tool, and no one puts a link to it next to their ISOs on the download page, so to most people it's... moot.
Why do I need a special tool to enable persistence on my boot media? Imagine flashing an ISO to a thumb drive for the first time and finding out it's read-only; wouldn't you be like "why?"
Booting a PC is a "dog's breakfast". I like the name "EasyOS" and I just wish booting a PC OS was easy, but it's not.
I boot my daily driver off of a camera card.
I also multi-boot.
I can net boot.
I can still burn CDs and DVDs, but would rather not.
I have a 3 1/2 inch drive that connects thru USB in a drawer somewhere.
If it was up to me (and it's not): I would have many memory cards that could be set to Read-Only by physically flipping a tab just like on an old 3 1/2 floppy. The code in fast memory on the memory card starts from an open standard address, a documented place that is absolutely the first and only code that runs when the computer is powered on. The read-only OS is already in memory and running and there is no such thing as boot time. I press the power on switch, and the OS is running instantaneously.
SD cards (not the micro ones) have a slider on the side to make them read-only.
It seems that the developer isn't aware of this; indeed, the developer was using something similar:
> For Linux developers, if you are interested, I have a script for creating a skeleton drive-image file, with a boot-partition and a working-partition, that will boot on either a legacy-BIOS or a modern UEFI PC. There are three scripts, '2createpcskeletonimage', '2createpcskeletonimage-encrypt' and '2createpcskeletonimage-gpt' -- for easyOS I currently use the middle one, which creates a MSDOS partition table and enables ext4 fscrypt in the working-partition. Syslinux is used for legacy-BIOS booting, rEFInd for UEFI booting. These scripts are in the woofQ tarball, available here.
basically these devices are external USB hard disks that can expose a .iso saved inside (selectable among many) as a CD/DVD.
EDIT: sold for £130+. Incredible.
It would be great to have a thingedy that you plug in and that can serve one of multiple ISOs to a device. (Even better if it could emulate a keyboard and automatically go into the BIOS to make the right settings, but I guess such a device would be a security hazard...)
The (very high) price is due, I believe, to the very small quantities in which they are made, but it doesn't seem (to me) like "rocket science"; the basic idea is the same as those floppy disk replacements where you can select the floppy image to run/boot.
Some years ago there was a USB stick with the same capabilities (also expensive, as it was almost hand-made) called isostick, and before that an SD card reader with the same capabilities; for some reason the idea simply didn't become popular/ubiquitous.
 and BTW it has some limits and compatibility issues
In fact this entire culture of lambasting old stuff would be better off being taken out back and shot.
I had one person advocate for a new, shiny piece of software by telling me that what we were using was "too old", because it was from 2012. This was the sum of his technical argument. No actual pros and cons, just an assertion that what we were using was "too old". The conversation took place in 2019. It was not a date dependent issue.
Worse, what he was calling "too old" I didn't actually get to start using until 2014. I've been doing systems stuff since 1999. He essentially called my entire career obsolete because 90% of what I know was invented before 2019.
the mention of specific ages and the tone of it was way off the mark...
The arguments don't really hold much weight either: frustrations rather than roadblocks. Even after reading the article, which I expected to find interestingly informative... and did... I'm not sure there is any sense to be found in the rationale beyond the developer wanting an easier life. That's never a great line to follow in software development, IMO; it's pretty much the opposite of the whole intent.
While not the same as a general purpose OS image, all the major audio Linux distributions I've used (Volumio, moOde, piCorePlayer), which are intended for use on Raspberry Pi or other embedded systems, ship this way rather than as ISOs, and support resizing the OS partition to take the rest of the micro SD card for music storage or anything else. A similar approach for an OS on a bootable USB drive seems very reasonable.
Incidentally, the last time I installed an OS from a CD or DVD was Windows 7 in 2016. Everything since then has been a USB stick, a micro SD card, or a micro SD card in a USB card reader.
I use Ventoy specifically to avoid the issue the article describes with standard ISO files, where the rest of the USB storage is left unused.
That way I can carry and easily update a bunch of installable ISOs on a single USB thumbdrive, just in case I need to install a specific OS from scratch (Ubuntu, Windows, etc) or launch a live recovery environment (SystemRescue, GParted Live, etc).
> This is controversial, however, it is just a different philosophy. The user runs as administrator (root), apps may optionally run as user 'spot' or in containers as a "crippled root" or user 'zeus'. The practical outcome is that you never have to type "sudo" or "su" to run anything, nor get hung up with file permissions.
This is a horrible idea. It's not a "different philosophy", it's just incredibly dumb and insecure, especially if you're trying to market your distro to the average user. It's even worse than Windows. Using this logic, why don't they do `chmod -R 777 /` then? See how ridiculous that sounds? Exactly my point.
They're basically just doing some buzzword marketing by saying "we support containers!" but it doesn't say if any services actually run in containers by default. And even if they do, the default setup is still extremely insecure, making users run every command as root by default.
> No systemd, and it is also worth noting what else Easy does not have: policykit, pam, pulseaudio, avahi. The Dunfell-series doesn't have the Qt libraries, but that may change.
I can understand not having systemd, for me it's both a blessing and a curse. But the lack of PAM will break a lot of things like Google Authenticator/TOTP and YubiKey support. Arguably, most average users won't need those (though IMO 2FA should be more normalized), but then again as I've already said the "EasyOS" name seems to be self-contradictory, demanding a bunch of things that power users may be fine with but average users will just get confused by.
> When someone boots up Easy, they will see that the menu (bottom-left of screen) is totally different from what they are accustomed to. Ditto the folder hierarchy. The thing is, keep an open mind -- it is very easy to adjust, and there are solid reasons for the differences.
Again, if you're trying to market to the average user this is nonsensical and just serves to confuse them even more. Sure, power users may be able to adjust more easily, but then don't call it "EasyOS".
> Ditto. The kernel-assigned names for drives and partitions are retained. For example drive sda and partition sda1.
Which distribution actually does this? I'm aware of it being done for network interfaces, but I haven't seen such a thing for drive names yet. (That said, I do believe that the new "predictable" network interface names are stupid and just cause more issues than they solve. Maybe it works well to ensure consistency if you have two network cards, but the whole thing falls apart as soon as you change the hardware.)
Article was about ceasing use of .iso files for booting OSes from USB, and a replacement that makes a lot more sense.
Hold on a sec...so the conclusion is: screw those old timers?
99% of Linux installs are done on some hypervisor... ISO files are absolutely awful for this use case. Yet everyone keeps using them because the tooling is already jumping through thousands of hoops to make it work.
Vintage PCs don't really have USB, but well, in his defense, people working with vintage PCs are already used to extra steps (e.g. if the PC has token ring and they want to connect it to Ethernet), so they'll need to add the extra step of "Write contents of USB image to ISO"...
So to me it sounded more like admitting defeat for now with two people give or take, and letting the rest enjoy the benefits of the upgrade.
Isn't it a volunteer project? If he doesn't want to support it, then he doesn't want to support it, end of conversation. If the old timers want support for a legacy format they can do it themselves.
In the extreme, a dev might think that ratio is 0 and a user might think it's a large number. (People arguing that ISOs are already widely supported and useful are implicitly attaching a large value to that ratio.)
I don't think there's an objectively correct value that can be assigned to that ratio, hence disagreements about whether Really Old Thing should be supported.
But there's no obligation, just expectations and communities.