Why I stopped releasing EasyOS as an ISO file (bkhome.org)
127 points by gbrown_ 48 days ago | 110 comments



> Oh yes, there are some old computers that won't boot from USB

Back in the day, there were some old computers that wouldn't boot from a CD drive. I recall that the solution back then was very simple: within the ISO image, there was a bootable floppy image, which you could write to a normal floppy disk (actually, IIRC there were usually two such images, one for 1.44 MB floppy drives and another for the older 1.2 MB drives). You would boot and load the kernel from that floppy, but the installer would look for the rest of the files on the CD drive.

The same idea could be used here: for those old computers whose BIOS does not understand how to boot from a USB drive, a small bootable CD image containing only the bootloader and the kernel could be made available, either within the USB image or as a separate download.


I used to use a boot CD called "plop boot manager" for this purpose:

https://www.plop.at/en/bootmanager/download.html


For a while, in 1994, I had some weird SCSI card (soundblaster with SCSI, I think) that couldn't boot Linux off a hard drive. (It probably could, I just couldn't get it to work...) So I just left a floppy with a kernel in the drive. The system was generally running 24/7, but I wanted it there in case it did reboot...


> some weird SCSI card (soundblaster with SCSI, I think)

I can't help but imagine it working similarly to the Floppotron...

https://www.youtube.com/watch?v=oGfkPCZYfFw


ISO 9660 discs still have bootable floppy images in them. The El Torito standard for bootable CDs specifies that a floppy image be incorporated into the disc image. The PC BIOS knows how to find this image, load it into memory, and boot it as a virtual floppy.

But of course, ugly hacks like this were only ever a peecee problem. Macs could boot CD-ROMs like any other drive for as long as they've had CD-ROMs.


> But of course, ugly hacks like this were only ever a peecee problem. Macs could boot CD-ROMs like any other drive for as long as they've had CD-ROMs.

This is a bit ridiculous, since in order to be bootable, a Mac requires the CD-ROM to contain an HFS filesystem. PCs could also boot "like any other drive" from CD-ROM if they forced the CD-ROMs to contain an MBR.

Actually, that is exactly what El Torito is: putting a boot record on a CDROM, so that it can boot "like any other drive". The fact that the boot record is most commonly found in an image of a floppy disk is an implementation detail (you can have whatever you want, it's just that floppy is easier to work with), and hardly "a hack".

"A hack" would be to force the CD-ROM to contain a hybridized HFS filesystem, despite the entire raison d'être for ISO being to avoid having to put multiple filesystems in CDs.


El Torito doesn't include a normal boot record on the CD-ROM. Placing an MBR or similar is a hack used, among other things, to support the ability to "burn" the image to a drive that is not a CD/DVD.

El Torito specifies a bunch of different possible images, among them floppy images (used by Win9x boot CDs) and native images (used by most Linux bootloaders and by later WinNT versions), and IIRC there's a separate type for UEFI bootloaders.

There's still space left at the front of the CD allowing for boot-sector tricks, which is also how hybrid HFS+ discs work at all (there's space to fit the necessary volume header, and Mac booting uses the HFS+ "blessed file" pointer to find the boot image).


It's actually one of three possible images: floppy, HD MBR, or raw binary.


That's totally true. You can use Plop Boot Manager to boot any CD, external hard drive, or USB stick from a floppy disk.


If we're doing two phases, I'd much rather the first stage be the CD drive and the second stage be over the network


I have a feeling the people with these issues wouldn’t appreciate their (likely limited) networks being saturated.


Somewhat related… for a while now Microsoft have provided publicly downloadable ISO images for Windows 10 that you can't image to a USB drive without special EFI NTFS software from a third party, which in turn requires disabling Secure Boot (as it's not Secure Boot signed)

Why, you may ask? Simply because the ISO contains a file larger than 4GB (install.wim), so you cannot use a FAT32-formatted USB drive (FAT32 is needed for EFI boot). Instead, tools like Rufus provide their own EFI NTFS boot loader.

Interestingly, Microsoft's own USB creator doesn't include install.wim but instead install.esd, which is less than 4GB and so can use FAT32. I have no idea why the ISO uses a .wim file whereas the Windows USB creator uses an .esd file. I do know the .esd version is a little slower to install (only a minute or two), I guess because of higher compression.

This is a pain because making a bootable Windows installer on Linux or macOS is now more complicated than it needs to be, as you cannot simply put the ISO onto the USB drive with dd or gdisk as you would normally do.

Does anyone know why this is the case? As a workaround, you can use Microsoft's USB media creator to build an ISO with the .esd file and then use dd as usual. But of course this requires a Windows system to run the Windows-only USB creation tool, and it's a slower process than just downloading an ISO directly. Annoying.
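
For reference, the usual dd invocation that works for most Linux ISOs but not for these Windows ones (target device name is illustrative; double-check it before running, since dd overwrites it):

    sudo dd if=linux.iso of=/dev/sdX bs=4M status=progress conv=fsync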


On Linux, look at wimsplit, which will convert the .wim into a series of smaller .swm files.
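
A sketch of the invocation, assuming wimlib is installed (paths illustrative; the size is in MiB, chosen to stay under FAT32's 4GB file limit):

    # produces install.swm, install2.swm, ... each under 3800 MiB
    wimsplit sources/install.wim sources/install.swm 3800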

As for why: my guess is the .iso came from the standard distribution image used on CDs and handed to OEMs and whatnot. ESD doesn't allow direct image modification/slipstreaming, and the official DVDs don't care if the file is >4GB as long as it fits on a disc.

The Windows USB tool actually creates an ISO based on the current update files on the Windows Update servers (for the major releases; I don't think it includes minor updates); it's not pulling down a precompiled .iso. You'll actually get a different hash whenever you make this ISO as a result. Similarly, if you dump to a USB drive, it's just pulling the files to put there, not extracting an .iso containing an .esd.

As to why they haven't implemented a better way for Linux/Mac users without access to a Windows machine to create a bootable Windows USB? It probably isn't a common enough problem for them to care.


Have you tried ventoy? It’s worked with everything I’ve thrown at it.

https://github.com/ventoy/Ventoy


Thanks for posting this. I'm about to build a PC and you just saved me from having to buy a second usb stick just for booting an OS installer


except macOS; I know it's illegal, but if only...


A breach of licensing contracts is not "illegal".


I've been using an IODD for years, works on Macs too. I have every macOS release back to 10.6 as a bootable iso.


EDIT: I just realized none of the below helps you if you are not on Windows =(

The documented way to work around the 4GB limitation is to use the dism.exe /Split-Image command. This will turn the one large .wim file into multiple smaller files.

https://docs.microsoft.com/windows-hardware/manufacture/desk...
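
The command from that page looks like this (drive letters and part size illustrative):

    Dism /Split-Image /ImageFile:D:\sources\install.wim /SWMFile:E:\sources\install.swm /FileSize:3800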

I ran into this when I was working on SimpleDiskImager. It is my disk image writer that tries to find a balance between dd's literal-minded copying of bytes (which does not correctly handle GPT partitioning) and the opinionated and invasive modifications that Rufus makes (not yet compatible with Secure Boot). I have not yet integrated support for writing Windows installer ISOs, but you can see the start of it in the ConvertEfiIsoToFat directory.

https://github.com/AustinWise/SimpleDiskImager


Yeah, I noticed this. I tried using dd to 'burn' the Windows 11 ISO to a USB stick and it didn't work. I had to use Rufus, as per MS's instructions. It's very weird to me that they include a third-party app in this workflow.


It Just Works (tm) with Ventoy.


I've had success using exFAT on my Mac for installing Windows, but I think that support for this in EFI on most hardware may be even less common than NTFS support in EFI.

Using dd for Windows images has been a pain for years, even before Windows 10. In the Win8.1 era, the most reliable way for me was to manually copy the image contents with rsync to a correctly formatted external drive.


Modern PCs made in the last 5+ years or so come with EFI drivers to read NTFS partitions. FAT32 is not "needed for EFI boot". You can extract the ISO (e.g. with 7z) onto a USB stick and you're done in most cases.
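
E.g., with the stick mounted (mount point illustrative):

    7z x Win11.iso -o/run/media/user/WINUSB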


The spec allows it, but whether FAT32 is needed depends solely on the board, not its age. I've got plenty of recent boards that don't take NTFS/exFAT; I usually find out when I'm scratching my head as to why the drive doesn't show up in the boot menu.


Out of interest I did a little test this afternoon.

Got a 128GB SanDisk USB flash drive. Used diskpart to clean the disk, then created a single NTFS GPT partition. Extracted the current Windows 11 ISO they offer for download onto the drive using 7-Zip.
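
For reference, the diskpart steps were roughly these (disk number illustrative; "clean" wipes the drive):

    diskpart
    DISKPART> select disk 2
    DISKPART> clean
    DISKPART> convert gpt
    DISKPART> create partition primary
    DISKPART> format fs=ntfs quick
    DISKPART> assign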

Tried to boot on a Surface Studio Laptop, a Lenovo ThinkPad X1 Carbon from 2019 and an Alienware M15 from this year. Failed on all three.

Just to be on the safe side I used bootsect to ensure the USB was bootable, but again it failed to boot on all three.

I then used the MS Media Creation Tool which happily creates a FAT32 formatted USB drive. This naturally boots fine on all systems.

So while it was technically incorrect of me to say FAT32 is needed for EFI, the reality is that FAT32 is the only thing you can be [almost?] certain is supported on any system using EFI.

Even Microsoft's brand new flagship laptop and a brand new, high-end gaming laptop from Dell don't appear to have NTFS support in their EFI.

I wish it were as simple as just extracting the ISO to an NTFS USB drive and booting.


Thanks for that data. I personally haven't run into any unsupported machines but I have since learned Microsoft's own machine (the laptop you mentioned) doesn't have the EFI driver. That's quite sad. I'm going to make some noise about this in back channels.


Some systems do indeed come with NTFS EFI support. However it isn’t a common thing.

For example the new Microsoft Surface Studio Laptop I was using a few weeks ago does not support NTFS and the support documentation gives you instructions to create a FAT32 partition. This is why it’s fresh in my mind.

Not a big deal, just annoying.


>Installing Windows on actual hardware and not just running a VM with virGL.


I haven't touched a physical optical drive in many years, but I still use ISOs regularly for building virtual machine images or booting servers into something quickly. I can get away with plain PXE/netboot for distros whose packaging I'm familiar with, but an ISO is the distro packaging unit that I know will work wherever it is needed.

EasyOS might be targeted elsewhere, compared to, say, Debian/Ubuntu/RHEL?


That's an interesting point. It's easy to mount an ISO in VirtualBox, but I don't see a way to mount any sort of "virtual USB flash drive". I wonder what's the easiest[1] way to install an OS in a VM if you don't have an ISO.

Even more interesting, EasyOS's official install instructions[2] seem to tell you to download a live-CD ISO for an old version and move from there to the latest version. I guess they're not completely free of needing ISOs just yet.

[1] Obviously excluding things like virt-install. I'm talking about live booting an interactive OS or installer.

[2] https://easyos.org/install/easy-frugal-installation.html, search for "Easy live-CD"


> I wonder what's the easiest[1] way to install an OS in a VM if you don't have an ISO

I can't recall seeing any generic disk image alternatives to ISOs. It's generally: download a virtual disk image (e.g. Vagrant), and if the image is not in the right format, convert it before attaching.
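
The conversion step is usually a one-liner; e.g., turning a raw USB image into something QEMU or VirtualBox will attach (file names illustrative):

    qemu-img convert -f raw -O qcow2 easy.img easy.qcow2
    VBoxManage convertfromraw easy.img easy.vdi --format VDI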

The only other workflow I use is to netboot with iPXE [0]. All it needs is an http(s) endpoint for the initrd/kernel and away you go, but there is a bit of investment in setting up the initial boot environment.
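
A minimal iPXE boot script has this shape (URLs illustrative):

    #!ipxe
    dhcp
    kernel http://boot.example.com/vmlinuz initrd=initrd.img
    initrd http://boot.example.com/initrd.img
    boot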

[0]: https://ipxe.org/


I ran headfirst into this "non-writeable" business a few years back. For what I was doing, though, I went straight into the virtualization thing, which seems awful popular with the kids these days.

Forensics still needs the stick, though, although I am sure one of my grandkids knows some super secret trick that does the same thing in half the time/money.

Also, add me to the queue of people who were jazzed about an article detailing someone walking away from ISO specifications. I've lost count of the ISO "specifications" that turned out to be nothing more than freeways into the garage of a sole-source vendor who "maintains" some archaic file specification that apparently I can't do without. And who runs the Working Group. Fancy that! What a co-inky-dink.

And AS9100 governance has gone from "middling poor" to "cannonball dive into the thunder pot" during the course of my professional career, now flirting with "outright extortion". Now, I do realize AS9100 is SAE and not ISO proper, but a ton of the bad stuff is coming straight from old ISO hands, either in their original chairs or having moved around to SAE or IAQG.


I have to say the reason I still want an ISO is that at least I have a much higher guarantee it will work. I just spent about 4 hours this week trying to get Clonezilla to boot from USB and could not get it to work. EFI. Legacy. Everything I could think of. And this was on a reasonably modern server (a 2015/16-ish Lenovo with an E3-1270 v5).

Come to find out it was the fact that it was a usb 3.0 flash drive. Grabbed a 2.0 drive and everything worked fine.

Had I had a DVD burner (ironically, the computer in question did have one, but it was not usable till I fixed it) I would have saved myself a significant amount of time. (Or I could have randomly grabbed a crappier flash drive.)


Yes, I've found USB 3 and above to be significantly less compatible and reliable than 2.0 and below. I can always trust that whatever I have plugged into a 2.0 port will work the moment the PC comes on, be it a flash drive or a DVD drive or a keyboard/mouse. That isn't always the case with a 3.0 port.


The reason for problems like you describe is that on many motherboards, ports with later USB versions were implemented with discrete USB host controllers or even hubs, while ports of older standards were integrated into the southbridge. This is one of the reasons why the first thing I do when encountering a previously unexplored PC is look at the motherboard block diagram in its manual to understand how the ports are routed.


It was a usb 3 port. It just would not see it to boot.


I have the opposite experience wrt. ISOs and plain (dd) images. If I burn an ISO to a USB stick, I can use the whole stick. But if I burn a dd image to the stick, the size must match exactly; otherwise I have to futz around with fdisk and resize2fs. Sometimes a 4GB stick has less capacity than another 4GB stick, so you can't restore.

And when creating images, you have to take extra care to zero out unused space, otherwise your dd image will not be compressible.
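
The usual trick, for reference (device and mount point illustrative):

    mount /dev/sdX2 /mnt
    dd if=/dev/zero of=/mnt/zero.fill bs=1M   # runs until the filesystem is full, then errors out
    rm /mnt/zero.fill
    umount /mnt
    dd if=/dev/sdX bs=4M | gzip > image.img.gz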

Frankly, I just want a small file that I can right-click or drag into Rufus to burn my USB stick or SD card ASAP. I used to know exactly what was going on when making a bootable floppy: the file containing the bootloader had to be in certain sectors, so you had to create it with FORMAT, but all other files could just be copied on. I don't know if any special sector layout is necessary nowadays; I think EFI boot just requires the EFI directory. And tools like Rufus probably just copy file by file to the USB drive, not sector-wise.


For EFI it's just the directory, and probably using FAT32. And if there's a partition table, it should be marked with the right type. But there are no magic files or sectors.
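
Specifically, the firmware's fallback boot path on removable media is fixed by the UEFI spec:

    /EFI/BOOT/BOOTX64.EFI    (64-bit x86 loader)
    /EFI/BOOT/BOOTIA32.EFI   (32-bit x86 loader)
    /EFI/BOOT/BOOTAA64.EFI   (64-bit ARM loader)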


You should try Ventoy


> Despite very careful instruction, and even writing down simple steps to make a call, etc., I could see her eyes glazing over. She just didn't want to know, and after awhile I had to buy her another keyboard flip-phone.

I feel like I'm already getting there with some things and I'm half that age. If something as disruptive as systemd or GNOME 3 came out again, I think I wouldn't even listen to the rationales and just follow the old fork this time around. Things work as they are for me.


Yep, I think there's just an inverse relationship between age and how large the change needs to be for you to give it a pass.


Also the change has to bring some added value. When you get older time becomes scarce and you don't have time to test every shiny new toy and fix its possible issues.


Also, as you get older, you realize how much of this churn is change for change's sake, bringing you no real direct improvement to your quality of life. In IT in particular, we like to reinvent the wheel every few years, because a new generation of hungry ninjas thinks they've discovered a new type of fire.


The time to learn depends on how much you do with a device. If it might take 20 minutes to learn some ass-backwards way to do a phone call, but that's all I need to do, then it's more hassle to go get a different device.

Sometimes people complain about the time to learn when actually they just don't wanna.


For those that don't even want to download and dd to a usb stick any more (or only do it once!), check out https://netboot.xyz/. I've actually gone ahead and flashed it over my network card's stock PXE booter, and so now booting into a live image to fix something is completely trivial.


Every time there's a conversation about ISOs and booting, someone mentions netboot. I still don't understand at a granular level how it works, and the native documentation, at least to me, appears to be written in Greek. I can't make heads nor tails of it. Do you have a guide you followed that explained how it worked and what to do?


It basically bundles iPXE, which allows one to boot over the network, including Ethernet and I think WiFi (although WiFi depends on drivers). It downloads a bootable image and chain-boots into it.

We used it in our ML data center to iterate on our NixOS images, meaning we could push out updates to the underlying OS, then just trigger a restart via IPMI and have it download the new images.

Note that we didn't end up using netboot in the end; we just use iPXE directly.
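
The chainloading step itself is tiny; an iPXE script that hands off to the netboot.xyz menu is roughly:

    #!ipxe
    dhcp
    chain https://boot.netboot.xyz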


PC Network booting these days is usually PXE booting.

You can do it with BIOS or UEFI styles, but I've only had success with the BIOS version. I set up my PXE boot environment over a long time, so I don't have a guide, but here's my idea of the steps the boot process takes.

a) (if enabled and selected for booting) the system firmware jumps to the Option ROM on your NIC to boot. These days, almost all cards have a PXE ROM, but if you have an ancient card, it probably has a socket for a boot ROM. Getting that set up is possible but outside my scope.

b) the boot rom includes a driver for the NIC and usually Intel's PXE software; either way it will do a DHCP (or sometimes BOOTP) request and ask for a 'next server' and indicate the platform (x86 bios, x86 uefi, amd64 bios, amd64 uefi, sparc variations, etc, etc). The DHCP server will assign an IP and all that junk, but also provide a pointer to the next server (and a path), used to fetch more code to run.

c) next server is traditionally a TFTP server, and the path may be client or platform specific. More modern pxe stacks can do http(s), usually at higher performance

d) I find it easiest to have the next server point at pxelinux, part of syslinux, which offers a nice menu system and http(s) support that's usually a lot faster than tftp

e) from the menu, you can often select options that will load a kernel and modules and boot with a command line; in the boot process, usually all the PXE stuff is terminated or discarded.

f) You can also load disk images (floppy, hard drive, ISO) and boot from them using MEMDISK from syslinux. This comes with some caveats: MEMDISK is BIOS only; it hooks the (de facto) standard BIOS disk calls to provide access and doesn't have a UEFI equivalent (AFAIK, corrections welcome). Also, if you use an OS that doesn't use BIOS disk access (most protected-mode OSes), it needs to support MEMDISK or you can't access the images. This means only some Linux images work, although recent images often do. I've made a patch for FreeBSD, but it's been in review for a long time. Older FreeBSD setup images worked because the setup filesystem was a disk image loaded from the disc by the bootloader; you'd need enough memory to have the CD image loaded by MEMDISK, the setup filesystem loaded by the bootloader, and enough left over to run the installer, but that's not too bad on today's machines.

So, TL;DR: you need to fiddle with your DHCP server, set up a TFTP server, and copy some files for a boot menu and/or something to boot. And finally, tweak the boot settings on your netbooting device.
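
Concretely, the two config pieces might look roughly like this (IPs, paths, and the MEMDISK entry are illustrative):

    # ISC dhcpd.conf: hand clients an IP plus the next-server and boot file
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;
      next-server 192.168.1.10;      # the TFTP server
      filename "pxelinux.0";         # boot file for BIOS clients
    }

    # pxelinux.cfg/default: a menu entry booting an ISO via MEMDISK (BIOS only)
    LABEL rescue
      KERNEL memdisk
      INITRD images/rescue.iso
      APPEND iso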

Useful things I've done with this include booting Debian installers, using weird 3rd-party software to install Windows XP, booting FreeBSD installers with my patch, running disk firmware updates packaged as Linux ISOs (Intel), and, most fun, running my hobby OS without having to set up a boot disk (it's multiboot compatible, so pxelinux can boot it easily)


A useful thing with UEFI PXE booting is that

a) recent versions support HTTPS paths for bootfiles

b) iPXE.efi creates a virtual ramdisk to which it can download files and execute them in the UEFI environment; this can be the UEFI shell or other tools that can run under UEFI. Unfortunately this ramdisk isn't available once UEFI Boot Services are stopped (i.e. past the bootloader stage).


For b), I don't see anything in the iPXE documentation about a ramdisk? All the booting examples I've seen use a kernel and initrd or modules, etc. This is inconvenient if you just want to support booting from released disk images, because you've got to extract the files from the ISOs, and there's no standard.

Worse, if the setup expected to mount the image once the kernel booted, that's not simple either.


Because the ramdisk provided by ipxe.efi can't be mounted by the kernel, as it disappears once ExitBootServices is called early in the kernel boot code.

However, booting Linux using ipxe.efi involves downloading the kernel and initrd image(s) to the temporary ramdisk, then booting the kernel as a normal UEFI executable, passing the initrd image(s) as parameters. The UEFI "wrapper" used by the kernel makes UEFI calls to copy the contents of the initrd into kernel-accessible memory before early boot code removes the temporary ramdisk.

But as long as you do not hit ExitBootServices(), ipxe.efi installs a simple filesystem driver that stores everything in memory and mounts an instance of it as the "current directory" for all actions in your boot script.


So does this mean that pwning the PXE image server basically pwns the entire organization?


Depends on your setup and timing.

If your machines only PXE boot when doing system setup, pwning the image only pwns on the next system setup. Which may be never for existing systems; when I ran prod systems, most of my prod servers never got re-setup unless their root drive gave out, and that was rare, too. So only new machines would get pwned in a setup like that.

If your machines always PXE boot, pwning the image pwns machines on the next boot. Most of my prod machines got rebooted once or twice after they were set up, but if I had an always-PXE-boot setup, maybe I would run things differently and rebooting might happen more often. Then again, if everything is PXE booted, maybe we're running with a network filesystem for the root drive, and then you'd want to pwn that filesystem/file server to take over. Up to you to figure out whether you can update the binary for something that's already running as root to take over, or whether you have to wait for something new to spawn as root. On FreeBSD, you can edit a binary on disk while it's running, and if you do it wrong (or maybe right), also edit the binary in memory; but if it's a networked filesystem, I dunno if that's still possible. Other systems may not be so capable.

A rogue DHCP server that's faster than the real one can also do nasty things to a PXE client.

There are some mentions of Secure Boot and PXE, so there's probably a way to get things aligned in a secure way, but somehow you've got to enroll your keys (and probably disenroll the default keys). Anyway, that's way outside my expertise.


It could, but you have to consider your threat model. Which is more of a risk: a PXE box with logging and other auditing mechanisms that isn't exposed to the internet, or random data center ops people running around with USB drives doing who knows what?

There's also secure boot etc like the parallel comment said.

But yes, if you can pwn the bootstrap process of anything, you have a lot of power. That's been true since computers were invented.


> There are some multi-boot tools, that enable putting many ISO files on the one USB-stick. I have never used them, and don't see the point of them, unless you are a crazed disto-hopper.

If the author had taken a bit of their own advice and gone out of their comfort zone, they might not have had so many issues finding where to put the save file years ago! Overall, though, I get it, and image distribution is easier in 99% of cases. I have found it more difficult in two cases, though; both are cases of "the tooling knows ISO".

The first is IP KVMs: even when they support non-ISO-based virtual drives, you'll often end up wishing for ISO anyway. The last one I remember trying to futz with was a Lantronix Spider; it wanted to pass through an entire drive letter, but that wasn't working for whatever reason, and I eventually just made an ISO with the files in it and mounted the file with two clicks.

The second is virtual machines, with a similar story, but this time it's usually possible to qemu-img convert, or extract and copy into a virtual disk of the right format, and you're good to mount the disk; or maybe you get lucky and the packaged format is just a click or a config line away from mounting natively, the way assigning an ISO to a virtual CD drive is. A good example of the latter: just today I wanted to boot a live Linux environment to run GParted against an existing guest VM in Parallels on an M1 Mac. I had zero knowledge of (or desire to investigate) how to get an ARM64 image of my preferred distro into whatever format Parallels expects, just to use it for 5 minutes and then never touch that virtual disk again. So I found that Fedora ships ARM64 ISOs and was able to easily mount one with a few clicks in the GUI and do my work.

I don't think either of these is at the forefront of Puppy's use cases, but they're worth mentioning in a universal call to retire ISO. It may be an old hodge-podge at this point, but most everything already understands how to work with it as boot media, versus whatever disk image format you choose instead.


Funny how the author complains about ISOs making the rest of the drive unusable and then says they see "no point" in multi-boot tools. That is exactly one of the use cases multi-boot tools like Ventoy support: being able to store other data on the drive. (The only caveat is that you must format the whole drive to install Ventoy in the first place, but after that you're free to store both ISO and non-ISO files on it.)
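
For reference, installing it from the released tarball is roughly (device name illustrative; this wipes the stick):

    sudo sh Ventoy2Disk.sh -i /dev/sdX
    # then mount the large exFAT partition and copy ISOs (and any other files) onto it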


Ventoy even explicitly supports persistent data for the ISOs, defeating his argument about being unable to store the OS's data on the flash drive: https://www.ventoy.net/en/plugin_persistence.html


The point is that you should use a mechanism that allows it without some magical tool. Ventoy wasn't a thing until recently, and what they said was true, and is still true, for people who just go to some homepage, download an ISO, and flash it. Not a lot of people know about Ventoy; they would be stuck with non-persistent media.


Persistence isn't anything new or revolutionary. Rufus and other tools have supported persistence with ISOs just fine for ages. Ventoy may support persistence too, but its main selling point is being able to put multiple ISOs and whatever other files you want on the USB.


Having usable free space on the drive is not the same as having persistence in the live OS. You still have to manually copy files from the RAM disk to your drive. I think Ventoy even supports actual persistence, but it's not the same as getting it by simply flashing an image.


It depends on your use case. The author first goes on talking generally about the rest of the drive being unusable, and only then mentions persistence. But Ventoy supports both, so either way the point is moot.


> Ventoy supports both, so either way the point is moot.

Again with this "some tool exists, so the point is moot". It's the same kind of excuse manufacturers use to ship a crippled product. Ventoy is a very new tool, and no one puts a link to it next to their ISOs on the download page, so to most people it's... moot.

Why do I need a special tool to enable persistence on my boot media? Imagine flashing an ISO onto a thumb drive for the first time and finding out it's read-only; wouldn't you be like "why?"


You don't need Ventoy to enable persistence. I'm pretty sure Rufus supports it as well, which is far more popular. Ventoy's main selling point is that you can put multiple ISOs and other files on the same USB.


Rufus doesn't, but that's not the point; I'm sure there's more than one tool out there that does it. But there shouldn't have to be any tool. Linux distros should be packaged in such a way that you flash them with the default command that everyone has, and you get a non-crippled OS.


Hello, yes I am a crazed disto-hopper.

Booting a PC is a "dog's breakfast." I like the name "EasyOS" and I just wish booting a PC OS was easy, but it's not.

I boot my daily driver off of a camera card.

I also multi-boot.

I can net boot.

I can still burn CDs and DVDs, but would rather not.

I have a 3 1/2 inch drive that connects thru USB in a drawer somewhere.

If it were up to me (and it's not): I would have many memory cards that could be set to Read-Only by physically flipping a tab just like on an old 3 1/2 floppy. The code in fast memory on the memory card would start from an open, standard address, a documented place that is absolutely the first and only code that runs when the computer is powered on. The read-only OS is already in memory and running, and there is no such thing as boot time. I press the power-on switch, and the OS is running instantaneously.


> I would have many memory cards that could be set to Read-Only by physically flipping a tab just like on an old 3 1/2 floppy.

SD cards (not the micro ones) have a slider on the side to make them read-only.


That's host-enforced write protection, which is worthless for protecting against malicious modification of contents (because a compromised host can just ignore it).


Ooh, TIL, thanks. :)


The solution is to simply use a live-USB tool that creates a usable partition in the rest of the device. There are plenty; for example, dus.

It seems that the developer isn't aware of this; indeed, the developer was using something similar:

> For Linux developers, if you are interested, I have a script for creating a skeleton drive-image file, with a boot-partition and a working-partition, that will boot on either a legacy-BIOS or a modern UEFI PC. There are three scripts, '2createpcskeletonimage', '2createpcskeletonimage-encrypt' and '2createpcskeletonimage-gpt' -- for easyOS I currently use the middle one, which creates a MSDOS partition table and enables ext4 fscrypt in the working-partition. Syslinux is used for legacy-BIOS booting, rEFInd for UEFI booting. These scripts are in the woofQ tarball, available here.
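
Not the author's actual woofQ script, but a rough sketch of the same two-partition layout using standard tools (sizes and names illustrative):

    truncate -s 2G easy.img
    parted -s easy.img mklabel msdos \
        mkpart primary fat32 1MiB 512MiB set 1 boot on \
        mkpart primary ext4 512MiB 100%
    LOOP=$(sudo losetup --show -fP easy.img)
    sudo mkfs.vfat -F 32 ${LOOP}p1    # boot partition (syslinux/rEFInd files go here)
    sudo mkfs.ext4 ${LOOP}p2          # working partition
    sudo losetup -d $LOOP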


There's something to be said for being able to boot from a read-only medium.


Unfortunately one of the most clever devices around is rather uncommon: the Iodd/Zalman thingy. For the record, this should be the latest version:

https://www.zalman.com/EN/Product/ProductDetail.do?pageIndex...

basically these devices are external USB hard disks that can expose a .iso saved inside (selectable among many) as a CD/DVD.


There is also https://www.drivedroid.io/ which emulates this behavior on a (rooted) Android device


Why buy this unnecessary gadget when Ventoy exists? Most clever? That product page looks like satire.

EDIT: sold for £130+. Incredible.


I could imagine this being useful for our company. We need to provision the computers we make. Currently somebody runs to them and boots them with a USB stick or via PXE, or we clone the disks. But it gets cumbersome to manage all the sticks, and cloning is difficult if the device is provisioned in a different factory than where it is assembled. PXE is good for managing the images, but we need to connect a keyboard to start the installation (even though most PCs have touchscreens, but PXELinux doesn't support touch AFAIK). (I had the idea to install a panel where you could preselect which image gets sent to which "PXE cable", but we rejected that as too cumbersome.)

It would be great to have a thingedy that you plug in and that can serve one of multiple ISOs to a device. (Even better if it could emulate a keyboard and automatically go into the BIOS to make the right settings, but I guess such a device would be a security hazard...)


Ventoy is a very good software solution, and not the only one[1]; we also have Easy2Boot and, for the more technical ones, "plain" grub4dos/grub4UEFI.

The (very high) price is due, I believe, to the very small numbers in which they are made, but it doesn't seem (to me) like "rocket science"; the basic idea is the same as those floppy disk replacements where you can select the floppy image to run/boot.

Some years ago there was a USB stick with the same capabilities (also expensive, as it was almost hand-made) called isostick, and before that an SD card reader with the same capabilities; for some reason the idea didn't become popular/ubiquitous.

[1] and BTW it has some limits and compatibility issues


Ventoy gave me plenty of trouble. I guess I have a lot of hardware it doesn’t like. Meanwhile, my iodd just works. For me it was worth it to end the hassle.


I agree with this guy fully (having been professionally bitten by exactly what he describes and having had to innovate a way around it), but casually dismissing old-timers and old-school ways of doing things reeks of ageism and needs to stop. Grow up, folks.

In fact this entire culture of lambasting old stuff would be better off being taken out back and shot.


Hear, hear.

I had one person advocate for a new, shiny piece of software by telling me that what we were using was "too old", because it was from 2012. This was the sum of his technical argument. No actual pros and cons, just an assertion that what we were using was "too old". The conversation took place in 2019. It was not a date dependent issue.

Worse, what he was calling "too old" I didn't actually get to start using until 2014. I've been doing systems stuff since 1999. He essentially called my entire career obsolete because 90% of what I know was invented before 2019.


I got a similar impression, although rather than "anti-old-school" I thought of it as the less flattering "naive OSS/Linux mentality"... along with a bunch of thoughts cursing how much this hurts progress.

The mention of specific ages and the tone of it were way off the mark...

The arguments don't really hold much weight either: frustrations rather than roadblocks. Even after reading the article, which I expected to find interestingly informative (and did), I'm not sure there's any sense to be found in the rationale beyond the developer wanting an easier life. That's never a great line to follow in software development, IMO; it's pretty much the opposite of the whole intent.


There are a bunch of legit arguments in favor of retiring ISOs; he even touches on some of them (obsolescence)… but your users are your users, whether they’re snot-nosed kids or dusty old farts or whatever. All users matter!


For Windows boot disks, the ISO contains files too big for FAT32. They are splittable archives, but it's a pain in the ass.


The Windows 10 installer doesn't fit on a standard burnable DVD anyway. Microsoft is clearly as done with optical drives as the author of this article.


This isn't that uncommon and doesn't seem like a big deal at first thought. I do appreciate the author going over the (unexpected) issues he found related to moving away from the use of ISOs, as I hadn't considered them.

While not the same as a general purpose OS image, all the major audio Linux distributions I've used (Volumio, moOde, piCorePlayer), which are intended for use on Raspberry Pi or other embedded systems, ship this way rather than as ISOs, and support resizing the OS partition to take the rest of the micro SD card for music storage or anything else. A similar approach for an OS on a bootable USB drive seems very reasonable.

Incidentally, the last time I installed an OS from a CD or DVD was Windows 7 in 2016. Everything since then has been a USB stick, a micro SD card, or a micro SD card in a USB card reader.


One place where it is still helpful to have ISO images is when creating Hyper-V VMs. You can only attach ISOs or VHDs to virtual machines, so a raw thumb-drive image would need converting before it could be used.
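
qemu-img can do that conversion; e.g., raw image to VHD (file names illustrative):

    qemu-img convert -f raw -O vpc easy.img easy.vhd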


> There are some multi-boot tools, that enable putting many ISO files on the one USB-stick. I have never used them, and don't see the point of them, unless you are a crazed disto-hopper.

I use Ventoy specifically to avoid the issue the article describes with standard ISO files, where the rest of the USB storage is left unused.

That way I can carry and easily update a bunch of installable ISOs on a single USB thumbdrive, just in case I need to install a specific OS from scratch (Ubuntu, Windows, etc) or launch a live recovery environment (SystemRescue, GParted Live, etc).


Isn't there an easy compromise for the old-timers: a utility that converts the drive image to an ISO, linked from a footnote on the download page?


I always find ISO9660 being abbreviated to ISO very amusing.


The usable-space comparison between drive images and ISOs made the case for me. Those illustrations really drove the point home. Consider me converted!


Totally wasn't expecting that reference to Silicon Heaven, that made my morning :D


Looking at it, there are a lot more questionable things about EasyOS than dropping ISOs. For a distribution that claims to be "easy", it makes even my head hurt as a power user; I can't imagine it being easy for the average user.

> This is controversial, however, it is just a different philosophy. The user runs as administrator (root), apps may optionally run as user 'spot' or in containers as a "crippled root" or user 'zeus'. The practical outcome is that you never have to type "sudo" or "su" to run anything, nor get hung up with file permissions.

This is a horrible idea. It's not a "different philosophy"; it's just incredibly dumb and insecure, especially if you're trying to market your distro to the average user. It's even worse than Windows. By this logic, why don't they just do chmod -R 777 / then? See how ridiculous that sounds? Exactly my point.

They're basically just doing some buzzword marketing by saying "we support containers!" but it doesn't say if any services actually run in containers by default. And even if they do, the default setup is still extremely insecure, making users run every command as root by default.

> No systemd, and it is also worth noting what else Easy does not have: policykit, pam, pulseaudio, avahi. The Dunfell-series doesn't have the Qt libraries, but that may change.

I can understand not having systemd, for me it's both a blessing and a curse. But the lack of PAM will break a lot of things like Google Authenticator/TOTP and YubiKey support. Arguably, most average users won't need those (though IMO 2FA should be more normalized), but then again as I've already said the "EasyOS" name seems to be self-contradictory, demanding a bunch of things that power users may be fine with but average users will just get confused by.

> When someone boots up Easy, they will see that the menu (bottom-left of screen) is totally different from what they are accustomed to. Ditto the folder hierarchy. The thing is, keep an open mind -- it is very easy to adjust, and there are solid reasons for the differences.

Again, if you're trying to market to the average user this is nonsensical and just serves to confuse them even more. Sure, power users may be able to adjust more easily, but then don't call it "EasyOS".

> Ditto. The kernel-assigned names for drives and partitions are retained. For example drive sda and partition sda1.

Which distribution actually does this? I'm aware of it being done for network interfaces, but I haven't seen such a thing for drive names yet. (That said, I do believe that the new "predictable" network interface names are stupid and just cause more issues than they solve. Maybe it works well to ensure consistency if you have two network cards, but the whole thing falls apart as soon as you change the hardware.)


ventoy FTW


This is one of those times when the title is ambiguous enough that I'll clarify what they are talking about. This is about ISO boot files for EasyOS, but if you didn't read the article you could read it as being about ISO the organisation, or a particular specification issued or co-issued by ISO.


OK, we've replaced the title above with how the author puts it in the first sentence, which is much clearer.


Yeah, it could do with "for EasyOS releases" being appended (but that would be editorializing)


Thank you. Came here expecting International Standards.

Article was about ceasing use of .iso files for booting OSes from USB, and a replacement that makes a lot more sense.


I initially thought it was about the ISO file format for disk images and thought "but almost every Linux distro still uses them" and was very confused.


> What I have observed is that those persistent individuals are Puppy old-timers, very old-timers. A couple of them are in their 80s, and they have a collection of vintage PCs, all with optical drives. Optical media, CD/DVD, ISO files, that's what they know. And they ain't gonna change.

Hold on a sec...so the conclusion is: screw those old timers?


I don't understand it either. Other distributions offer ISO files too, and that's very handy as it's still one of the source formats for virtualization platforms. I created my Ubuntu and Windows VMs from ISO images. I don't care how old that format is. Why change something that works?


I simply like the fact that I can take an ISO file and easily mount it as a drive in my OS. It might be a dog's breakfast of a format, but all that complexity is nicely hidden away from me and it "just works".
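
E.g., on Linux it's a one-liner (paths illustrative):

    sudo mkdir -p /mnt/iso
    sudo mount -o loop,ro image.iso /mnt/iso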


And that is an absolutely insane state of affairs as well. I remember when my Packer playbooks stopped working because the Linux kernel dropped floppy disk support, and that was the method being used to inject the config on VMware.

99% of Linux installs are done on some hypervisor... ISO files are absolutely awful for this use case. Yet everyone keeps using them because the tooling already jumps through thousands of hoops to make it work.


Why are they absolutely awful for this case, and what would be a better way?


The next paragraph, about the author's non-tech 88-year-old stepmother, seems worse; he's throwing all of those people under the bus by comparing them to her.

Vintage PCs don't really have USB, but, well, in his defense, people working with vintage PCs are already used to extra steps (e.g. if the PC has Token Ring and they want to connect it to Ethernet), so they'll need to add the extra step of "write the contents of the USB image to an ISO"...


I didn't read that as the conclusion at all. He didn't say screw anybody. He pointed out that he has provided tools for migration, and that about two of the people who are nevertheless still concerned by the situation won't be changing.

So to me it sounded more like admitting defeat for now with two people give or take, and letting the rest enjoy the benefits of the upgrade.


> screw those old timers?

Isn't it a volunteer project? If he doesn't want to support it, then he doesn't want to support it, end of conversation. If the old timers want support for a legacy format they can do it themselves.


I see no problem with that: set up a FAQ page and redirect people there. Link it from the downloads page, and generally avoid receiving personal emails for your volunteer project.


I don’t see why anyone’s obligated to support Really Old Thing just because some people don’t want any change.


The value of backwards compatibility is directly proportional to the ratio of "value of user time" to "value of dev time". Depending on whether you are the dev, the user, or a third party, you may assign different values to that ratio.

In the extreme, a dev might think that ratio is 0, and a user might think it's a large number. (People arguing that ISOs are already widely supported and useful are implicitly assigning a large value to that ratio.)

I don't think there's an objectively correct value that can be assigned to that ratio, hence disagreements about whether Really Old Thing should be supported.


Supporting ISO takes basically zero effort. By dropping support for it, the developer is just wasting both his users' time and his own. It would be far better to say "you can download an ISO, but I don't recommend it; here's why a plain disk image is a better way".


Obligated: no. Documenting how things are, with instructions for those who want to make an ISO themselves: that would be nice.

But there's no obligation, just expectations and communities.


More like "there is no change that will satisfy these users".



