I have no idea how I would install EasyOS in my hypervisor. Well, I could probably boot from a Live linux ISO and then install it from there.
And on actual hardware I can mount an ISO over IPMI but I would not immediately be able to install EasyOS that way.
Also, the argument that you can store state isn't always an advantage. When I boot a Live ISO I never ever want to save anything, a clean state is one of the features of a Live ISO.
I don't know what EasyOS is, and it is for sure possible that it lives in a niche where it never makes sense to have an ISO. But I don't get the crusade against ISO.
>There are some multi-boot tools, that enable putting many ISO files on the one USB-stick; however, the ISO format does not have any intrinsic advantage, these boot managers could also be made to boot image files.
Never heard of EasyOS either so I looked it up. It's definitely in a pretty interesting niche:
In a traditional "full" installation, the filesystem occupies an entire partition, with the usual /etc, /bin, /usr, /proc, /sys, /tmp, etc. Easy does not install like this.
Easy installs to hard drive in what we call frugal mode, which occupies just one folder in a partition, allowing to co-exist with whatever else the partition is used for.
Installation is so simple: Just copy three files 'vmlinuz', 'initrd' and 'easy.sfs' into a folder in any ext4 partition, and that's it, Easy is installed. All that remains is to create an entry in the boot manager. [1]
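A minimal sketch of what that frugal install amounts to, simulated here with dummy files so it can run anywhere (the real vmlinuz/initrd/easy.sfs come from the EasyOS download; all paths below are made up for illustration):

```shell
SRC=./easyos-files           # stand-in for the unpacked EasyOS release
TARGET=./mnt-demo/easyos     # stand-in for a folder on an ext4 partition

mkdir -p "$SRC" "$TARGET"
touch "$SRC"/vmlinuz "$SRC"/initrd "$SRC"/easy.sfs   # dummy stand-ins

# The whole "installation": copy three files into one folder.
cp "$SRC"/vmlinuz "$SRC"/initrd "$SRC"/easy.sfs "$TARGET"/

# All that remains is a boot manager entry, e.g. a GRUB stanza roughly
# like this (paths/partitions are guesses, adjust for your system):
cat > "$TARGET"/grub-entry.txt <<'EOF'
menuentry "EasyOS (frugal)" {
    linux /easyos/vmlinuz
    initrd /easyos/initrd
}
EOF
ls "$TARGET"
```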
Generally seems to be a highly stripped down, security focused Linux distro. No systemd, polkit, or even PAM. Heavily container oriented. Feels a bit like a lighter alternative to Qubes OS.
Well, the article is correct that there is nothing unique to ISO in your comment. You can achieve all of that with an ext4 or NTFS filesystem too.
Our tooling evolves in some strange ways. The only reason you want an ISO is because some software insists that it can't use a simpler FS the same way. It's like the software that insisted on saving things to floppy drives even after no computer had them anymore.
Of course, that software existing may be a good reason for the author to keep the ISO packaging. Or maybe not; I have no idea what his context is. But what you describe is a deficiency in your software, and not any intrinsic benefit of ISOs.
Obviously, but with that logic the only unique thing with his solution is that it doesn't work well in the software ecosystem of our time. Not a great selling point.
It is more than fine to opt not to support ISO, but the arguments put forward for this crusade are not convincing.
> the arguments put forward for this crusade are not convincing.
I agree. They read like "I don't want the additional work of creating ISOs" to me. Which, honestly, is plenty good justification for not making them on its own. There's no need for further justification.
Why wouldn't it work well with your software ecosystem? All hypervisors allow booting from a disk image. If they didn't, you wouldn't be able to boot your installed OS!
ISO is great at keeping things immutable - most OSes auto-mount inserted USB sticks r/w by default, so your ext4 USB rescue stick can be broken by one accidental mouse drag, or by a typo in an "rm" command. And even if you fix one OS to not mount it r/w, it will still happen when you insert it into your friend's PC.
On the other hand, ISO is guaranteed to be immutable, you need some real effort to break it.
Immutability is not a property of the ISO file (system) format, it's a property of having it live on an optical disc.
But most computers these days don't have drives for optical discs, and if your ISO is merely a file on a USB stick, then it can be broken just as easily.
Maybe in theory, but not in practice. Because no ISO fs implementation has support for writing to a mounted image.
I'd argue that even in theory the immutability is kind of a property of the image format, since it doesn't have file or free space fragmentation. So whenever a file grows or shrinks you have to regenerate the entire image.
This is not true, ISO9660 was designed to be a read-only filesystem: it has no concept of slack space nor sectors in the traditional sense. Every file and directory entry on ISO9660 consists of a single extent (contiguous number of sectors). If your file or directory entry needs to grow beyond its current size, you must create a new copy of the entire entry and append it to the image.
The only writable filesystems that use the same allocation strategy are log-based flash filesystems, but they also have support for quickly scanning the filesystem to get the most recent snapshot state. Again, ISO9660 does not have that because it was not designed to be writable.
The point is that an ISO image is just a file, and if that file is on a USB stick, it can be accidentally overwritten or deleted, same as any other file.
And if someone really wanted to (in a black hat scenario), it would absolutely be possible to rewrite an ISO image on the fly to modify individual files.
No, once written to a USB stick, an ISO image is not a file anymore; it is a sequence of blocks outside of any filesystem. This means you need to use special raw disk tools (like dd) to modify it, and you can no longer operate on it using file managers, "ls", "mv", "rm" and all the related stuff.
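You can see the point with plain files standing in for the hardware: once the image bytes are written raw, only offset-based tools like dd reach them (the device name on real hardware would be something like /dev/sdX; here a scratch file plays that role):

```shell
# Simulate a USB stick with a plain file; on real hardware this would
# be a block device such as /dev/sdX (placeholder, not a real path).
dd if=/dev/zero of=stick.bin bs=1M count=4 status=none   # blank "stick"
printf 'CD001-fake-iso-payload' > tiny.iso               # stand-in "ISO"

# Writing the image: raw blocks at offset 0, no filesystem involved.
dd if=tiny.iso of=stick.bin conv=notrunc status=none

# The payload is now just bytes at an offset; "ls" tells you nothing.
# To read or change it you need offset-based tools:
dd if=stick.bin bs=1 count=22 status=none
```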
You are correct that in a black hat scenario one could rewrite an ISO image on the fly - after all, that's what I do when I upgrade my bootable USB stick to the latest version.
But while ISOs are not perfect protection against malware, they are _great_ against accidental modification. On my PC, when I insert USB stick, the window pops up right away, and it takes 1 (one) mouse drag to make drive unbootable. No such problem exists with ISO images.
A good part of the final half of the article is spent blaming Ventoy's developers and users for holding it "wrong". And here I am, still left wondering what is so bad about ISOs.
Agreed. The author is discounting the ease of use argument, which frankly is the last argument you should make, especially when the cost is effectively ZERO. Frequently good enough is really good enough.
I skimmed the docs for multiple hypervisors (QEMU, VMware, VirtualBox, Hyper-V); it looks like only QEMU can directly boot the raw image, but everything else can convert the raw image to a native format and then boot it. It does take an extra, often command-line-only, step to get it booting though, which is less than ideal.
This is a feature of QEMU and not the raw image itself. Most hypervisors have a preferred image format. The two most popular ones for QEMU are qcow2 and raw.
Others, such as vdi and vhdx, can be easily created with the qemu-img tool, which allows easy conversion between image formats. I've successfully run the same raw image in QEMU, VirtualBox, Hyper-V, and VMware just by converting it and booting it.
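For reference, the conversion step is a one-liner per format. This sketch assumes qemu-img is installed and skips itself otherwise; the raw source image is a small blank one created on the spot:

```shell
# Skip gracefully if qemu-img is not available on this machine.
command -v qemu-img >/dev/null || { echo "qemu-img not installed"; exit 0; }

# Make a small raw image to stand in for a distro's drive image.
qemu-img create -f raw demo.raw 8M

# Convert raw -> the formats the other hypervisors prefer.
qemu-img convert -f raw -O qcow2 demo.raw demo.qcow2   # QEMU native
qemu-img convert -f raw -O vdi   demo.raw demo.vdi     # VirtualBox
qemu-img convert -f raw -O vhdx  demo.raw demo.vhdx    # Hyper-V
qemu-img convert -f raw -O vmdk  demo.raw demo.vmdk    # VMware

qemu-img info demo.qcow2
```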
Assuming you even have access to the storage repository where your disk images reside. Which you don't immediately do in many cases.
Yes, for sure it can be done. But unless you have a habit of doing it it is likely going to be much quicker to boot from a live ISO and do it from there. And by then it all seems rather ridiculous.
> So, I am repeating, there are no use-cases where the ISO format has an advantage over a Linux distribution as a drive image file. The perceived advantages are only due to ignorance.
This is a paradoxical statement: overcoming said ignorance (read: ease of use) is by definition an advantage (an actual one, not just a "perceived" one).
It's plain cognitive dissonance: conveniently denying usability by skipping user concerns and declaring what is false on their behalf, supported by rhetorical nonsense.
If the governance of this tiny distro is bad, don't worry, there are a million other tiny distros to choose from.
Ignorance is not complete failure, just a minor lack of awareness sometimes.
Things like Etcher or dd are nonideal when the "drive geometry" of the master that was imaged is not fully compatible with the target USB device and/or its bootability. The master image is simply written bitwise to the target regardless of differences in the "heads & sectors" that might be native to the target USB device, which can be a show-stopper when it comes to bootability, or it can compromise erase-block alignment which can dramatically slow the performance of an otherwise good master file set.
Bootable ISOs are made to boot using Isolinux, one part of the overall Syslinux Project. This means the ISO, when burned to an optical disk or mounted, will have an ISOLINUX folder in the root of its filesystem, and within that folder will be the boot files which allow it to boot as an optical disk.
The other parts of the Syslinux Project are PXELINUX for network booting, Extlinux for booting from ext2/3/4 partitions, and of course Syslinux proper for booting from FAT partitions.
The intention was that to get a Linux ISO to boot from a HDD, you copy all files and folders from the optical disk (or mounted ISO) to the FAT volume of the HDD, rename its ISOLINUX folder to SYSLINUX, rename the contained isolinux.cfg file to syslinux.cfg, then "syslinux" the FAT volume: it will write a Syslinux boot sector to the volume, which will then load and run the live fileset from FAT not much differently than if it were on an optical disk.
Traditionally a target USB device would be zeroed before inserting into the socket of a DOS/Windows/Linux PC. The USB device can then be partitioned MBR-style based on what the BIOS/OS detects regarding the native geometry of the device, without having a potentially nonideal sector 0 or other misleading data previously on the USB remain existent, which the partitioning and/or formatting process would prefer to replicate. The partition must also be "marked active" before it will be a bootable one, which is an important option to select when partitioning.
Once partitioned in response to low-level detected geometry, the USB device would then be formatted as FAT32 in DOS/Win9x, once again allowing the traditional OS to detect its preferred geometric layout, this time within the provided partition. This can make all the difference when it comes to bootability. Lots of people have never done this since DOS is so out-of-date; plus DOS did not support USB anyway, so you had to plug in the USB device before you boot the PC so the BIOS can recognize the USB as a drive to begin with before DOS can then format it.

Once formatted, the volume would then physically start with a DOS boot sector in place at its detected location, often sector 63, but sector 16, 32, 64, 1024, or other choices can also be where the "native" volume might start instead. For booting you would still further need an actual Master Boot Record to be written to sector zero (to accompany the partition table already there), which would require careful use of the undocumented DOS FDISK switch /MBR.

FDISK /MBR is best run when booted to DOS from a floppy (or some optical media), while the USB device is already plugged into the PC and there are no other drives connected to the motherboard, because FDISK /MBR will silently write the standard DOS MBR to sector 0 of the C: drive.

Also, before the USB would boot you would need an OS, or at least some boot files, to be present on the FAT32 volume, which was accomplished when booted to DOS by running SYS.COM on the target device, which would write the 3 DOS OS files to the FAT USB.
Then ejecting the floppy and rebooting to the USB instead, the motherboard would then detect the USB as a regular drive with its recognizable geometry, read the Master Boot Record at sector 0 then jump to the Volume Boot Sector of the (primary) partition marked as active at the time. A DOS boot sector would then seek DOS bootfiles on that FAT volume and if present you get a DOS command prompt.
The modern alternative in Windows 10/11/Linux would be to partition the zeroed USB and set it active, followed by formatting it FAT32 using Windows. This would give you a Windows NT6 boot sector at the beginning of your FAT32 volume, which will not boot to DOS but will instead seek a BOOTMGR file and an accompanying BOOT folder containing some NT6 bcd-style boot files. For booting Linux, though, you won't need BOOTMGR or a BOOT folder when using Syslinux. You will still need a Master Boot Record, so use the Windows command bootsect /nt60 x: /force /mbr and it will overwrite the NT6 boot sector with a duplicate while also adding the optional MBR, which is essential for booting, where x: is the actual volume of your USB device as seen by Windows.
Now for BIOS, the Syslinux Project has been mature since 2014 at version 6.03, but most Linux distributions are using one of the poorly documented version 6.04s to achieve varying degrees of UEFI/GPT support. The ISOLINUX/SYSLINUX folder will usually contain various helper files known as .C32s, which need to be from the same exact version of Syslinux that the ISO was prepared from. If I cannot run the Syslinux app to prepare the FAT volume from a distribution which has the same Syslinux version as the ISO, then I use version 6.03 from Windows, while also replacing any of the .C32 files which may be present with ones having the same filename from version 6.03, instead of the supplied files in my renamed SYSLINUX folder.
To recap,
Zero the USB device.
Partition and set it active.
Format it FAT32 in Windows; ideally label the USB volume with a descriptive moniker of 11 characters or less. Then run bootsect /nt60 x: /force /mbr.
Copy all the files and folders from the mounted ISO to the FAT volume. This works for Windows setup ISOs too (older ones with filesizes less than 3.2 GB, which can be handled by FAT32), which are then ready to boot at this point. For Windows setup ISOs where the install.wim file is too big for FAT32, you would have to format as NTFS instead of FAT; however, some of the early large setup ISOs had the install file split into smaller install.swm's for use from FAT32.
Run SYSLINUX from Linux or Windows on the target USB volume, to prepare the FAT volume to boot Linux. This will write the ldlinux.sys and ldlinux.c32 files to the root of the FAT volume while also overwriting the NT6 bootsector with a syslinux bootsector as a replacement.
Change the name of the ISOLINUX folder to SYSLINUX, and rename the contained isolinux.cfg file to syslinux.cfg. This is how easy it was intended to be to change a proper distribution from optical to FAT. However, as this knowledge has faded over the decades, there are some distributions which may need further effort within the Syslinux folder (occasional editing of cfg files and kernel switches), in addition to making sure any .C32 files are from the correct matching Syslinux version.
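The rename step from the recap, simulated on a dummy fileset (the actual boot sector install still needs the syslinux tool and a real FAT volume; the filenames here are just representative):

```shell
# Dummy copy of an ISO's fileset, standing in for the files copied
# from a mounted ISO onto the FAT volume.
mkdir -p usbroot/ISOLINUX
touch usbroot/ISOLINUX/isolinux.cfg usbroot/ISOLINUX/vesamenu.c32

# Rename ISOLINUX -> SYSLINUX and isolinux.cfg -> syslinux.cfg,
# exactly as the recap describes.
mv usbroot/ISOLINUX usbroot/SYSLINUX
mv usbroot/SYSLINUX/isolinux.cfg usbroot/SYSLINUX/syslinux.cfg

ls usbroot/SYSLINUX
```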
I often feel we missed a chance to make booting from removable media real easy.
If we had built ISO support into EFI, you could just copy an ISO (like a container) to an MBR- or GPT-partitioned thumb drive and boot it - no magic tools required, no bits to set. Yes, FAT32 has a 4 GB limitation, but there is nothing saying that couldn't be fixed in future versions of EFI, or by using multiple partitions on the thumb drive.
Either way, I feel like an opportunity was missed.
For the specific case of "burning" some Linux distro's installer to a USB drive, I've always found those .img files much easier to deal with than ISO files, simply because a single dd(1) invocation is all I need to "burn" the image onto my drive.
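The invocation in question looks something like this (the device name is a placeholder, so the live line below writes to a scratch file instead of real hardware):

```shell
# Real-world form (DO NOT run blindly -- /dev/sdX is a placeholder;
# check the actual device name with lsblk first):
#   dd if=distro.img of=/dev/sdX bs=4M conv=fsync status=progress

# Harmless demo of the same invocation against a scratch file:
dd if=/dev/zero of=scratch.img bs=1M count=4 conv=fsync status=none
ls -l scratch.img
```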
I figure this is probably the vast majority of people who install Linux from a physical device rather than network booting. (I have dealt with PXE about a decade ago but I can't figure out how to use it anymore.)
It loads a boot menu so you can pick from a number of .iso files dynamically. Just copy them to the first partition of the drive - drag and drop if you want. Much more efficient than clobbering the filesystem with a dedicated tool every time.
I dunno, I used to use ISO linux images because they were write-once and assuming I had a trusted source of the ISO or system writing the ISO, I didn't have to worry about the Linux system getting compromised and I could just run Linux off a CD/DVD.
I'm surprised that didn't come up in the original article or comments here on HN.
(I am aware of course of non-persistent in-memory worms, and the fact things can persist in places other than the filesystem, but I presume they are much rarer.)
If you write an ISO to a flash drive is it read only? Obviously optical media is inherently read only, but I haven't owned a machine with an optical drive in almost fifteen years and I don't have confidence any way of running an image (aside from a hypervisor) would preserve that property.
Hmm, I don't think they are cryptographically protected; it's a security-by-difficulty thing (obscurity?) - ISO is not a "simple" format, so corrupting it is not simple either.
On the other hand, as you said, physical media can have a strong(er) security guarantee (write-once at the physical layer), and ISOs are easier to burn than disk images...
Optical media can still be bought and drives still work fine... and there are other advantages to optical media (such as longevity) that flash drives as of yet can't replicate.
It's the result of 'cat /dev/sda >drive.img', basically. It's not that different from an "ISO image", which is the same except for the CD-ROM drive (I forgot which device name that is). An ISO image is just a "drive image" with the ISO9660 filesystem.
(Live images like this typically do have different bootloaders than your regular /dev/sda).
That's generally called a "disk image", not a "drive image", which is a term I had never heard before in my over 30 years of using computers and seems to almost exclusively be used with relation to some proprietary software program that happens to have that name.
I would say 'cat /dev/sda >disk.img' is a "disk" image since it bitwise includes all sectors with partitions in order and everything on the source HDD.
You would restore a disk image to an entire HDD of at least the same size as the original, with compatible geometry when applicable.
OTOH 'cat /dev/sda1 >drive1.img' would be an image of only the first partition, making it a "drive" image of that one volume alone.
You could restore such a "drive" image to any partition of the same or larger size. Bootability would still be dependent on considerations outside that one volume.
> Is it just that the write once nature of the media makes pre-composing the file system a pain?
I think it's sort of the opposite. Read-only/write-once file systems can be simpler because they don't have to be structured in a way that allows things to change. For instance, full file systems generally need to deal with files that are laid out in a non-contiguous manner on the disk since files may change in size after creation.
I think the other aspect is that in the early days of CD-ROM a lot of folks envisioned there would be a lot of use cases that transcended a single operating system so having a universal file system for them was useful. And while some of the grander visions may not have won out, distributing plain data (photos, clip-art, etc.) on CD-ROM was a common use case and benefited from a cross-platform filesystem.
How many people would've owned CD-ROM drives between 1984 (when CD-ROMs first hit the market) and 1988 (when ISO 9660 was finalized), though?
The bigger pain point for me was around booting from CDs; even though El Torito was around by the time I was potty trained, there were still plenty of PCs out there from before - and it's those PCs on which I cut my teeth as a kid :)
Widely, but not exclusively used. Some big vendors at the time that made things like library indexes and legal references used their own completely wacky in-house formats that required lots of fiddling with device drivers. They'd often only work with a handful of CD-ROM devices as well, meaning you're stuck with your $2000 single-speed top-loading NEC drive for eternity.
High Sierra won out; it was the logical winner, and there was, thankfully, no standards war like with HD-DVD vs. Blu-ray.
The SGI IRIX CDs were in their EFS filesystem. It made reading SGI disks on other systems a bit of a challenge.
At one time I was tempted into putting a UDF (the DVD filesystem) format on a flash drive; the theory was that it would be a better filesystem than FAT and be readable in any OS. The problem was that nobody ever implemented write routines for their UDF implementation; they were all read-only. Sort of understandable, as the key driver of UDF was a read-only medium (the DVD), but it dashed my dreams of using it as a cross-platform filesystem.
Optical discs can also use the UDF filesystem, and that one can be read/write on other media. There was a moment in history where it looked like Microsoft and Apple were going to both support UDF as a filesystem for USB flash drives, SD cards and so on, but unfortunately that seems to have been scrapped in favour of exFAT, maybe because Microsoft gets licensing fees that way.
While the floppy disc and optical drive may have been retired to the domain of retro computing, ISOs are still the standard delivery mechanism for most hypervisor OS installations.
If you don't provide a convenient means to test your tiny OS, I'm not going to jump through hoops that could've been automated and solved at the scale of 1 release process vs. N users.
> On the other hand, a live-CD type of distribution, like Puppy Linux, it is an issue, because the "save file" can only be created on some other drive, usually an internal drive. Wouldn't it be nice if the "save file" or "save folder" could be on the USB-stick, or rather it would be nice if you had that choice.
I'm about 98.69% sure this is easy to fix with any ol' partition editor. Even if it wasn't, the absolute worst-case scenario here for a fully-in-ramdisk distro (my experience is with Damn Small and Tiny Core, but I'm sure Puppy and EasyOS can do this, too) is to just boot into such a distro and reinstall the whole OS on top of the USB stick; everything's in RAM already anyway, so with such distros you can reformat or even unplug the stick once you're booted up.
I wouldn't care so much if I could get a USB 3.2 or 4 device that's only 2–4 GiB for like $2. I would actually buy one for each system I need, label it, and update that dedicated drive. However, fast, new drives don't come at sizes appropriate for just an ISO.
Since observing a SquashFS image outperforming others when doing PDB database processing, I have been waiting for Linux boot images to all switch to compressed read-only filesystems.
Kernel image files have already used bzImage for a long time.
That would make it boot much faster, and use less memory than ISO:
The first step of system installation on Ubuntu (Debian) is uncompressing the filesystem. Why not use a compressed FS for this purpose too, and shorten the whole process to a few seconds?
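For the curious, building such a compressed read-only image is a single command. This sketch assumes mksquashfs from squashfs-tools is installed and skips itself otherwise; the tiny rootfs tree is invented for the demo:

```shell
# Skip gracefully if squashfs-tools is not installed.
command -v mksquashfs >/dev/null || { echo "mksquashfs not installed"; exit 0; }

# Build a tiny rootfs-like tree and compress it into a SquashFS image.
mkdir -p rootfs/etc
echo "demo" > rootfs/etc/hostname
mksquashfs rootfs demo.sfs -noappend -quiet

# The kernel can then mount the image directly (loop mounts need root,
# so shown as a comment only):
#   mount -o loop demo.sfs /mnt
ls -l demo.sfs
```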
I have no clue what EasyOS is; maybe I can quickly fire up a VMware/VirtualBox/QEMU VM and then I'd have a better idea. Does anyone have a link to something I can stuff into the -cdrom command-line argument and give it a whirl?
I prefer ISOs for Linux distributions as well, for a number of reasons. But it's not that big of a deal. If I get an image in another format, I can convert it to ISO myself.
An overlooked fact is that ISO has become a fundamental word for an OS distribution file (which is meant to be universally compatible and straightforward to deploy).
Another interesting thing: the author mentions boot managers which let you boot from a number of ISOs (see? I don't even want to bother saying "disk images", even though HDD images are meant to be supported here) put on a single USB drive, but he forgets USB drives which have such managers integrated into the hardware (e.g. the Zalman ZM-VE200) - I intend to buy one, but who knows if these support non-ISO drive images.
>There are some multi-boot tools, that enable putting many ISO files on the one USB-stick; however, the ISO format does not have any intrinsic avantage, these boot managers could also be made to boot image files.
Well, they don't. Maybe that ought to be a hint?