This is something you've been able to do for many years; I did it ten years ago, when I got my first motherboard with UEFI. But is it useful? It saves a minimal amount of time in the boot sequence, but at what cost?
The bootloader (be it GRUB, or something simpler such as systemd-boot) is useful to me for a couple of reasons:
- it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to hit a key combination within a short window), and a modern bootloader saves the last boot option, so that if Windows reboots for an update, Linux does not start
- it allows you to edit the kernel command line to recover a system that does not boot, e.g. to start in single-user mode. That can really save your day if you don't have a USB stick and another PC on hand to flash it
- it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
- it has a menu entry for entering the UEFI setup: on most modern systems, entering the UEFI with a keyboard combination is again unnecessarily difficult, with too short a timeout
- it allows you to boot any other EFI application, such as memtest or an EFI shell. Most UEFI firmware doesn't have a menu for that.
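To illustrate the recovery bullets above: at the GRUB menu, pressing `e` lets you edit the entry's `linux` line before booting (the kernel path and UUID below are made up):

```
linux /vmlinuz-6.9.7 root=UUID=1234-abcd ro systemd.unit=rescue.target
# or, for a bare emergency shell:
linux /vmlinuz-6.9.7 root=UUID=1234-abcd ro init=/bin/bash
```

Ctrl-x then boots the edited entry; the edit is not persisted across reboots.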
If I'm understanding correctly, it might help to point out that in spite of the title they are proposing a bootloader, which can still let you modify the cmdline, boot to other OSs, etc. It's just that the bootloader is itself using the Linux kernel so it can do things like read all Linux filesystems for "free" without having to rewrite filesystem drivers.
you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target
the title text says 'Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target' which sounds like they're not talking about using two separate kernels, one for the bootloader and one for the final boot target, but rather only one single kernel. possibly that is not the case because the actual information is hidden in a video i haven't watched
I watched the video. They have two different configurations, one where there’s only one kernel, one where there are indeed two separate kernels with one kexec’ing to the other.
To be clear: the win here is that there's no longer duplicated (or worse - less capable and outdated) code to do the same things in both the bootloader and the kernel, however the two versions of that code might be deployed.
> It's just that the bootloader is itself using the Linux kernel
This sentence does not say "the bootloader is itself another, separate, Linux kernel", so I'm not seeing him saying what you're saying he seems to be saying.
although you stated your comment less aggressively, you don't have nmstoker's excuse of possible ignorance; you must have already known what i said in this comment: https://news.ycombinator.com/item?id=40911469
>> you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target
This doesn't make sense. There's nothing in the post you responded to which could realistically be interpreted as making that point. And there haven't been any edits, which might have explained your confusion.
the comment says 'they are proposing a bootloader, which can still let you modify the cmdline, (...) the bootloader is itself using the Linux kernel'
possibly you don't know this, but in order to run a kernel with a modified command line, the bootloader-kernel would need to run a second kernel, for example using kexec; linux doesn't have a useful way to modify the command line of the running kernel. that's why i interpreted the comment as saying that they are proposing using two separate kernels. in https://news.ycombinator.com/item?id=40910796 comex clarifies that they are in fact proposing using two separate kernels; the reason i was confused is that that's not the only configuration they're proposing
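For context, the two-kernel flow described above can be sketched with kexec-tools (paths and command line are illustrative; this needs root and a real kernel image, so treat it as a sketch rather than something to paste):

```shell
# The bootloader-kernel loads a second kernel with a *different* command
# line, then jumps into it; Linux can't change its own cmdline at runtime.
kexec --load /boot/vmlinuz-6.9 \
      --initrd=/boot/initramfs-6.9.img \
      --command-line="root=/dev/sda2 ro single"
kexec --exec   # does not return: control passes to the freshly loaded kernel
```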
What I know or don't know is irrelevant, because what matters is that your statement rests on bringing in external knowledge/assumptions, so it's clearly not what the commenter is saying (alone).
Indeed, but accusing someone of saying something based on unstated external knowledge/assumptions is the original problem here. They just needed to say words to the effect of "taken with point X what you say implies Y" and it would be fine and much less accusatory.
The sibling commenters who think you need to jump back to EFI to solve this are thinking in layer-ossified terms. This is Red Hat proposing this, and they're perfectly confident in upstreaming kernel patches to make it happen.
I would assume that in their proposed solution, the kernel would have logic to check for a CMDLINE flag (or rather, the lack of any CMDLINE flags!) to indicate that it's operating in bootloader mode; and if it decides that it is, then it never calls ExitBootServices. All the EFI stuff stays mapped for the whole lifetime of the kernel.
(Also, given that they call this a "unified kernel image", I presume that in the case where the kernel decides to boot the same kernel image that's already loaded in memory as the bootloader, then nothing like a kexec needs to occur — rather, that's the point at which the kernel calls ExitBootServices (basically to say "I'm done with caring about being able to potentially boot into something else now"), and transitions from "phase 1 initramfs for running bootload-time logic" into "phase 2 initramfs to bootstrap a multi-user userland.")
>and if it decides that it is, then it never calls ExitBootServices
That's unlikely; I think that would mean you cannot use native drivers, at which point you're just writing another bootloader. I suspect they're only planning to kexec into the target kernel, not chainload other EFI bootloaders.
Something that hasn't been addressed by comments here yet is that you could implement EFI boot services in the Linux kernel and essentially turn Linux into a firmware interface. Though note that I generally shy away from any attempts to make the kernel into a really fat bootloader.
AFAIK, the UEFI spec imposes no requirement that (non-hotplug) devices be re-initializable after you've already initialized them once. Devices are free to take the "ExitBootServices has been called" signal from EFI and use it to latch a mask over their ACPI initialization endpoints, and then depend on the device's physical reset line going low to unmask these (as the device would start off in this unmasked state on first power-on.)
Devices are also free to have an "EFI-app support mode" they enter on power-on, and which they can't enter again once they are told to leave that mode (except by being physically reset.) For example, a USB controller's PS2 legacy keyboard emulation, or a modern GPU's VGA emulation, could both be one-way transitions like this, as only EFI apps (like BIOS setup programs) use these modes any more.
Of course, presuming we're talking about a device that exists on a bus that was designed to support hotplug, the ability to "logically" power the device off and on — essentially, a software-controlled reset line — is part of the abstraction, something the OS kernel necessarily has access to. So devices on such busses can be put back in whatever their power-on state is quite easily.
But for non-hotplug busses (e.g. the bus between the CPU and DRAM), bringing the bus's reset line low is something that the board itself can do; and something that the CPU can do in "System Management Mode", using special board-specific knowledge burned into the board's EFI firmware (which is how EFI bring-up and EFI ResetSystem manage to do it); but which the OS kernel has no access to.
So while a Linux kernel could in theory call ExitBootServices and then virtualize the API of EFI boot services, the kernel wouldn't be guaranteed to be able to actually do what EFI boot services does, in terms of getting the hardware back into its on-boot EFI-support state.
The kernel could emulate these states, by having its native drivers for these devices configure the hardware into states approximating their on-boot EFI-support states; but it would just be an emulation at best. And some devices wouldn't have any kind of runtime state approximating their on-boot state (e.g. the CPU in protected mode doesn't have any state it can enter that approximates real mode.)
You're right (as I saw another comment cite the primary-source for); but I'm still curious now, whether there'd be a way to pull this off.
> I think that would mean you cannot use native drivers
Yes, that's right.
> at which point you're just writing another bootloader
But that's not necessarily true.
Even if you could only use EFI boot+runtime services until you call ExitBootServices, in theory, an OS kernel could have a HAL for which many different pieces of hardware have an "EFI boot services driver" as well as a native driver; and where the active driver for a given piece of discovered hardware could be hotswapped "under" the HAL abstraction, atomically, without live HAL-intermediated kernel handles going bad — as long as the kernel includes a driver-to-driver state-translation function for the two implementations.
So you could "bring up" a kernel and userland while riding on EFI boot services; and then the kernel would snap its fingers at some critical point, and it'd suddenly all be native drivers.
Of course, Linux is not architected in a way that even comes close to allowing something like this. (Windows might be, maybe?)
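The hotswap idea above can be modeled in a few lines. This is a toy Python sketch (all the class and method names are invented), not anything Linux or Windows actually does: a kernel handle goes through a HAL facade, and the backing driver can be swapped if a state-translation function bridges the two implementations.

```python
# Toy model: handles go through a HAL facade; the backing driver can be
# swapped "under" the handle, given a state-translation function.
class Handle:
    def __init__(self, driver):
        self._driver = driver

    def read(self, n):
        return self._driver.read(n)

    def swap_driver(self, new_driver, translate):
        # Translate the old driver's state into the new driver's terms,
        # then swap; live handles never notice.
        new_driver.restore_state(translate(self._driver.save_state()))
        self._driver = new_driver

class EfiBlockDriver:
    """Stand-in for an 'EFI boot services'-backed driver."""
    def __init__(self): self.offset = 0
    def read(self, n):
        self.offset += n
        return b"e" * n
    def save_state(self): return {"offset": self.offset}

class NativeBlockDriver:
    """Stand-in for a native driver with a different internal state layout."""
    def __init__(self): self.pos = 0
    def restore_state(self, s): self.pos = s["pos"]
    def read(self, n):
        self.pos += n
        return b"n" * n

h = Handle(EfiBlockDriver())
h.read(512)                 # bring-up happens on the EFI-backed driver
h.swap_driver(NativeBlockDriver(), lambda s: {"pos": s["offset"]})
print(h.read(4))            # b'nnnn': same handle, now native underneath
```

The interesting part is the `translate` function: it's what makes the swap safe, and it's exactly the piece that would be hard to write for real hardware.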
---
I think a more interesting idea, though, would come from slightly extending the UEFI spec. Imagine two calls: PauseBootServices and ResumeBootServices.
PauseBootServices would stop all management of devices by the EFI (so, as with ExitBootServices, you'd have to be ready to take over such management) — but crucially, it would leave all the stuff that EFI had discovered+computed+mapped into memory during early boot, mapped into memory (and these pages would be read-only and would be locked at ring-negative-3 or something, so the kernel wouldn't have permission to unmap them.)
If this existed, then at any time (even in the middle of running a multi-user OS!), the running kernel that had previously called PauseBootServices, could call ResumeBootServices — basically "relinquishing back" control over the hardware to EFI.
EFI would then go about reinitializing all hardware other than the CPU and memory, taking over the CPU for a while the same way peripheral bring-up logic does at early boot. But when it's done with getting all the peripherals into known-good states, it would then return control to the caller[1] of ResumeBootServices, with the kernel now having transitioned into being an EFI app again.
[1] ...through a vector of the caller's choice. To get those drivers back into being EFI boot services drivers before the kernel tries using them again, naturally.
It's a dumb idea, mostly useless, thoroughly impractical to implement given how ossified EFI already is — but it'd "work" ;)
Giving "control of the hardware back" is going to be extremely difficult. Just look at the mess that ACPI is: there are lots of notebooks that Linux cannot put into (or bring back from) hibernation, and there we're talking simply about pausing/resuming the devices themselves. What you are proposing means that an OS would have to revert the hardware back to a state compatible with its state at the moment of booting, so that UEFI could manage it correctly. I don't think that's gonna happen.
This is being discussed more extensively in other comment threads but it sounds like maybe there's a way for it to just reboot but set a flag so the firmware boots into a different .efi next time (once).
The target image would need to reset that field so that a second reboot puts you back into the bootloader because otherwise you'll be stuck booting that image forever.
Even after you’ve already installed a custom boot loader to it? I mean, I agree with you in principle, but we already have the chicken - can't the existence of the egg be assumed?
Aside from the DVD issue mentioned in the other person's comment: I have a design for a SED OPAL-based encryption setup where the system boots with a read-only boot partition, and it only becomes RW as part of the initramfs running (although optionally you can keep it RO until you need to write to it, but this requires buy-in from the package manager).
I think network booting with EFI would also suffer from a similar problem.
You can have command line parameters baked into the EFISTUB.
I also have two kernels, so there's two UKIs on /efi, and I have both added as separate boot options in BIOS.
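For reference, one way to bake a command line into a UKI is systemd's `ukify` (available in systemd 253 and later); the paths here are illustrative:

```shell
# Build a unified kernel image with the command line embedded in a
# .cmdline PE section of the resulting EFI binary.
ukify build \
  --linux=/boot/vmlinuz-6.9 \
  --initrd=/boot/initramfs-6.9.img \
  --cmdline="root=UUID=... ro quiet" \
  --output=/efi/EFI/Linux/linux-6.9.efi
```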
Just because the boot loader is using Linux, it doesn’t prevent an alternative OS from being booted into, so there is nothing fundamentally stopping all of grub’s features from working in this new scheme.
It is a bit more complex, though. Quoting "nmbl: we don’t need a bootloader" from last month[1]:
> - Possibility to chainload from Linux while using Secure / Trusted boot: Dual-booting, although not supported on RHEL, is important for Fedora. While there are attempts to kexec any PE binary, our plan is to set BootNext and then reset, which will preserve the chain of trust that originates in firmware, while not interfering with other bootloaders.
It could be seen as an advantage to do chainloading by setting BootNext and resetting. I think Windows even does this now. However, it certainly is a different approach with more moving parts (e.g. the firmware has to not interfere or do anything stupid, harder than you'd hope) and it's definitely slower. It'd be ideal if both options were on the table (being able to `kexec` arbitrary UEFI PE binaries) but I can't imagine kexec'ing random UEFI binaries will ever be ideal. It took long enough to really feel like kexec'ing other Linux kernels was somewhat reliable.
Let's say I have a dual-boot system with two totally independent OSes, Systems A and B. It is powered down. I want to boot into System B but the EFI is configured to boot into System A by default.
Am I correct in understanding that the offered solution here is to first boot into System A, find some well-hidden EFI configuration utility (which varies from OS to OS, if it even exists), and then tell EFI to boot into System B on the next reboot?
Basically, System A's kernel boots. But, instead of immediately loading the System A userland, it loads a boot menu of systems that it reads from UEFI NVRAM and presents it to the user. So you select System B from the list, the menu sets BootNext in NVRAM and issues a reboot.
In practice, the main UX difference is that it takes a bit longer and you'll see the UEFI vendor splash screen again after selecting the boot option.
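Done by hand, that NVRAM dance looks something like this with `efibootmgr` (the entry number is illustrative, and this needs root on a UEFI system):

```shell
efibootmgr --bootnext 0003   # boot Boot0003 on the next boot only
systemctl reboot             # the firmware consumes and clears BootNext
```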
I'm not a user of Windows anymore but I seem to recall Windows doing something quite similar, where it had a boot menu that felt suspiciously like it was inside of Windows, and to actually change the boot target, it had to reboot.
> instead of immediately loading the System A userland
I mean, it kind of is loading the System A userland. At least the initramfs of it. AFAICT in the proposal the bootloader would now be a regular userland program living in the initramfs.
I get the impression that the eventual goal would be to make this bootloader program into the "init(8) but for the initramfs phase of boot" — i.e. rather than there being a tool like update-grub that calls mkinitramfs, feeding it a shell-script GRUB generated (which then becomes the /init of the initramfs); instead, there'd be a tooling package you'd install that's related to the kernel itself, where you call e.g. kernel-update(8) and that would call mkinitramfs — and the /init shoved inside it would be this bootloader. This bootloader would then be running for the whole initramfs phase of boot, "owning" the whole bootstrap process.
What the architecture is at that point, I'm less clear on. I think either way, this initramfs userland, through this bootloader program, will now handle both the cases of "acting like a bootloader" and "acting like the rest of initramfs-based boot up to pivot-root." That could mean one monolithic binary, or an init daemon and a hierarchy of services (systemd: now in your bootloader), or just a pile of shell scripts like GRUB gives you, just now written by Redhat.
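A toy sketch of the decision such an initramfs bootloader program might make, assuming the three cases discussed in this thread (everything here is hypothetical; the names are invented, not from the nmbl proposal):

```python
# Hypothetical decision logic for a Linux-based bootloader running in the
# initramfs: continue booting locally, kexec another kernel, or set
# BootNext and reboot to chainload a foreign EFI binary.
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    kind: str        # "local", "kexec", or "efi"
    target: str      # sysroot path, kernel path, or Boot#### number

def plan_boot(entries, choice):
    """Return the action the bootloader would take for the chosen entry."""
    e = entries[choice]
    if e.kind == "local":
        return ("pivot_root", e.target)              # keep this kernel
    if e.kind == "kexec":
        return ("kexec", e.target)                   # jump to second kernel
    if e.kind == "efi":
        return ("set_bootnext_and_reboot", e.target) # chain via firmware
    raise ValueError(e.kind)

entries = [
    Entry("Fedora", "local", "/sysroot"),
    Entry("Rescue kernel", "kexec", "/boot/vmlinuz-rescue"),
    Entry("Windows", "efi", "0003"),
]
print(plan_boot(entries, 2))  # ('set_bootnext_and_reboot', '0003')
```

The point of the sketch is that only the "efi" case needs the firmware round-trip; the other two never leave the already-running kernel.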
Yes of course. I really mean to say, before/instead of pivoting to the OS root. It sounds like this will synergize well with the UKI effort too, at least from a Secure Boot perspective.
I wonder if I have ever had a laptop where the UEFI worked correctly and without bugs. It always required some workaround somewhere to get stuff working.
I'm not entirely sure, to be honest. If you google something like "windows 11 advanced startup settings" you'll see what I mean, though: the boot menu is now in Windows.
I guess it allows the bootloader to be much simpler, at least in theory.
If you embed an x86 system somewhere, then you might find yourself not wanting to use GRUB because you don't want to display any boot options other than the Linux kernel. The EFI stub is really handy for this use case. And on platforms where U-Boot is common, U-Boot supports EFI, which makes GRUB superfluous in those cases.
Many of the Linux systems I support don't have displays, and EFI is supported through U-Boot. In those cases you're using a character-based console of some sort, like RS-232.
A lot of those GRUB options could also be solved by embedding a simple pre-boot system in an initial ramdisk to display options, which maintains all of the advantages of not using GRUB and also gives you the ability to make your boot selection. The only thing GRUB is doing here is allowing you to select which kernel to chain-load, and you can probably do the same thing in initramfs too through some kind of kernel API that is disabled after pivot root.
I just have two kernels with two boot options in the BIOS. I hit F11 at boot time and choose a BIOS boot option for either kernel. Of course, you need to add the entries in UEFI, either from the UEFI shell or with some tool (efibootmgr).
This scheme also supports secure booting and silent booting. The stubs are signed after being generated.
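For reference, registering a UKI/EFISTUB kernel directly as a firmware boot option with `efibootmgr` might look like this (disk, partition, and path are illustrative):

```shell
# Create a firmware boot entry pointing straight at the signed UKI:
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
  --label "Linux 6.9 (UKI)" \
  --loader '\EFI\Linux\linux-6.9.efi'
```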
Does Windows not ensure that the UEFI boots back into Windows when it does an auto-reboot for updates? There's a UEFI variable called BootNext which Windows already knows how to use since the advanced startup options must be setting it to allow rebooting directly to the UEFI settings.
Given that Windows tries to restore open windows to make it look like it didn't even reboot, I'm surprised they wouldn't make sure that the reboot actually goes back into Windows.
No, it doesn't. Even a sysprepped image of Windows (which thus runs Setup to install drivers and finalize the installation) doesn't change the boot order on UEFI machines. I think just the installer does this when you first install Windows.
That's so weird. Normally I don't want my OS changing what is booted into on a whim, but going back into the same OS for cases like these just seems like sane behavior to me.
There's good reason you might not want that behavior, and no reason to enforce it. Booting an alternate OS doesn't interrupt Windows update operations.
Not in my experience. For my typical dual-boot situation where GRUB is installed as the bootloader, I have to update the GRUB settings like so to allow Windows updates to go smoothly:
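Presumably the usual saved-default pattern in `/etc/default/grub` (followed by regenerating the config, e.g. with `update-grub` or `grub-mkconfig -o /boot/grub/grub.cfg`):

```
# /etc/default/grub: default to whatever entry was booted last
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```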
I am not certain about this, but I think that these options no longer work on UEFI machines. GRUB does not have control over what options are presented if GRUB isn't the selected bootloader. This stuff is BIOS-only.
I have this working on a UEFI system. You select your Linux drive in the UEFI configuration (so the computer always boots into GRUB) and then GRUB will boot into Linux or Windows depending on the last saved option.
The GRUB_DEFAULT and GRUB_SAVEDEFAULT settings don't affect the UEFI settings; they only affect the default boot option in GRUB's boot menu. From the UEFI configuration perspective, the boot option never changes: it's always the drive with GRUB installed on it.
What kind of machines are people using where entering the UEFI boot menu is difficult? On all three of mine I just press F10 during the first 5 or so seconds while the vendor logo shows, and I end up in a nice menu where I can select Windows, other kernels, memtest, or the EFI shell or setup.
One easy way to meet Microsoft's boot time requirements is to skip input device enumeration, so there's a lot of machines meeting the Windows sticker requirements where entering the firmware either requires a bunch of failed boots or getting far enough into the boot process that you can be offered an opportunity to reboot into the setup menu.
I have a system where you need to hold down power when turning on the PC to get out of "Quick Boot" mode, and get the ability to get to the bios screen. It's a Sandy-Bridge-era Intel motherboard.
If you want to have (legit) "Designed for Windows" and similar certification, you need to have an option to disable "fast boot" as well as option to enable it.
Fast boot involves skipping a bunch of slower pathways, using saved knowledge of the minimal set of devices that must be brought up to boot the OS on the happy path, and only resetting to the "slow path" if that fails.
In fast boot, you're often unable to hit the button to enter the menu, and at most you get to it through Windows' "reboot to firmware" option.
I was working on my Dad's Dell laptop this weekend, and no matter how quickly I spammed the correct key (F12 in this case), it would miss it and continue to a full boot about 3 out of 4 times. I never figured out if it is just picky about timing, or if it has different types of reboots where in some of them entering the BIOS isn't even an option.
Newer Dell laptops have a BIOS option to artificially delay the boot process by a configurable number of seconds to give you more time to enter the menu. Which should be proof enough that the default time window is an issue.
Mine has a large delay between when the keypress is registered and the menu actually shows up. But, the window for pressing the key itself is quite short. Also, if you spam the key too quickly, it will hang indefinitely instead of entering the menu necessitating a hard-reboot. Good times.
On my last two uefi boards, if I press F12 or F8 too soon after power on it either stalls the boot, or it makes it restart. When the latter happens, I’m always too careful in pressing it causing me to miss the window of opportunity and booting right to the OS. Entering the bios or choosing the boot drive regularly takes me 3 tries. (Gigabyte with Intel and Asus with AMD.)
How many computers are you operating though? Maybe you'll have to reboot a couple times until you figure out the proper key but then you'll know it. And if you forget it, you clearly aren't doing this often enough for it to be a problem either
It really depends on users. Personally... ~100? Servers, clients, dual-boot configurations, lost machines with PXE boot, various brands and BIOS versions, some even still boot in legacy mode because their UEFI support is bad (like PXE boot doesn't work as well as it should, and as well as it does in "BIOS" mode). So having GRUB on basically all these machines, I'm very happy.
If I could do the same with something that is as small in terms of footprint, and is as flexible as GRUB is (we also PXE-boot into GRUB loaded from the network, both in BIOS and UEFI mode), then I'm interested.
> - it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to hit a key combination within a short window), and a modern bootloader saves the last boot option, so that if Windows reboots for an update, Linux does not start
Do people really dual-boot a lot in 2024? It was a good use case when virtualization was slow, but decades after CPUs started shipping with virtualization extensions, there is virtually zero overhead in using a VM nowadays, and it is much more convenient than rebooting and losing all your open applications just to start one on another OS.
> - it allows you to boot any other EFI application, such as memtest or an EFI shell. Most UEFI firmware doesn't have a menu for that.
How many times in a decade are you running memtest?
Getting to the UEFI firmware or booting another OS/drive is just a matter of holding one key on my ThinkPad. I would simply not buy bad hardware that doesn't allow me to do that. Vote with your wallet, dammit.
I would also argue that you can perfectly well have GRUB sitting alongside a direct boot to the kernel in a UEFI setup. There are many other bootloaders than GRUB, and users are still free to use them instead of what the distro is shipping. UEFI basically allows you to have as many bootloaders as you have space for on that small FAT partition.
> there is virtually zero overhead in using a VM nowadays
Not for real-time audio production. The state of audio plugins having Linux support from vendors like EastWest, Spitfire, Native Instruments, iZotope is abysmal and even Wine does not run them nowadays.
Even with a virtual machine that has pinned cores and USB pass-through of a dedicated audio interface, it practically locks you to one sample rate, any change causes crackles, try to load more than one plugin and you hear crackles. There is plenty of overhead.
The state of GPU virtualisation, for example, is a spectrum from doesn't exist/sucks to only affordable for enterprise customers.
So unless you have a second graphics card to do pass through with, if you want to use your GPU under both OSes then you almost always have to dual boot (yes, there are other options like running Linux headless, but it's not even remotely easier to set up than dual boot)
Consumer motherboards haven't had GPUs for a while now (IPMI usually comes with one, so servers do); graphics are built into the CPU instead (if at all: not all CPUs have them). These can't usually be easily allocated to a VM.
I clicked randomly on a number of motherboards sold by the two brands that came to mind, ASRock and Gigabyte, and all of them advertised HDMI and USB-C graphics output, so I am surprised by your claim that consumer motherboards don't have GPUs. If I am not mistaken, on the AMD Ryzen architecture it comes down to choosing a CPU with a G or 3D suffix, which states that they have an integrated GPU.
It really is still the case that most if not all consumer motherboards don't have built-in graphics. For the most part, especially on the Intel side, they've relied on the iGPU in the CPU for output for probably 10 years now.
Well, my point still stands that you still have integrated graphics (if not from the motherboard, then from the CPU) that you can use on the host while you dedicate a discrete card for VM passthrough.
Desktop Ryzen 4's and newer have a very small iGPU that's just enough to put up a desktop (and presumably a framebuffer fast enough to feed a discrete card's output into)
How can the host have integrated graphics, if integrated graphics don't exist?
Per Korhojoa's and my personal experience, plenty of desktop CPUs simply don't have integrated GPUs. Consumer mainboards simply don't come with them at all. Consider my previous workstation CPU, top of the line a few years ago and no iGPU: https://www.amd.com/en/products/processors/desktops/ryzen/50...
Integrated GPUs are a feature of server mainboards, so that there is something to display with for troubleshooting, but not of any retail mainboards I am aware of. They are a feature of some consumer-grade CPUs designed for either budget or low-power gaming. It simply doesn't exist on all CPUs: consider the AMD 5600, 5600X, and 5600G, last-gen mid-range CPUs adequate for gaming, where the X has a little more clock speed and the G has an iGPU.
This is a fundamentally dishonest take. I provided three specific CPUs that varied by just the letter at the end where some had an iGPU and some didn't. I am being honest that some have it but that it isn't ubiquitous.
Well, when you buy a desktop computer in 2024, there are usually 4 main ways:
- buying a ready-made computer from a brand --> always comes with an integrated GPU. Some will even be such a small form factor that you have to use an external Thunderbolt-connected GPU if you want to use one.
- you build your computer yourself from parts --> you decide your motherboard and CPU; if VM passthrough is something you want to do, you just buy the parts that fit your use case
- you buy a configurable prebuilt computer from an online or local vendor --> you just have to choose the right option in the configuration tool so that you get a motherboard/CPU that offers an integrated GPU.
- you buy second-hand and you don't have an iGPU --> you buy the cheapest GPU available, usually around $10 to $25, and you have your second GPU that the host can use.
Even when you are using a laptop, having 2 GPUs is really not complicated in 2024, especially with Thunderbolt external GPU cases/adapters.
Bottom line: you only have one GPU if you actively choose not to have 2.
The average PC is already a trade-off that costs the average user around $800, and nearly 2/3 would need a substantial RAM upgrade, a new GPU, or both to make gaming through VM passthrough a reality. Most people aren't looking to buy new hardware and learn new tech just to game.
It sounds like a useful toy for those who already enjoy playing with their computer as much as playing the game.
That said, wouldn't limiting the host to integrated graphics (or whatever you get for $25) be a substantial limitation compared to using Wine/Proton or dual booting?
> Most people aren't looking to buy new hardware and learn new tech to game.
Most people don't play games.
Most people who play games that aren't solitaire or web games just buy a PlayStation, Xbox, or Switch.
Only a relatively small fraction of people playing AAA games use a computer for that: the most hardcore ones, and the most willing to spend money on a gaming rig. And I am pretty sure most of them aren't the least bit interested in dual boot, because they'd have a desktop gaming rig and a laptop for everything else anyway. Only a tiny fraction of gamers is probably interested in dual booting. You are part of that tiny group. Fine. The nmbl tool presented in this conference doesn't prevent dual booting anyway, so I am not even sure why people act like they should be offended that GRUB might be replaced someday by something else with more capabilities.
It doesn't make sense to try, ex post facto, to justify what people SHOULD do when we can look at what they actually do.
The idea that the people who play PC games ONLY play AAA games on their souped-up rigs is also counterfactual. People play games on everything from 8-year-old laptops to $5000 custom-built rigs with RGB everything. You are oversimplifying; the universe consists of many and varied irrational individuals, not spherical cows.
Dual booting is simple and suitable for nearly 100% of machines running Linux.
Wine/Proton is suitable for nearly 100% of machines running Linux. Steam has reduced this complexity to a few clicks for the majority of titles.
GPU passthrough is unsuitable for 70-80% of configurations and by dint of complexity undesirable for nearly everyone which is why virtually nobody does this.
Because people don't want to play "games"; they often want to play a particular game, and if it doesn't work, it doesn't work. Also consider how many people are new: they have an existing computer with Windows, so the standard play is to dual boot first and then possibly transition to only Linux if it works well enough for their usage.
Approximately half of gaming revenue is from PC customers. It wavers up and down depending on exactly what metric you want to use and when the last console refresh was.
You are correct on the complexity cost and how most people, even those with nice gaming computers, just don't want to deal with more complexity than needed. Even mandating a store app that works causes a significant hit to conversion rates. EA couldn't give away Dead Space a previously successful AAA title when bundled with their store.
You are thinking of pay-to-win games with microtransactions. While this trash HAS come to the PC platform, there is no reason to believe it represents any substantial portion of the revenue in PC gaming.
I suppose everyone is entitled to their opinion, but most people base it on something. You are free to do whatever you're doing, but I hope no one takes you seriously.
One of the big problems is with the graphics cards, because vendors block a driver feature (SR-IOV) on consumer GPUs that would allow single-GPU passthrough to VMs.
The alternatives are to leave the host headless (a reboot is needed, and the VM needs to run as root), or to use two graphics cards (wasting power, hardware resources, etc.), in which case you also need either an extra latency layer inside the VM to send the graphics back to the screen, or two cable inputs to the monitor.
> there is virtually zero overhead in using VM nowadays
It might be more accurate to say that if you have a fast computer with lots of resources, a basic desktop experience in a VM feels close to native. That makes it a great answer for running Windows software that needs neither a discrete GPU nor direct hardware access, on the minority of machines capable enough for this to be comfortable.
In actuality, laptops are more common than desktops, and the majority of computers have 8GB of RAM or less (60% across all form factors, 66% of laptops). That just isn't enough to comfortably run both.
Furthermore, most Linux users are comfortable installing and running Windows and Linux, whereas they may or may not be familiar with virtualization.
Also, probably the number one reason someone might dual boot is still gaming, which, although light years ahead of years prior, still doesn't have 100% compatibility with Windows. In theory GPU passthrough is an option, but in reality it is a complicated niche configuration unsuitable for the majority of use cases. Anyone who isn't happy with Steam/Proton/Wine is probably more apt to dual boot than to virtualize.
Yes, people dual boot. Particularly people who are contemplating a move from Windows. I'd hate to see Linux take the "my way or the highway" attitude of Windows.
My experience when I had a dual boot in the late 90's was that rebooting is such an interruption that you never become fully comfortable on either OS. You just stick to the OS you are used to and never really make the switch.
Whereas if you don't dual boot, you can switch completely to another OS and only use a VM or remote desktop for the handful of use cases you aren't ready to migrate yet (and then end up abandoning those completely as well).
I dual-boot on my personal desktop. I mostly use Debian, but there's a Windows partition for games and a few other Windows-specific things. The GPU in it was way too expensive to justify buying two, and I use it under Linux for ML, hash-cracking, etc.
My original plan was to do everything in a Windows VM, but there was too much of a performance hit for some of my purposes, and VMWare doesn't allow attaching physical disks or non-encrypted VMDKs to a Windows 11 VM, so it's actually easier to have a data drive that's accessible from both OSes with dual boot than it would be with a VM.[1] I'm still disappointed about that.
[1] Using HGFS to map a host path through to the VM is not an option because of how slow that is, especially when accessing large numbers of files.
As much as I generally detest indirection, for me a bootloader is a necessity; I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility. NetBSD's bootloader is best for me. UEFI seems like an OS unto itself. A command line, some utilities and network connectivity (a UNIX-like textmode environment) is, with few exceptions, 100% of what I need from a computer. To me, UEFI seems potentially quite useful. But not as a replacement for a bootloader.
>I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility.
Yes it does; I use it with two kernels, with a different UEFI entry for each stub. Whenever I want to boot the non-default kernel I just hit F11 (the boot menu key on my motherboard) and choose the boot option. You just need to add the boot options in UEFI, pointing to the corresponding EFI files. They also have the kernel command-line parameters baked in, and you can set your desired ones (silent boot, whatever).
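For anyone wanting to try this, such entries are usually created from Linux with efibootmgr. A minimal sketch, assuming the ESP is partition 1 of /dev/nvme0n1 and an EFISTUB-enabled kernel has been copied to its root (your disk, partition, and file names will differ):

```shell
# Create a UEFI boot entry that launches the kernel's EFI stub directly,
# with the command line baked into the entry (no bootloader involved).
sudo efibootmgr --create \
     --disk /dev/nvme0n1 --part 1 \
     --label "Linux (EFISTUB)" \
     --loader '\vmlinuz-linux' \
     --unicode 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img'

# List all entries and the current boot order to verify.
efibootmgr
```

Repeat with a different --label/--loader pair for a fallback kernel, and the firmware boot menu (F11 here) will offer both.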
You can also craft a text file named startup.nsh; if it is present in the root (or nearby) of the FAT32 EFI partition, its UEFI shell commands will be executed on bootup instead of the default firmware selection.
If a motherboard doesn't have enough UEFI commands in its built-in Shell (or has no built-in Shell at all), you'll want to include your own Shell.efi file right there along with any startup.nsh you might decide to deploy.
This can also be good for USB booting where the removable USB device is in regular MBR layout rather than GPT-style-partitioning.
Whether or not the whole USB drive is FAT32, as long as there is a proper EFI folder on a UEFI-recognizable filesystem, you can boot any other OS on any other filesystem, depending only on the contents of the EFI folder. And if there is a startup.nsh for the UEFI to follow instead, you might not even need an EFI folder, as intended. Boot floppies still work as designed too. That said, startup.nsh is more commonly expected to reference an EFI folder on some recognizable filesystem than to work as a lone soldier. GPT-layout partitions are not supposed to be necessary either; they are only needed when you want more partitions than legacy BIOS will handle, or partitions too huge for MBR representation.
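As a concrete sketch, a minimal startup.nsh might look like the following (the filesystem number, kernel name, and root device are assumptions; fs0: may map to a different device on your firmware, so check with the shell's map command first):

```
@echo -off
rem Switch to the first filesystem the firmware mapped, then launch
rem the EFISTUB kernel with an explicit command line.
fs0:
\vmlinuz-linux root=/dev/sda2 rw initrd=\initramfs-linux.img
```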
Now any alternative to GRUB would, by necessity, also have to perform at least as pleasingly on legacy-compatible systems where UEFI is not enabled, or it will remain a less effective alternative.
The better a geek can handle both BIOS & UEFI, the more I would be able to trust their UEFI solution.
You left out the most important reason I went back to using grub: some motherboards have dodgy UEFI support, and having an extra layer of indirection seems to be more robust sometimes, for some reason.
I dual boot Win/Arch easily with EFISTUB setup. It's super quick to boot to a usb stick of arch if I need to edit anything with the configuration in an "emergency" situation as well. https://wiki.archlinux.org/title/EFISTUB
I've used gummiboot before systemd ate it; and I've used rEFInd. Mainly, I just followed the excellent documentation @ https://www.rodsbooks.com/; that's also how I first familiarized myself with UEFI (Thanks Rod!).
My brain has leaked all the information I understood (unfortunately). Is rEFInd still active? Is there a gummiboot fork (besides systemd)?
Personally, I kind of hate Red Hat calling itself that now; it's IBM. You can tell because all of the online knowledge from the community on their websites is now pay-walled. RIP Red Hat (CentOS). I'll miss you.
> it allows to dual-boot with Windows easily: motherboard boot menu is often not easy to access, you need to perform some key combination in a short window
Hardly a problem in my experience - just hold down the key while booting.
And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
> also modern bootloader save the last boot option such that if Windows reboots for an update Linux does not start
You can change the EFI boot entries including priority from the OS, e.g. via efibootmgr under Linux. Should be easy to setup each OS to make itself the default on boot if that's really what you want.
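For example (the entry numbers here are made up; run efibootmgr with no arguments first to see yours):

```shell
# Show current entries, e.g. Boot0000* Windows Boot Manager, Boot0003* Linux.
efibootmgr

# Make Linux the default by putting it first in the boot order.
sudo efibootmgr --bootorder 0003,0000

# Or set a one-shot override: boot Windows exactly once, then fall back
# to the normal order (handy for letting an update finish).
sudo efibootmgr --bootnext 0000
```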
> it allows to edit the cmdline of the kernel to recover a system that does not boot, e.g. start in single user mode. That can really save your day if you don't have on hand an USB stick and another PC to flash it
All motherboards I have used had an EFI shell that you can use to run EFI programs such as the Linux kernel with efistub with whatever command-line options you want.
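From such a shell, a recovery boot can look like this sketch (the device mapping and file names are assumptions, and the kernel must be built with CONFIG_EFI_STUB; paths are relative to the ESP root):

```
Shell> fs0:
FS0:\> ls \EFI\Linux
FS0:\> \EFI\Linux\vmlinuz-linux root=/dev/sda2 single initrd=\EFI\Linux\initramfs-linux.img
```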
> it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
EFI can have many boot entries too.
> it has a voice for entering the UEFI setup menu
What does "a voice" here mean? Or you meant "a choice"? Either way, same as with the boot menu you can just hold down the key while booting IME.
> it allows you to boot any other EFI application, such as memtest, or efi shell. Most UEFI firmwares doesn't have a menu to do so.
In my experience the EFI shell has always been accessible without a bootloader.
> And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
I've been dual-booting Linux since the kernel 2.2.x era, and being able to do so was a major driver of my migration away from Windows. It is super important for onboarding new users who can't yet get rid of Windows fully, mostly because of gaming (yes, Proton is nice, but anything competitive that uses anti-cheat won't work, and that is the majority share of gaming). And that is the reason I still boot into Windows on my dual-boot machine: gaming. For me that Windows is just a glorified bootloader into GOG or Steam, yet desperately needed, and virtualization won't solve anything here.
Ideally, rather than dual booting, I would welcome something like running both OSes in a sort of virtual machine but being able to switch between them as easily as with a physical KVM.
Having to actually restart a PC is a pain in the ass which is why I don't dual boot.
> Having to actually restart a PC is a pain in the ass

Is it? All the virtualization solutions are moot for gaming due to anticheat (plus 3D graphics virtualization not really working for Windows).
I have experience with a few different kinds of laptops:
1. Dell enterprise laptops generally have a robust EFI system which allows for all kinds of `.efi` files to boot on `vfat` partitions. Dell laptops also have a good firmware setup for stuff like mokutils to work so that people can use measured boot with their own version of linux. They also work extremely well with self-encrypting nvme drives.
2. HP consumer laptops, which are the worst of the lot and essentially prevent you from doing anything apart from stock configurations, almost as if on purpose.
3. All other laptops, which have various levels of incompetence but seem pretty harmless.
For all laptops apart from Dell, Grub is the bootloader that EFI could never be.
> - it allows to edit the cmdline of the kernel to recover a system that does not boot, e.g. start in single user mode. That can really save your day if you don't have on hand an USB stick and another PC to flash it
You can use the UEFI shell for this. It's kind of a replacement for the old MS-DOG command line.
It is bold of Red Hat to claim this is 'their solution'. UEFI has already been used for years to boot without GRUB. Some examples: macOS, HP-UX, or systemd-boot via UEFI.
> it allows to edit the cmdline of the kernel to recover a system
Except they've made it increasingly harder to do this over the years. Nowadays you have to guess when you are in the magic 1 second of "GRUB time" before it starts loading, and then smack all the F keys and ESC key and DEL key at the same time with both hands and both feet, because there is nothing on the screen that tells you which key it actually is.
All while your monitor blanks out for 3 seconds trying to figure out what HDMI mode it is using, hoping that after those 3 seconds are over that you smacked the right key at the right time.
And then you accidentally get into the BIOS instead of the GRUB.
It used to be a nice long 10 seconds with a selection menu and clearly indicated keyboard shortcuts at the bottom, and you could press ENTER to skip the 10 second delay. That was a much better experience. If you're in front of the computer and care about boot time, you hit enter. If you're not in front of the computer, the 10 seconds don't matter.
I know you can add the delay back, I just wish the defaults were better.
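For reference, restoring the old behavior is a two-line change in /etc/default/grub followed by regenerating the config (paths and tooling vary by distro; some use grub2-mkconfig or update-grub instead):

```shell
# In /etc/default/grub:
#   GRUB_TIMEOUT=10            # show the menu for 10 seconds
#   GRUB_TIMEOUT_STYLE=menu    # always draw the menu instead of hiding it

# Then regenerate the config, e.g.:
sudo grub-mkconfig -o /boot/grub/grub.cfg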
> - it allows to edit the cmdline of the kernel to recover a system that does not boot, e.g. start in single user mode. That can really save your day if you don't have on hand an USB stick and another PC to flash it
This is an indication of a bad admin choice. The kernel defaults should not corrupt the boot process, and if you add further experimental flags for testing, you ought to have a recovery mechanism in place beforehand.
Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
It's a feature that goes back to Windows NT (NTLDR) supporting dual boot for Windows 9x, but it can be repurposed to boot anything you would like so long as it can execute on its own merit.
eg: Boot into Windows Boot Manager and, instead of booting Windows, it can hand off control to GRUB or systemd-boot to boot Linux.
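On a UEFI system, the commonly cited recipe for this hand-off uses bcdedit from an elevated prompt; a sketch (the {guid} printed by /copy must be substituted by hand, and the GRUB path is an assumption about where your distro installed it — results reportedly vary by firmware):

```
rem Clone the firmware boot manager entry as a template for GRUB.
bcdedit /copy {bootmgr} /d "GRUB"
rem The command above prints a new {guid}; point that entry at GRUB's EFI binary.
bcdedit /set {guid} path \EFI\grub\grubx64.efi
rem Put it at the top of the boot menu display order.
bcdedit /displayorder {guid} /addfirst
```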
>Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
With the NT6 bootloader this appears to be limited to operating only in BIOS mode using bootmgr.exe. The traditional chainloading is still possible by pointing to a binary file which is a copy of a valid partition bootsector, whether it is a Microsoft bootsector or not.
The equivalent BCD for UEFI mode uses bootmgr.efi (instead of bootmgr.exe), and does not seem to be capable of chainloading even when there is an equivalent BOOTSECTOR boot entry on the NT6 multiboot menu.
It would be good to see an example of the NT6 bootloader successfully handling UEFI multibooting, including starting Linux from the EXTx partition it is installed on. This has worked perfectly in BIOS mode since early NT, but in UEFI, not so much.
It allows you to enter your passphrase to unlock your Linux LUKS partition before you even get a menu to chainload Windows.
At least this is what an Arch Linux derivative (Artix) system of mine does, amusingly. It sort of gives an observer the impression that it's an encrypted Windows system on boot.
I'd have preferred CoreBoot or OpenFirmware, but the PC industry was too slow to move and let Intel -- still smarting from Microsoft forcing it to adopt AMD's 64-bit x86 extensions -- take control of the firmware.
The problem with all of the alternatives is that they aren't friendly to alternative OSes. They mostly operate on a fork model, so upstreaming support for an OS doesn't mean everyone using that bootloader will support your OS. You either need to pretend to be Linux with a sort of boot shim, or build and flash a custom bootloader with support, which might be non-trivial if you cannot get access to the forked bootloader's code.
UEFI is just a standard interface, not an implementation of a bootloader. This enables multiple UEFI-compliant implementations, as well as an easy way for an OS to support all UEFI-based firmwares without needing to coordinate with the owner of each one. While I'm sure most would agree the UEFI interface is not ideal, it has a lot of industry momentum and is therefore probably the best option to get behind. There are a lot of players in this space (mostly hardware vendors), and coordinating anything is very difficult and takes a very long time.
OpenFirmware is a similar level of complexity to UEFI, to be quite honest, and lacks certain mechanisms that were designed into ACPI (and inherited by UEFI) precisely to support multiple different operating systems without requiring the OS to have specialty drivers for every little bit.
Sure, in the happy path you can depend on OpenFirmware giving you parameters like locations and addresses and "this device is compatible with X, so you can use its driver", but it still requires that you have the specific driver the device is compatible with. A new hardware release was often incompatible with older OS versions because, unlike ACPI, you couldn't encode information like "hey, I'm compatible with interface X version Y", meaning "here's limited functionality because your driver is not updated for interface X version Y+1, but the computer will still work".
Instead you had special "hardware support releases" to get the OS to boot at all.
CoreBoot and uboot by themselves provide even less support. They might be open source, but they present an effectively closed platform to the end user. UEFI is in practice less restrictive, because I only have to program to the interface and, in the absence of gross bugs, I can expect things to work, whether it's a boot-time driver to support my super-duper-special storage add-in card, or a custom OS that I want to make available for normal people to try out on their random home PC. Hell, if the Linux kernel people hadn't said they would no longer accept "platform definition" patches, you probably still wouldn't have FDT used on ARM with uboot.
Nothing is perfect, and you are probably right that any firmware for potentially SMP computers with multiple types of boot device is unavoidably complex.
However, ISTM that relying on a magic partition on a fixed disk is a poor design, and while other types of firmware are not radically simpler, there are or were alternatives, and some of them are noticeably more FOSS. UEFI is EFI for x86-64, broadly, and EFI was proprietary. That is not a good thing, in my book. Something more cross-platform and less vendor-dependent would have been preferable, even if of comparable complexity.
I had a long dissertation here destroyed by random F12 and backspace key that I can't deal with retyping again, but I fully disagree.
UEFI mandates certain minimums regarding the boot process that are nice for user, developer, and admin UX. You can fully expand on the supported filesystems, or even boot from non-filesystem sources, even from paper tape if you want to. You're no more bound to a magic partition on a fixed disk[1] than you are with OpenFirmware (and decidedly less than with IBM PC BIOS compatibles).
None of the comparable alternatives were really FLOSS (by the time OFW went open source, EFI was shipping on x86 and amd64[2]); coreboot/uboot/redboot/etc. were too limited, being by themselves an e-waste framework unless paired with an upper layer providing an open platform for users and developers.
EFI was available, back in the 1.1 timeline, as open source code for x86 and IA-64 (the IA-64-specific bits were called "SAL", IIRC); then some bright mind at Intel decided to close it down. Fortunately they open sourced it again as TianoCore, and we now have a FLOSS solution (it's as proprietary as OFW at this point in time, and it's more of an open platform than uboot/coreboot/etc.).
The available "less proprietary" options all created closed platforms, where you need excessive porting to boot anything the vendor didn't ship for you. It's trivial to make firmware so flossy it will make RMS shed tears of nostalgia for KA-10, but it's not going to be useful for majority if they ever want to run something not provided by vendor. Minicomputer/workstation complex firmware monitors etc. happened because diagnostics were often needed, and some required at least some compatibility with third party hardware, but them - including origins of OpenFirmware - implicitly accepted a closed platform where vendor would need to ship a special "hardware enabling" OS update or ship entire OS version to match new platform.
UEFI might have proprietary roots, but it (and ACPI) is designed specifically to provide for the freedom of the end owner to run whatever crap they want, including an older version of an OS they are already used to.
[1] Unless the hardware is too cheap, like Qualcomm ARM systems with UEFI where various critical services are patched in windows drivers to be handled through magic files on ESP, or in permissible CHRP OpenFirmware variants where magic partition on fixed disk is explicitly mentioned as an option.
[2] EFI-based firmwares started shipping in the 2005-2008 timeframe on x86 and amd64, mainly due to DXE providing a far easier method to integrate 3rd-party code. It was also designed from the start to handle multiple platforms, partially thanks to having IA-32 and IA-64 code simultaneously as early as EFI 1.0, which made it the easier option for handling future 64-bit platforms.
I've been in that situation. There used to be a very handy Firefox addon called "Lazarus" that helped. Quantum killed it. This may help - not tried it yet:
Unfortunately not. A proper writeup would require sitting down for some software (and hardware) archaeology to dig out stuff that in some cases seems to have been pulled off the internet on purpose (the original open source EFI 1.10 release) or by virtue of disappearing companies (SRM source & documentation disks for OEMs, various random workstation documentation and programs that didn't necessarily make it to bitsavers, things like that).
UEFI, by the way, is a late-1990s thing, with work starting because the BIOS was an unwieldy chimera that didn't match anything in hardware, and supporting things like network booting by hooking into the "boot BASIC program from cassette" subroutine was problematic.
It's more that Itanium was the one system where it originally shipped, and that for no obvious reason Intel closed-sourced it at some point. The Itanium proprietary firmware was, IIRC, "SAL" (somewhat related to modern UEFI's PEI layer).
For reference, it was possible to run it on x86 (and even the ICC with EBC was provided!) around 2001, including DUET, which I ran from a floppy. There was close to the same level of source access as today with TianoCore, though probably under a different license.
Then someone at Intel got the bright idea to close access to the source, and until that direction was reversed (which got us TianoCore), it was fully proprietary.