BIOS should be simple, because it is buggy anyway. Handing over to a bootloader in the MBR is all that a BIOS should do. Now one is at the mercy of NVRAM, grub2 and loads of gratuitous complexity.
Legacy BIOS had to go; it was designed in the era of the 8086 and DOS and was really out of step with modern hardware and OS needs. The replacement could have been even simpler, though, now that OSes really want to take over everything themselves and no longer lean on the firmware the way that DOS used to. Instead EFI created a monster and now we're stuck with it.
It could be implemented in UEFI, and bundled with disks as a compatibility layer, for machines that support UEFI but not XSFI.
Wikipedia only mentions the mobile platform, but it was also what the card-based Xeon Phi was using (own research).
Isn't that how it's worked since forever? The graphics card already comes with a driver in a flash ROM chip, the BIOS installs it during boot, and programs can talk to it through the standard INT 10h interface. AFAIK, modern graphics cards also already come with an EFI driver in the same flash ROM chip, which the BIOS installs during boot when in EFI mode, and EFI programs and operating systems can talk to it through standard EFI interfaces.
Early discussions (cannot find a trace online, from my mail archives):
From: ron minnich
Subject: Re: [sw-dev] SBI extension proposal v2
Date: Sat, 10 Nov 2018 08:46:07 -0800
At Google and other places, we've been struggling now for years with
overly complex firmware that is implemented incorrectly, enabling
exploits and other bad things. The list of things vendors get wrong in
firmware, both enabling exploits and enabling others to enable
exploits, is long and it continues to this day. There is an
unbelievable amount of money out there all involving firmware
exploits, very little of it involving nice people.
I'm currently working on deleting all use of the x86 version of M
mode, i.e. SMM. There are many proposals out there for deleting SMM
from the architecture. I've also shown at a talk in 2017 how we could
redirect SMM interrupts back into the kernel. We're also removing all
use of callbacks into UEFI on x86. We're almost there.
Which is why I'm a bit unhappy to see this (to me) cancerous growth in
proposals for M-mode code. PPP in firmware? Really? Multiple serial
devices? Really? We've been here before, in the 1970s, with something
called the BIOS. If you're not familiar with it, go take a look, or
you can take my word for it that these proposals implement that idea.
We spent over 20 years freeing ourselves from it on x86. Why go back
to a 50 year old model on a CPU designed to be in use for 50 years?
My early understanding of M mode was that it was an Alpha PALCode like
thing, enabling access to resources that were behind a privilege wall.
I did not like it that much, but I was OK: it was very limited in
function, and the kernel could replace it, or at least measure it. I
also accept that every cpu vendor uses m mode like things (e.g. ARM
TF) for reasonable purposes and also (let's be honest here) for
dealing with chipset mistakes. But that does not mean you need to add to it.
The SBI should be hard to add to, deliberately. It should be used only
when there are no possible alternatives. It needs to be open source
and held in common. It should be possible for a kernel to replace or
at least measure it. And, further, there needs to be some work done on
why you add to it, and why you don't, with bias against adding to it.
This proposal works against those ideals, as it explicitly enables
vendor-specific forks of the SBI. Sure, this can happen, but why make
it so easy?
See https://github.com/riscv/riscv-sbi-doc/pull/12 for other thoughts.
Also, I've had discussions with some security folks in our firmware
community about the fact that the PMP can be used in a way that the
kernel can not measure the SBI, since SBI might read-protect itself.
This is a real step backwards, FYI. Not sure if it can be changed at this point.
p.s. For interleaving debug and console output firmware, use the
oldest trick in the book: ASCII is 7 bits. Since console out is 8
bits, reserve 128 values for console out, and 128 for debug stream,
and if the debug stream needs 8 bit for some words, you know what to
do. It's very easy and doesn't require that we add multiple UART
support to SBI.
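The 7-bit trick described above can be sketched as follows (my own illustration, not code from the email): since console output is ASCII, the high bit of each byte is free to tag which stream it belongs to, so a single UART can carry interleaved console and debug output.

```python
def mux_console(data: bytes) -> bytes:
    """Console output is 7-bit ASCII; send it with the high bit clear."""
    return bytes(b & 0x7F for b in data)

def mux_debug(data: bytes) -> bytes:
    """Tag debug bytes by setting the high bit. Debug words that genuinely
    need all 8 bits would need an escape convention on top of this
    (the "you know what to do" part of the email)."""
    return bytes(b | 0x80 for b in data)

def demux(stream: bytes) -> tuple[bytes, bytes]:
    """Split an interleaved stream back into (console, debug) by the high bit."""
    console = bytes(b for b in stream if b < 0x80)
    debug = bytes(b & 0x7F for b in stream if b >= 0x80)
    return console, debug

# Example: console text interleaved with a debug marker on one "wire".
wire = mux_console(b"ok") + mux_debug(b"dbg") + mux_console(b"\n")
assert demux(wire) == (b"ok\n", b"dbg")
```

The receiver needs no framing or packet headers, which is the appeal: one comparison per byte recovers both streams.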
It will continue to bite until enough people petition for its replacement.
Don't get me wrong, some of the features are good, but the kitchen sink is not.
systemd is largely architected the way you'd hope it to be. systemd itself, the core program, is responsible only for maintaining the lifecycle of other programs. The other functionality is provided by satellite daemons, which you can choose to use (or not use!) at will.
Now, distro maintainers may choose to use the whole kit-and-caboodle, but that is a deliberate decision on their part. It's not forced by the systemd authors, and you are free to override your distro maintainer's choice if you so desire.
So it implements DNS.
...and I must add, implements it badly 
> Now, distro maintainers may choose to use the whole kit-and-caboodle, but that is a deliberate decision on their part. It's not forced by the systemd authors
Not forced, instead they "gently push" 
So, while I agree that we should keep things simple and modular, it’s a thankless job trying to solve issues and being forced to add complexity. Nobody defends complexity, and maybe we shouldn’t, in order to stay on our toes, but defending simplicity is also about the safest thing you can do.
If you want simple use systemd-boot. It's a hell of a lot leaner than grub ever was. The config is sane and doesn't require a billion additional modules to be installed.
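For a sense of scale, a complete systemd-boot configuration is roughly one loader.conf plus one entry file per kernel. The paths, title, and root device below are illustrative, not from any particular distro:

```ini
# /boot/loader/loader.conf
default  linux.conf
timeout  3

# /boot/loader/entries/linux.conf  (example entry; adjust paths and root=)
title    Linux
linux    /vmlinuz-linux
initrd   /initramfs-linux.img
options  root=/dev/sda2 rw
```

There is no scripting language and no module system; the boot loader just reads these files from the EFI system partition.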
It should be the default at this point, especially after this fiasco.
I think we have different definitions of lean and simple. After about a decade of Linux use, I just found out Grub has modules thanks to you. You point it at a disk and it installs. You edit the config file and it works. That's my experience with it. I've had many debates with people who like systemd-isms; it's a fundamental difference in use case and philosophy.
The explicit design goal was to be a minimal alternative to grub.
Bug report: https://bugs.launchpad.net/cloud-init/+bug/1877491
Is there a reason why this patch was silently applied? For something as risky as breaking the boot process, you'd think you'd want user confirmation before proceeding. It can obviously be done, e.g. https://d11a6trkgmumsb.cloudfront.net/original/3X/f/5/f55e36.... Also, recovering from this might be easy if you're technically inclined, but it could be worse if you have FDE enabled with the boot keys sealed in the TPM. Changing the boot loader or the secure boot settings in that case might lead to the TPM refusing to release the disk encryption keys, which could lead to permanent loss of data.
Sad to see TSA in software form.
At least for me, that page seems to have both shim-x64-15-13.el8.x86_64.rpm from 2020-07-29 22:10 and the older shim-x64-15-11.el8.x86_64.rpm from 2020-05-07 19:53; the older one should work. Worst case, you could manually copy the shim executables from a working server to the EFI partition of the broken server (from what I have read at https://bugzilla.redhat.com/show_bug.cgi?id=1861977, in the RHEL/CentOS case it's the shim executable which is broken, so you don't have to do anything to the grub executables).
I installed these rpms (which are the affected ones for my system):
Is anything else needed? I'm thinking the easiest solution is for me just to wait a few days and do a "yum update" in rescue mode once an update fix is available. Luckily this is a non-critical server.
It takes a comprehensive approach rather than piecemeal like a lot of these patches, leveraging technology already in your system to build a conceptually airtight and fully audited system. Happy to get some of your opinions on it, constructive criticism, and pull requests!
Emphasis mine. The main objection to secure boot is that, some time in the future, it will be mandatory; that already is the case for Windows devices with ARM CPUs (https://www.softwarefreedom.org/blog/2012/jan/12/microsoft-c...).
Windows Client (laptops/tablets/...) devices with 64-bit Arm CPUs have Secure Boot unlockable. That article applied to Windows RT and Windows Phone, which were earlier projects.
A lot of secure boot implementations let you add your own keys. Some don't, and that's bad, but it's not the fault of secure boot!
Number of times Secure Boot has locked out a legitimate user: too many
The only reason everyone used the Fedora key was because the alternative was registering with Microsoft, paying $99, and hoping for approval. Microsoft are as much a gatekeeper in this as they've always been, and the whole framing of the news around this feels like an attempt to discredit those who would go around the gatekeeper: https://mjg59.dreamwidth.org/17542.html
FWIW, I tested upgrading a few (test) CentOS virtual machines at work to see if I can trigger this bug, but they worked fine, so perhaps the bug only triggers with a configuration they happen to have not tested.
Were the virtual machines using EFI? Most virtual machines I've seen boot through legacy BIOS, not EFI.
I remember reading it was like a signed loader and that's it.
But I presume that's incorrect?
That said, all secure boot even tries to assure is that the software that's booting is the same thing you thought you installed. If that is, say, a Linux distribution running a webapp which has problems, well... the boot mechanism can't save you from those.
Like for example, the Linux kernel isn't signed, right?
Secure Boot is one step in a chain of verification you'd need to do to make sure you're only running the binaries you've approved:
> Secure Boot is a technology where the system firmware checks that the system boot loader is signed with a cryptographic key authorized by a database contained in the firmware. With adequate signature verification in the next-stage boot loader(s), kernel, and, potentially, user space, it is possible to prevent the execution of unsigned code.
That is literally what secure boot is, nothing more: