Actually no, Secure Boot is evil, and comparing it to GPG is fallacious. The objection isn't to code signatures in general, but to the inevitability of baked-in privileged keys inherent in its design. Calling it just an impartial mechanism is flat-out naive, and the author even goes on to describe why:
> But it’s worth noting it’s no more bad or wrong than most other major ARM platforms. Apple locks down the bootloader on all iDevices, and most Android devices also ship with locked bootloaders.
While Microsoft exempts x64 hardware because it is wary of further antitrust action, the ARM ecosystem demonstrates the end result. Given the capability to lock things down, these companies will inevitably use it to impose their inherent authoritarianism on their customers. The majority of those customers will not care, accepting the age-old excuses for authoritarianism like "security". But capturing a pragmatic majority does not make something right, as is apparent to anybody who struggled under Microsoft's monopoly in its heyday.
A freedom-preserving boot verification specification is definitely possible. In such a scheme, there must be no privileged key that cannot be disabled or augmented. Microsoft's x64 requirements do fulfill this, but as a result leave the system open to evil maid attacks. However, evil maid attacks can be prevented by incorporating an open-to-all proof-of-work / timed-lockout scheme. The core specification must include this kind of non-privileging scheme as a required element, to preempt any contractual requirements like Microsoft's.
>Most of the unhappiness about Secure Boot is not really about Secure Boot the mechanism – whether the people expressing that unhappiness think it is or not – but about specific implementations of Secure Boot in the real world
Maybe the UEFI specification should have included a way to distribute keys, though that would have been seen as a way to lock out smaller Linux distros. Maybe the ability to self-sign keys should be part of the spec (I'm not sure it isn't), but that adds an additional security risk with fresh hardware.
I would draw an analogy to surveillance. I have a problem with surveillance itself, even when it isn't being used to facilitate any bad outcomes. Merely building the systems normalizes the paradigm and puts us in a precarious situation.
> publicly posted hashes
The method you used to retrieve the hash and ...
> package managers
The method you used to obtain the initial install and ...
The web of trust (how you came to associate a given key with an identity) and ...
As commonly used, the CAs. Which we're presently having a problem with, because the list is too damn fixed. In the case of TLS applied to other protocols (e.g. OpenVPN), the trust lies in how the keys are distributed. And ...
The "..." is of course the integrity of your machine. Which, unless you always keep it in your sight, is a big if. A large part of this is what boot image verification is aiming to solve. But to do this, one needs to choose somewhere else to anchor the trust. "Secure Boot" specifies that this trust should be anchored in manufacturer-designated entities using public key signatures. On x64 this this would raise antitrust hackles, so Microsoft mandates (for now) that its primary security property be destroyed, leaving the anchor back to possession/integrity of the machine.
What I'm advocating is that this trust anchor could also be something non-trapdoored like a proof of work (or simple waiting time, since we're dealing with trusted hardware). For example, imagine if the specification mandated that all conforming implementations allow changing the keys after waiting in an offline "key provision mode" for a week. The trust root would then be "possession of the hardware for a week" (defeating an evil maid), rather than a fixed set of manufacturer-designated signers.
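A toy model of that hypothetical provision (this is an illustration of the proposal, not anything in the actual UEFI specification): key replacement is refused until the machine has sat in provision mode for the full lockout period, so the trust anchor becomes sustained possession rather than a privileged signer.

```python
import time

PROVISION_WAIT = 7 * 24 * 3600  # one week, per the hypothetical spec

class KeyStore:
    """Sketch of the proposed trust anchor: platform-key changes are
    only allowed after the machine has waited in an offline 'key
    provision mode', defeating a quick evil-maid swap."""

    def __init__(self, platform_key, clock=time.monotonic):
        self.platform_key = platform_key
        self.clock = clock                # injectable for testing
        self.provision_entered = None     # None = normal boot mode

    def enter_provision_mode(self):
        # Real firmware would also disable normal boot and networking here.
        self.provision_entered = self.clock()

    def replace_key(self, new_key):
        if self.provision_entered is None:
            raise PermissionError("not in key provision mode")
        if self.clock() - self.provision_entered < PROVISION_WAIT:
            raise PermissionError("lockout period not yet elapsed")
        self.platform_key = new_key
        self.provision_entered = None
```

An evil maid with an hour of access can't re-key the machine; an owner with a week of uninterrupted possession can.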
No, it doesn't. It doesn't specify how the keys should be dealt with at all. The implementation currently has manufacturers controlling that aspect, which the author views as flawed.
A package manager is installed by the user of the system. It protects against attackers elsewhere on the internet tricking the user into installing malware. "Secure Boot" is installed by the manufacturer before the computer is sold. It "protects" the computer against running the software that the user wants. One of them gives users more control over what programs they run; the other gives them less.
The user is always free to control Secure Boot, except on Windows phones; you might have a point about it being a bad thing for mobile.
Either way, right now Secure Boot gives me more control over what runs on my computer, that's a fact.
A lot of people are opposed to centralized package managers - not the (desktop) Linux ones, since Linux distros aren't monopolish enough to have political power, but their proprietary equivalents, the Windows Store and Mac App Store (also mobile app stores but that's a bit different). Nothing stops you from installing software from outside the store on Macs (in fact the store's pretty dead), but there's no end of complaints about walled gardens and speculation about an iOS-style lockdown being implemented in the future. Ditto on Windows, but Gabe Newell famously called Windows 8 a catastrophe (and was widely cheered for it) and started a big Linux push, just because Microsoft created a store to compete with his own company's centralized store.
Besides, for a manufacturer, the ability to retain control of hardware it does not own.
In a scheme anchored in specific keys, the keys must be immutable to protect against evil maid attacks.
I mean, both my desktop and laptop have extensive BIOS options for key management - including removing the initial keys and loading your own.
Manufacturers inherently want to lock stuff down (for one, they sell more devices when someone wants to try out a new OS), but functionality being called a "standard" should actually be accessible to device purchasers.
I'm all in favor of pragmatic arguments for openness and hackability for the sake of good engineering, transparency, generosity, customer expectations, etc. Those are all good things. I've never gotten the sense that the absence of those things are necessarily evil, however. You clearly do, based on your choice of words.
Maybe that's the difference between the (for lack of a better term) BSD philosophy vs. RMS. The BSD (permissive?) philosophy seems rooted in the sense that sharing is a virtue, and not sharing is neutral (not necessarily evil). RMS' philosophy has always felt like, sharing is neutral, and not sharing is evil.
It's a gross generalization, but copyleft vs. permissive reminds me of western vs. eastern religions. The FSF philosophy is defined by a large number of very strict thou shalt nots, defined as carefully and precisely as possible. Permissive licenses like BSD/MIT/Artistic/Apache/etc. are defined by their lack of conditions, which usually just amount to "do whatever you want, but you can't sue the author".
Similarly, one of the ways western vs. eastern religions (from the outside looking in, at least) feel different is the focus on not being evil, by being aware of all of the behavioral laws you shouldn't violate, vs. the focus on trying to attain enlightenment by trying to elevate yourself. Stamping out bad behavior vs. encouraging good behavior. I see the merits of both, but for whatever reason the latter has always been much more relatable for me. I wonder if an individual's preference of copyleft vs. permissive is correlated with whether they think humans are inherently evil, vs. humans being inherently good or neutral. As cynical and pessimistic as I feel about humanity sometimes, deep down I do have a conviction that people are inherently good. It might be naive but I've never been able to totally shake it.
I like your characterization of this, but keep in mind you're proceeding to apply a moral dimension here ;)
> I wonder if an individual's preference of copyleft vs. permissive is correlated with whether they think humans are inherently evil, vs. humans being inherently good or neutral. As cynical and pessimistic as I feel about humanity sometimes, deep down I do have a conviction that people are inherently good.
Most people have an internal narrative whereby they are doing good, yet we still get emergent evil behavior. I tend to look at good/evil in terms of describing constructive end results, rather than as intent behind individual actions.
Pragmatically, I'm seeing an awful lot of Free software that has been locked down through one scheme or another, making it non-Free for its end users. Since the intention of at least some of this software was to be Free for end users, I'd say nullifying that goal is moving in an evil direction.
While fewer restrictions is indeed "simpler", such a regime isn't necessarily sufficient to achieve certain goals. When something is existentially defined as "base primitives", complexity will build on top of it to undermine the goals that fostered the axioms. Only by defining universal quantifications on behavior can specific qualities be preserved through time.
Do what thou wilt shall be the whole of the law, but thou shalt not sue me! ;-) SCNR!
This argument demonstrates why architecture is so insidious. Yes, "Secure Boot" solves real problems. As I said, these problems can be solved through similar functionality that does not anchor the trust root to private keys. But now that Microsoft (et al) have promulgated their naive fully-trusted-publisher implementations, we're left having to dispel the primacy of "keys" in the first place.
If you instead mean that you can build your own custom hardware that mimics the functionality of Cisco's TPM and uses another key pair for signing boot code, then I give you a pat on the back for the accomplishment.
Presumably Cisco is really talking about using trusted hardware to preserve secrecy of their binaries (as the march of software eats their custom hardware, too). Which as I said, is possible to do without privileging specific signing keys.
On the other hand, BIOS boot is optimised for the overwhelmingly common case of one OS; select a boot device, load its first sector into memory, and jump to it. The BIOS does not need to know at all about filesystems, partitioning, or other things which are more at the OS level, which I think is a good application of the principle of "separation of concerns"; UEFI however seems to have become much of an OS itself, with all the associated complexity and increased failure modes of such. Two memorable incidents:
There is no BIOS specification. BIOS is a de facto standard – it works the way it worked on actual IBM PCs, in the 1980s. That’s kind of one of the reasons UEFI exists.
It's more like a collection of specifications, but you can find many of the important ones here, and much of the API is in the IBM PC/AT Technical Reference:
The particularly relevant one for this article is here:
(Note the URL and the date of the document.)
The design doesn’t provide a standard way of booting from anything except disks. We’re not going to really talk about that in this article, but just be aware it’s another advantage of UEFI booting: it provides a standard way for booting from, for instance, a remote server.
Network booting in the BIOS world is almost always done through PXE.
I would agree that UEFI is an OS of its own. But that's the whole point, isn't it? That way, drivers for peripherals can be written in C, etc.
I have a BIOS machine I can't boot from my SD card, for example, which I need to do. I imagine writing (or porting) a UEFI driver to interface with that SD card reader would be an order of magnitude simpler than doing the same by modifying my BIOS (which I've seriously considered doing).
You can inject a custom one into your BIOS image, but if you have PCI/PCIe available, you can get cards that have an OPROM onboard. The easiest/hacky way is an old NIC that someone has reverse engineered enough to flash its OPROM; you're just using it for the OPROM, and the NIC functionality goes unused.
Otherwise, if you want to actually integrate a driver for an internal SD card reader (which isn't a USB one) into the BIOS, then an Option ROM (INT 13H hook) would be the way to go.
A decade ago I was very well versed in how the bits and bytes of the MBR, partition table and BIOS boot process worked.
Since then, I've been completely out of the loop, and this filled me in.
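Those bits and bytes are still easy to poke at. A minimal sketch (assuming a raw 512-byte first sector, e.g. obtained with `dd if=/dev/sda of=mbr.bin bs=512 count=1`) that checks the 0x55AA boot signature and decodes the four partition-table entries:

```python
import struct

def parse_mbr(sector: bytes):
    """Decode a classic 512-byte MBR: boot signature at offset 510,
    four 16-byte partition entries starting at offset 446."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    parts = []
    for i in range(4):
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        status, ptype = entry[0], entry[4]
        # Bytes 8-15: starting LBA and sector count, little-endian u32s.
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:  # partition type 0x00 marks an unused slot
            parts.append({
                "bootable": status == 0x80,
                "type": ptype,
                "lba_start": lba_start,
                "sectors": num_sectors,
            })
    return parts
```

The CHS fields (bytes 1-3 and 5-7 of each entry) are skipped here, since modern BIOSes boot by LBA anyway.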
See also yesterday's story about the 2016 MBP not supporting Linux...
When your motherboard dies and you need to put in a new one, you have to boot off a CD/DVD/USB stick to get to the boot manager utility. Then you have to figure out the command-line parameters, since the OS installer did it for you the first time.
At minimum, the motherboard setup menu should give you FULL editing capability for the NVRAM variables. I should be able to add, modify, and delete any and all entries without having to boot something first.
The last motherboard I tried using efibootmgr on got bricked. And there was no way to clear it like you can with normal BIOS NVRAM.
From the article about the fallback path: "This mechanism is not designed for booting permanently-installed OSes."
Plus there is only one fallback path so we're back to the situation of multiple OSes fighting over the fallback path.
And from the article a disadvantage of the BIOS: "It’s inconvenient to deal with – you need special utilities to write the MBR, and just about the only way to find out what’s in one is to dd the contents out and examine them."
But now with UEFI we need a special utility to manipulate the EFI NVRAM variables. The motherboard setup only lists the names of the entries. No editing and no listing of the details.
So my criticism is valid. The information stored in the UEFI boot manager NVRAM should have been kept in a file (or files) on the EFI system partition.
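For reference, the Boot#### variables the boot manager stores are EFI_LOAD_OPTION structures as laid out in the UEFI specification: a UINT32 of attributes, a UINT16 file-path-list length, a NUL-terminated UTF-16LE description, then the device path. A sketch of decoding one (the sample bytes in the test are made up, using an End-of-Device-Path node):

```python
import struct

def parse_load_option(blob: bytes):
    """Decode an EFI_LOAD_OPTION, the payload format of Boot####
    NVRAM variables: Attributes, FilePathListLength, Description,
    FilePathList, then any remaining OptionalData."""
    attributes, fp_len = struct.unpack_from("<IH", blob, 0)
    # Description: UTF-16LE code units until a 0x0000 terminator.
    off = 6
    end = off
    while blob[end:end + 2] != b"\x00\x00":
        end += 2
    description = blob[off:end].decode("utf-16-le")
    dp_start = end + 2
    return {
        "active": bool(attributes & 0x1),  # LOAD_OPTION_ACTIVE bit
        "description": description,
        "device_path": blob[dp_start:dp_start + fp_len],
        "optional_data": blob[dp_start + fp_len:],
    }
```

On Linux these blobs live under efivarfs (prefixed with a 4-byte attribute header), which is exactly the sort of detail a setup-menu editor should be hiding from us instead of a userspace tool.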