UEFI boot: how does that actually work, then? (2014) (happyassassin.net)
164 points by luu on Nov 11, 2016 | 51 comments



> And no, Secure Boot itself is not evil. I am entirely comfortable stating this as a fact, and you should be too, unless you think GPG is evil

Actually no, secure boot is evil, and comparing it to GPG is fallacious. The objection isn't to code signatures in general, but the inevitability of baked-in privileged keys inherent in its design. Calling it just an impartial mechanism is flat out naive, and the author even goes on to describe why:

> But it’s worth noting it’s no more bad or wrong than most other major ARM platforms. Apple locks down the bootloader on all iDevices, and most Android devices also ship with locked bootloaders.

While Microsoft exempts x64 hardware because they're concerned about further anti-trust action, the ARM ecosystem actually demonstrates the end result. By providing these companies the capability to lock things down, they inevitably will use it to enforce their inherent authoritarianism onto their customers. The majority of their customers will not care, accepting the age-old excuses for authoritarianism like "security". But capturing a pragmatic majority does not make something right, as apparent to anybody who struggled due to Microsoft's monopoly in its heyday.

A freedom-preserving boot verification specification is definitely possible. In such, there must be no privileged key that cannot be disabled or augmented. Microsoft's x64 requirements do fulfill this, but as a result cause the system to fall to evil maid attacks. However, evil maid attacks can be prevented by incorporating an open-to-all proof-of-work / timed-lockout scheme. The core specification must incorporate this type of non-privileging scheme as a required element, to preempt any contractual requirements like Microsoft's.


The author directly addresses this, and hits it dead on with

> Most of the unhappiness about Secure Boot is not really about Secure Boot the mechanism – whether the people expressing that unhappiness think it is or not – but about specific implementations of Secure Boot in the real world

Maybe the UEFI specification should have implemented a way to distribute keys, though that would have been seen as a way to lock out smaller Linux distros. Maybe the ability to self-sign keys should be part of the spec (I'm not sure it isn't), but that adds an additional security risk with fresh hardware.


To me, creation and standardization of the capability is evil, even if it takes a further party's decision to cause actual effects. And I see those subsequent decisions as pretty inevitable, given how the market functions.

I would draw an analogy to surveillance. I have a problem with surveillance itself, even without the surveillance being used to facilitate any bad effects. Even just building the systems normalizes the paradigm and puts us in a precarious situation.


How is this different from every other instance of signing for verification? Are you opposed to package managers, publicly posted hashes, and GPG? Even TLS works with the same method. What about it verifying the operating system makes it more prone to being misused?


It's not the mechanism itself, but how the mechanism is used to mandate specific types of policy. It's a matter of where the trust is anchored:

> publicly posted hashes

The method you used to retrieve the hash and ...

> package managers

The method you used to obtain the initial install and ...

> GPG

The web of trust (how you came to associate a given key with an identity) and ...

> TLS

As commonly used, the CAs. Which we're presently having a problem with, because the list is too damn fixed. In the case of TLS applied to other protocols (eg openvpn), then the trust lies in how the keys are distributed. And ...

The "..." is of course the integrity of your machine. Which, unless you always keep it in your sight, is a big if. A large part of this is what boot image verification is aiming to solve. But to do this, one needs to choose somewhere else to anchor the trust. "Secure Boot" specifies that this trust should be anchored in manufacturer-designated entities using public key signatures. On x64 this would raise antitrust hackles, so Microsoft mandates (for now) that its primary security property be destroyed, leaving the anchor back at possession/integrity of the machine.
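The trust-relocation point can be made concrete. Verifying a download against a publicly posted hash proves nothing by itself; the guarantee bottoms out in however you obtained the hash. A minimal sketch (function and file names are illustrative):

```python
import hashlib

def verify_download(path: str, published_sha256: str) -> bool:
    """Check a downloaded file against a publicly posted SHA-256 hash.

    Note where the trust is anchored: this proves nothing unless you
    trust the channel that delivered `published_sha256` (an HTTPS page,
    a signed email, ...) -- verification only relocates the trust.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_sha256.lower()
```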

What I'm advocating is that this trust anchor could also be something non-trapdoored like a proof of work (or simple waiting time, since we're dealing with trusted hardware). For example, imagine if the specification mandated that all conforming implementations allow changing the keys after waiting in an offline "key provision mode" for a week. The trust root would then be "possession of the hardware for a week" (defeating an evil maid), rather than a fixed set of manufacturer-designated signers.
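As a toy model of that proposal (all names and the one-week figure are illustrative, not drawn from any real firmware):

```python
import time

PROVISION_DELAY = 7 * 24 * 3600  # one week in "key provision mode"

class KeyProvisioner:
    """Toy model of a trust anchor rooted in sustained physical possession.

    A new platform key is only accepted after the machine has sat in an
    offline provisioning mode for PROVISION_DELAY seconds, so an evil
    maid with brief access cannot swap keys, but the long-term owner can.
    """

    def __init__(self, current_pk: bytes, clock=time.monotonic):
        self.current_pk = current_pk
        self.clock = clock
        self.mode_entered_at = None  # None = normal boot mode

    def enter_provision_mode(self):
        self.mode_entered_at = self.clock()

    def exit_provision_mode(self):
        # Any interruption (reboot, power loss) resets the timer.
        self.mode_entered_at = None

    def try_set_key(self, new_pk: bytes) -> bool:
        if self.mode_entered_at is None:
            return False
        if self.clock() - self.mode_entered_at < PROVISION_DELAY:
            return False  # not enough continuous possession yet
        self.current_pk = new_pk
        self.mode_entered_at = None
        return True
```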


You could root your trust in a TPM or other hardware security modules. You still have to trust the manufacturer of the HSM chip but that's their entire business model, unlike Microsoft's.


You can't really "root" your trust in an HSM. Yes, the HSM is trusted in that if it is broken, then the security of the system is as well. But the "trust root" is what the HSM uses as a specification for what to trust. This still boils down to a public key, physical possession, proof of work, immutable hash, etc.


> "Secure Boot" specifies that this trust should be anchored in manufacturer-designated entities using public key signatures.

No, it doesn't. It doesn't specify how the keys should be dealt with at all. The implementation currently has manufacturers controlling that aspect, which the author views as flawed.


As long as the trust root consists of public keys rather than physical possession of the device, the manufacturer inherently controls those public keys.


The problem is who uses it.

A package manager is installed by the user of the system. It's protecting against attackers elsewhere on the internet tricking the user into installing malware. Secure Boot is installed by the manufacturer before selling the computer. It's "protecting" the computer against running the software that the user wants. One of them gives users more control over what programs they run; the other gives them less.


That just isn't true, especially on x86. I can and have loaded my own keys and made secure boot validate against my keys to force it to only run what I wanted it to. The standard actually mandates this so it's pretty disingenuous to state that secure boot on a computer prevents the user from running whatever software they want to.

The user is always free to control Secure Boot except on Windows phones; you might have a point about it being a bad thing for mobile.


Please read my above comments more carefully. Secure boot on x86 (amd64) is deliberately crippled to avoid anti-trust scrutiny.


So they deliberately made it so that the end user has control over their own computer to avoid looking evil! How dastardly of them.

Either way, right now Secure Boot gives me more control over what runs on my computer, that's a fact.


It's protecting users from running unsigned operating systems on their computer, protecting the user from a malicious operating system having full access to their computer. You're arguing against the implementation, which both the author and I agree is flawed, where hardware manufacturers are the only ones regularly controlling keys.


> Are you opposed to package managers, publicly posted hashes, GPG, and even TLS works with the same method.

A lot of people are opposed to centralized package managers - not the (desktop) Linux ones, since Linux distros aren't monopolish enough to have political power, but their proprietary equivalents, the Windows Store and Mac App Store (also mobile app stores but that's a bit different). Nothing stops you from installing software from outside the store on Macs (in fact the store's pretty dead), but there's no end of complaints about walled gardens and speculation about an iOS-style lockdown being implemented in the future. Ditto on Windows, but Gabe Newell famously called Windows 8 a catastrophe (and was widely cheered for it) and started a big Linux push, just because Microsoft created a store to compete with his own company's centralized store.


A more precise analogy might be surveillance cameras. While an argument could be made for their immorality, the vast majority of society accepts at this point that while they can be used for bad (mass surveillance), their uses are varied and beneficial enough that we accept their costs as one of society's.


What benefit [0] does anchoring the trust root to immutable keys [1] have over anchoring the trust root to a non-trapdoored proof of work?

[0] Besides a manufacturer being able to retain control of hardware it does not own.

[1] In a scheme anchored in specific keys, the keys must be immutable to protect against evil maid attacks.


> Actually no, secure boot is evil, and comparing it to GPG is fallacious. The objection isn't to code signatures in general, but the inevitability of baked-in privileged keys inherent in its design.

I mean, both my desktop and laptop have extensive BIOS options for key management - including removing the initial keys and loading your own.


Yes, and this is actually encouraged by Microsoft to avoid anti-trust action. As I said, take a look at ARM for what an actual locked-down ecosystem looks like.

Manufacturers inherently want to lock stuff down (for one, they sell more devices when someone wants to try out a new OS), but functionality being called a "standard" should actually be accessible to device purchasers.


Do you think their reaction to not having a standardized version would be "aw shucks, I guess we totally won't go with our own proprietary version then!"


If markets were perfectly efficient, then we wouldn't be having this conversation. In reality defaults matter, whether for a manufacturer following a standard or a consumer buying an off the shelf PC.


I can see why you as a consumer might not want hardware that you can't install custom software on. I can see why it might not be the best solution for legitimate non-nefarious engineering problems as well. What I've never related to, however, is the application of a moral dimension to someone selling such a thing.

I'm all in favor of pragmatic arguments for openness and hackability for the sake of good engineering, transparency, generosity, customer expectations, etc. Those are all good things. I've never gotten the sense that the absence of those things are necessarily evil, however. You clearly do, based on your choice of words.

Maybe that's the difference between the (for lack of a better term) BSD philosophy vs. RMS. The BSD (permissive?) philosophy seems rooted in the sense that sharing is a virtue, and not sharing is neutral (not necessarily evil). RMS' philosophy has always felt like, sharing is neutral, and not sharing is evil.

It's a gross generalization, but copyleft vs. permissive reminds me of western vs. eastern religions. The FSF philosophy is defined by a large number of very strict thou shalt nots, defined as carefully and precisely as possible. Permissive licenses like BSD/MIT/Artistic/Apache/etc. are defined by their lack of conditions, which usually just amount to "do whatever you want, but you can't sue the author".

Similarly, one of the ways western vs. eastern religions (from the outside looking in, at least) feel different is the focus on not being evil, by being aware of all of the behavioral laws you shouldn't violate, vs. the focus on trying to attain enlightenment by trying to elevate yourself. Stamping out bad behavior vs. encouraging good behavior. I see the merits of both, but for whatever reason the latter has always been much more relatable for me. I wonder if an individual's preference of copyleft vs. permissive is correlated with whether they think humans are inherently evil, vs. humans being inherently good or neutral. As cynical and pessimistic as I feel about humanity sometimes, deep down I do have a conviction that people are inherently good. It might be naive but I've never been able to totally shake it.


> Maybe that's the difference between the (for lack of a better term) BSD philosophy vs. RMS

I like your characterization of this, but keep in mind you're proceeding to apply a moral dimension here ;)

> I wonder if an individual's preference of copyleft vs. permissive is correlated with whether they think humans are inherently evil, vs. humans being inherently good or neutral. As cynical and pessimistic as I feel about humanity sometimes, deep down I do have a conviction that people are inherently good.

Most people have an internal narrative whereby they are doing good, yet we still get emergent evil behavior. I tend to look at good/evil in terms of describing constructive end results, rather than as intent behind individual actions.

Pragmatically, I'm seeing an awful lot of Free software that has been locked down through one scheme or another, making it non-Free for its end users. Since the intention of at least some of this software was to be Free for end users, I'd say nullifying that goal is moving in an evil direction.

While fewer restrictions is indeed "simpler", such a regime isn't necessarily sufficient to achieve certain goals. When something is existentially defined as "base primitives", complexity will build on top of it to undermine the goals that fostered the axioms. Only by defining universal quantifications on behavior can specific qualities be preserved through time.


> "do whatever you want, but you can't sue the author"

Do what thou wilt shall be the whole of the law, but thou shalt not sue me! ;-) SCNR!


Maybe it's evil in consumer devices, but secure boot is very effective when you don't want people to tamper with your hardware, e.g., preventing counterfeit commercial-grade routers from entering the market. Cisco for example is facing a HUGE problem with counterfeits and is looking at secure boot as a potential solution. I'm not affiliated with Cisco, but I did attend a talk by one of their researchers at Georgia Tech.


Secure boot solves the opposite problem from counterfeit hardware. If you make the hardware, you can insert whatever signing key you'd like - including Cisco's.

This argument demonstrates why architecture is so insidious. Yes, "Secure Boot" solves real problems. As I said, these problems can be solved through similar functionality that does not anchor the trust root to private keys. But now that Microsoft (et al) have promulgated their naive fully-trusted-publisher implementations, we're left having to dispel the primacy of "keys" in the first place.


I don't understand your point. You cannot "insert" Cisco's signing key into your own hardware because 1) the device does not possess it, and 2) only Cisco (ideally) has access to its signing key.

If you instead mean that you can build your own custom hardware that mimics the functionality of Cisco's TPM and uses another key pair for signing boot code, then I give you a pat on the back for the accomplishment.


Sorry, by "signing key" I meant "public part of the signing key", which is what is used for verification. Counterfeit hardware can also just ignore signatures on binaries.

Presumably Cisco is really talking about using trusted hardware to preserve secrecy of their binaries (as the march of software eats their custom hardware, too). Which as I said, is possible to do without privileging specific signing keys.


IMHO UEFI is an overly complex solution that seems to be optimised for multibooting in various complex ways, a use-case that is probably extremely rare; how many "average users" do you know who have more than one OS natively installed on their machine? And of those, how many have more than two? Especially in this era of VMs, multibooting is becoming increasingly rare.

On the other hand, BIOS boot is optimised for the overwhelmingly common case of one OS; select a boot device, load its first sector into memory, and jump to it. The BIOS does not need to know at all about filesystems, partitioning, or other things which are more at the OS level, which I think is a good application of the principle of "separation of concerns"; UEFI however seems to have become much of an OS itself, with all the associated complexity and increased failure modes of such. Two memorable incidents:

https://news.ycombinator.com/item?id=11008449

https://news.ycombinator.com/item?id=5139055
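The BIOS-side simplicity described above is visible in the on-disk format too: the firmware loads sector 0, checks two magic bytes, and jumps; the partition entries are just bytes at fixed offsets that the OS, not the firmware, interprets. A rough sketch of parsing such a sector (offsets are the classic MBR layout):

```python
import struct

def parse_mbr(sector: bytes):
    """Pick apart a 512-byte MBR the way a BIOS-era tool would.

    Bytes 0-445 are boot code, bytes 446-509 hold four 16-byte
    partition entries, and bytes 510-511 must be the 0x55 0xAA
    boot signature -- the only thing the BIOS itself checks.
    """
    assert len(sector) == 512
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag, ptype = entry[0], entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:  # type 0 = unused slot
            partitions.append({
                "bootable": boot_flag == 0x80,
                "type": ptype,
                "lba_start": lba_start,
                "sectors": num_sectors,
            })
    return partitions
```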

> There is no BIOS specification. BIOS is a de facto standard – it works the way it worked on actual IBM PCs, in the 1980s. That’s kind of one of the reasons UEFI exists.

It's more like a collection of specifications, but you can find many of the important ones here, and much of the API is in the IBM PC/AT Technical Reference:

http://cs.dartmouth.edu/~bx/blog/resources/bios.html

The particularly relevant one for this article is here:

http://www.phoenix.com/resources/specs-bbs101.pdf

(Note the URL and the date of the document.)

> The design doesn’t provide a standard way of booting from anything except disks. We’re not going to really talk about that in this article, but just be aware it’s another advantage of UEFI booting: it provides a standard way for booting from, for instance, a remote server.

Network booting in the BIOS world is almost always done through PXE.


That's interesting. I've booted UEFI machines using PXE, and judging from the PXE ROM versions displayed, the PXE code hasn't changed much, on my boxes at least.

I would agree that UEFI is an OS of its own. But that's the whole point, isn't it? That way writing drivers for peripherals can be done in C, etc.

I have a BIOS machine I can't boot from my SD card, for example, which I need to do. I imagine writing (or porting) a driver for UEFI to interface with that SD card reader would be an order of magnitude simpler than doing the same by modifying my BIOS (which I've seriously considered doing).


The BIOS way to do this is with an Option ROM. During boot, the BIOS scans for a special signature, and jumps to a fixed offset in that block. The OPROM init routine then hooks the various software interrupts that the BIOS, boot-loader and other OPROMs use to read from disk.

You can inject a custom one into your BIOS image, but if you have PCI/PCIe available, you can get cards that have an OPROM onboard. The easiest/hacky way is an old NIC that someone has reverse engineered enough to flash its OPROM; you're just using it for the OPROM, the NIC functionality is unused.
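The scan described above is simple enough to sketch. Roughly (this models the conventional behavior of checking 2 KB boundaries in the expansion ROM window; the byte string here stands in for that memory region, and the checksum verification is noted but omitted):

```python
def find_option_roms(memory: bytes, start: int = 0, step: int = 2048):
    """Scan a memory image for Option ROM headers, BIOS-style.

    An Option ROM begins with the signature bytes 0x55 0xAA, followed
    by its length in 512-byte units; the init entry point the BIOS
    far-calls is at offset 3. Returns (offset, length_bytes) pairs.
    """
    roms = []
    off = start
    while off + 2 < len(memory):
        if memory[off] == 0x55 and memory[off + 1] == 0xAA:
            length = memory[off + 2] * 512
            roms.append((off, length))
            # A real BIOS also verifies an 8-bit checksum over the ROM
            # before calling the init routine at offset 3; omitted here.
        off += step
    return roms
```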


Very interesting, thank you. You may already be aware, but when I first saw the replies here mentioning Option ROMs I immediately thought of the Thunderstrike EFI vulnerabilities for MacBooks. I've yet to actually mess with them, but apparently that was how they worked, at least in the first generation; I haven't followed the research much further yet [https://trmm.net/Thunderstrike]. Since Option ROMs, for obvious reasons, survived the "transition" to EFI, does this mean the same mechanism could be used in BIOS machines? At least this might also have implications for backporting peripheral drivers too (for whatever reason)...


If you really want to boot from an SD card, try putting it in a standard USB card reader. Booting from USB mass storage has been supported for over a decade in BIOS, well before UEFI.

Otherwise, if you want to actually integrate a driver for an internal SD card reader (which isn't a USB one) into the BIOS, then an Option ROM (INT 13H hook) would be the way to go.


Go ahead. Try writing a custom UEFI module, I dare you. It's such a complicated mess it takes a team, and still you end up with blunders at every corner (Intel reference code full of bugs, etc.).


I have no doubt that it's a mess. I suppose I've just bought into the hype about what (U)EFI was supposed to be when it came along. Maybe one day it or something like it will make this all easier.


Excellent summary.

A decade ago I was very well versed in how the bits and bytes of the MBR, partition table and BIOS boot process worked.

Since then, I was completely out of the loop, and this filled me in.


I wrote a 5-minute tutorial on this (would appreciate feedback): http://www.boostedsignal.com/blog/a28c2988-151e-42cd-b0ab-1c...


I like how it is concise and still seems to get the important points across. Well done.


Thank you! That was my goal. I banged my head so much against the wall trying to understand the process, even though it should have been conceptually simple. If there are other topics (not necessarily computer-related) that you feel might benefit from a short explanation as well, feel free to let me know, and if I know about them I'd love to write about them.


> Unless you’re dealing with Macs, and quite frankly, screw Macs.

See also yesterday's story about the 2016 MBP not supporting Linux...

https://news.ycombinator.com/item?id=12924051


You mean yesterday's story about Linux not supporting the latest MacBook Pro?


Yeah, that. Oops, looks like my phrasing got me downvotes. Oh well.


It got an upvote from me, for what that's worth.


With all due respect to the author (the article is seemingly a very nice one), there is the long-standing confusion between disk and partition/volume. I had hoped that – particularly in something intended to explain things – I would not have had to read "format a disk". Just in case: a disk is partitioned (in either MBR or GPT "style"), and the partition(s) or volume(s) thus created are later formatted with a given file system.


You can certainly format a whole disk. I did it, the disk in my laptop doesn't have a partition table at all. The whole disk is one big encrypted filesystem. The laptop boots off of an external USB stick.


Sure you can, but that disk won't be bootable with UEFI (it may still be with BIOS). The article is about UEFI booting; with parted he checks the partitioning "style", not the actual formatting.

> See that Partition table: msdos? This is an MBR/MS-DOS formatted disk. If it was GPT-formatted, that would say gpt.


The read was interesting, but I found myself pushing through by force of curiosity. It was hard, since the post itself was very aggressive toward me, the reader.


The flaw with UEFI is that the list of installed operating systems is stored in the motherboard's NVRAM and not in the EFI system partition on the disk itself.

When your motherboard dies and you need to put in a new one, you now have to boot off a CD/DVD/USB stick to get to the boot manager utility. Then you have to figure out the command line parameters, since the first time around the OS installer did it for you.

At minimum the motherboard setup menu should give you FULL editing capability over the NVRAM variables. I should be able to add, modify and delete any and all entries without having to boot something first.
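That said, the variable format itself is not especially opaque. A hedged sketch of decoding the payload of a Boot#### variable, following the EFI_LOAD_OPTION layout from the UEFI specification (reading via Linux's efivarfs would additionally require skipping a 4-byte attribute prefix, not handled here):

```python
import struct

def parse_load_option(blob: bytes):
    """Decode an EFI_LOAD_OPTION blob (the payload of a Boot#### variable).

    Layout per the UEFI spec: UINT32 Attributes, UINT16
    FilePathListLength, a NUL-terminated UTF-16LE Description,
    then the device path list of the stated length.
    """
    attributes, fp_len = struct.unpack_from("<IH", blob, 0)
    # Find the UTF-16LE NUL terminator of the description.
    pos = 6
    while blob[pos:pos + 2] != b"\x00\x00":
        pos += 2
    description = blob[6:pos].decode("utf-16-le")
    device_path = blob[pos + 2 : pos + 2 + fp_len]
    return {
        "attributes": attributes,
        "active": bool(attributes & 0x1),  # LOAD_OPTION_ACTIVE bit
        "description": description,
        "device_path": device_path,
    }
```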

Last motherboard I tried using the efibootmgr on got bricked. And there was no way to clear it like you can with normal BIOS NVRAM.


This is nonsense. UEFI has a fallback path and all common OSes can or do install a fallback bootloader which restores the NVRAM variables.


You may not agree with me but don't call it nonsense.

From the article about the fallback path: "This mechanism is not designed for booting permanently-installed OSes."

Plus there is only one fallback path so we're back to the situation of multiple OSes fighting over the fallback path.

And from the article a disadvantage of the BIOS: "It’s inconvenient to deal with – you need special utilities to write the MBR, and just about the only way to find out what’s in one is to dd the contents out and examine them."

But now with UEFI we need a special utility to manipulate the EFI NVRAM variables. The motherboard setup only lists the names of the entries. No editing and no listing of the details.

So my criticism is valid. The information stored in the UEFI bootmanager NVRAM should have been in a file(s) in the system partition.


2014



