shim is an EFI bootloader commonly used by Linux distros that want to support Secure Boot. Distros want an easy path to enabling Secure Boot using the Microsoft signing key preloaded on most machines, rather than requiring users to enroll their own key. But Microsoft generally refuses to sign GPL bootloaders like GRUB, so shim was created as a small, permissively-licensed binary that Microsoft is willing to sign. (shim then implements its own separate signature verification of whatever it boots, based on a different key called the Machine Owner Key, or MOK.)
When you tell shim which EFI binary to boot, you can specify it as an HTTP URL. If you do, and the HTTP server is malicious, it can trigger an out-of-bounds write.
However, usually you use it to boot a local second-stage bootloader like grub (hence the name "shim"), so it's unlikely to be a problem for most installs.
Regardless, Secure Boot was built from the start to allow previously-signed binaries to be revoked via the DBX list - a list of hashes and signatures that can be loaded into the firmware to make it reject binaries even if they carry a valid signature. When (if) the hashes of old shim binaries with this bug are added to that list, you can update the list on your own machines. Updates may be distributed as capsule updates (via LVFS etc.), or if you manage your own SB keys and variables you can download and enroll the list from https://uefi.org/revocationlistfile
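To make the revocation mechanism concrete, here is a minimal Python sketch of the acceptance decision the firmware makes (key names and image contents are invented for illustration; real firmware checks X.509 chains and Authenticode hashes, not plain strings): a binary boots only if its signature chains to a trusted key in db AND neither its hash nor its signer appears in dbx.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def firmware_allows(image: bytes, signer: str,
                    db: set, dbx_hashes: set, dbx_signers: set) -> bool:
    """Simplified Secure Boot decision: the signer must be trusted (db),
    and neither the image hash nor the signing key may be revoked (dbx)."""
    if signer not in db:
        return False                 # no valid signature chain
    if signer in dbx_signers:
        return False                 # the signing key itself was revoked
    if sha256(image) in dbx_hashes:
        return False                 # this specific binary was revoked
    return True

# A vulnerable but validly signed shim build (contents made up):
old_shim = b"shim-with-http-bug"
db = {"Microsoft UEFI CA"}

# Before the DBX update: the old shim still boots.
assert firmware_allows(old_shim, "Microsoft UEFI CA", db, set(), set())

# After enrolling its hash in dbx: rejected despite the valid signature.
dbx = {sha256(old_shim)}
assert not firmware_allows(old_shim, "Microsoft UEFI CA", db, dbx, set())
```
This is why revocation works without touching the signature itself: the firmware consults dbx before it ever honors db.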
OP/bug finder here with some clarifying information. It's a common misconception that this issue can only be abused if you use HTTP boot. That is not the case at all, otherwise it wouldn't be Critical. This bug can be abused locally (privileged malware can overwrite the EFI partition), from an adjacent network if PXE boot is enabled (w/ MiTM), or remotely if HTTP boot is used (w/ MiTM).
More details on these scenarios:
1. A remote attacker with no privileges in a man-in-the-middle (MitM) position could leverage the issue against a victim machine that uses HTTP boot. No direct access to the victim machine is required.
2. A remote attacker with privileges and code execution on the victim machine could leverage the issue to bypass Secure Boot, even if the victim does not already use HTTP boot (as long as firmware has HTTP support). How? Several ways:
- An attacker can edit the boot order variable to specify a controlled attacker server.
- An attacker can chain shim->GRUB2->shim (via HTTP). For this technique, the attacker overwrites the boot loader in the EFI partition with a legitimate shim and GRUB2 image, then creates a grub.cfg that chainloads a new shim via HTTP. This is possible because GRUB2's device syntax allows you to specify any supported device, including HTTP (if available).
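As a rough sketch, a malicious grub.cfg for that shim->GRUB2->shim chain might look something like this (the server address and file name are invented, and whether the `http` device is usable depends on the firmware exposing a network stack to GRUB):

```
# Hypothetical attacker-controlled grub.cfg dropped into the ESP.
insmod http
net_bootp                                    # bring up the NIC via DHCP
# GRUB2's device syntax accepts any supported device, including http:
chainloader (http,192.0.2.1)/vuln-shim.efi   # fetch a vulnerable shim over HTTP
boot
```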
3. An adjacent attacker with no privileges in a man-in-the-middle (MitM) position could leverage the issue against a victim machine that uses PXE boot. PXE is separate from HTTP boot, but similar to the local vector, an attacker can chain together shim (via PXE)->GRUB2 (via PXE)->shim (via HTTP).
Yes, if the attacker can edit the victim machine's EFI vars or the contents of its ESP, then they can make the victim machine use HTTP boot even if the victim machine didn't use HTTP boot originally. However at that point they can also wreak more havoc without involving HTTP boot.
For the case where the default configuration has been set up to just chainload grub (ie what distros use shim for), and where an attacker editing EFI vars / ESP is not in the threat model, there is no concern. Yes that is just "It's not a concern because you defined it to not be." but that is the reality for most users of Secure Boot on Linux.
Also note that the reason I wrote that paragraph is because the HN submission was originally submitted with a title along the lines of "Every install of shim is affected".
> Yes, if the attacker can edit the victim machine's EFI vars or the contents of its ESP, then they can make the victim machine use HTTP boot even if the victim machine didn't use HTTP boot originally. However at that point they can also wreak more havoc without involving HTTP boot.
How? The whole point of secure boot is that an attacker with even that level of access can't boot the machine in an authenticated way (and e.g. make the disk encryption key available).
Someone with enough privileged access to write to the ESP (ie root) can also add their own MOK to the ESP that the user might blindly accept next time they boot. Especially if they time it for when there is a legitimate new MOK in the ESP waiting to be accepted on next boot, so that the user is predisposed to accepting a new key.
They can also replace shim with other binaries with other vulnerabilities that were signed by the MS key in the past, in case DBX hasn't been updated with their hashes.
>The whole point of secure boot is that an attacker with even that level of access can't boot the machine in an authenticated way (and e.g. make the disk encryption key available).
Someone with enough privileged access to write to the ESP (ie root) can also just exfiltrate your disk contents at that point.
> Someone with enough privileged access to write to the ESP (ie root) can also add their own MOK to the ESP that the user might blindly accept next time they boot. Especially if they time it for when there is a legitimate new MOK in the ESP waiting to be accepted on next boot, so that the user is predisposed to accepting a new key.
> They can also replace shim with other binaries with other vulnerabilities that were signed by the MS key in the past, in case DBX hasn't been updated with their hashes.
Neither of those sounds like a sure thing. The first relies on the user not checking the key, and is exposing the attacker to a lot of risk if they do. The second relies on DBX not being updated, for which the remedy is "don't do that".
> Someone with enough privileged access to write to the ESP (ie root) can also just exfiltrate your disk contents at that point.
The idea is that your main data partition is encrypted with a key held in a secure enclave and can only be retrieved after a secure boot. (Or, y'know, any of the other things people would use secure boot for). Your boot partition has to be unencrypted so you can boot from it, but there's no sensitive data on there, and an attacker with write access can't "rootkit" it because if they replace the bootloader with a different one then it will be unsigned and break the chain of trust. Again if this stuff didn't work then there would be no point in secure boot at all.
>The second relies on DBX not being updated, for which the remedy is "don't do that".
Not updating DBX is the default state. Updating it is what requires effort.
How many devices actually have up-to-date DBX? I know I mentioned LVFS in my first comment, but I have to wonder how many Linux devices with SB enabled actually use it. The ones that don't will not have updated their DBX since they were manufactured.
>The idea is that your main data partition is encrypted with a key held in a secure enclave [...]
You're missing the point. An attacker that can write to the ESP is root on the live system right now. It can exfiltrate the contents of `/` right now. Or if it can't exfiltrate right now, it can install an OS service to do that on future boots.
If the boot partition isn't encrypted, doesn't this mean an attacker with physical access to the machine can remove the drive, plug it into their own machine, overwrite the boot partition, then restore the drive back in the original machine? In that scenario they don't have access to the unencrypted root filesystem.
You can set up 'measured boot' so the TPM will only 'unseal' the disk encryption password if you're running a certain version of your BIOS, a certain set of Machine Owner Keys, a certain version of shim, a certain kernel, a certain kernel command line and so on.
Very few normal users do this because it's a great deal of effort/risk for very modest security improvements. But the option is present - it's sometimes used by big corporations making TiVo-style products to lock out the owners from messing with the hard disk in the manner you've described.
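The measured-boot mechanism described above can be sketched in a few lines of Python. A TPM PCR starts at zero and can only be *extended*: new_pcr = SHA-256(old_pcr || SHA-256(measurement)). The TPM releases the sealed disk key only if the final PCR value matches the one recorded at seal time, so changing any component in the chain changes the result (the component names below are illustrative):

```python
import hashlib

PCR_SIZE = 32  # SHA-256 PCR bank

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the only way to change a PCR is to hash into it."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = bytes(PCR_SIZE)  # PCRs reset to all zeroes at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_chain = [b"firmware v1.2", b"shim 15.8", b"grub 2.12", b"vmlinuz-6.6"]
sealed_against = measure_boot(good_chain)

# Same chain -> same PCR -> the TPM would unseal the disk key.
assert measure_boot(good_chain) == sealed_against

# Swap in a tampered bootloader -> different PCR -> unseal fails.
evil_chain = [b"firmware v1.2", b"shim 15.8", b"evil-grub", b"vmlinuz-6.6"]
assert measure_boot(evil_chain) != sealed_against
```
Because extend is one-way, an attacker who boots tampered components can't rewind the PCR to the "good" value afterwards.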
> Not updating DBX is the default state. Updating it is what requires effort.
Up to a point, but that's true for almost everything in software. Not updating your OS etc. is the default state, and if it's not up to date it will be full of holes. That's life.
> You're missing the point. An attacker that can write to the ESP is root on the live system right now. It can exfiltrate the contents of `/` right now. Or if it can't exfiltrate right now, it can install an OS service to do that on future boots.
If they have root on the live system then they don't need to mess around attacking secure boot at all. The point is "evil maid" style attacks where someone messes with the boot partition (and/or firmware) by booting off another device. Again, this is the whole point of Secure Boot; if you don't care about that kind of scenario then why would you ever be using secure boot at all?
> because MS refuses to sign GPL bootloaders like grub in general
This is because MSFT lawyers think that if Microsoft signs a GPLv3-licensed bootloader like GRUB, recipients might gain the right to compel the signer to hand over their signing keys.
I'm a strong advocate of open source who also happens to agree with Microsoft's lawyers on this. Whether you believe this consequence of GPLv3 is intentional or unintentional, it is a consequence. The idea of somehow putting the blame back on Microsoft for this is similarly twisted.
Vendors shipping absolute shit implementation of SecureBoot that don't give the users authority over their own systems are the problem.
Let's revisit this topic in the future to see how many systems actually get patched to revoke these signatures. My guess is that nearly 100% of shipping systems 1 year from now will still have these keys and still boot these vulnerable signed binaries right out of the box.
One might hope that this issue thrusts the theater of Secure Boot into the public discourse, but like most other forms of irrelevant DRM, it will simply remain a hurdle that everyone has to keep jumping over until the end of time.
> Vendors shipping absolute shit implementation of SecureBoot that don't give the users authority over their own systems are the problem.
Microsoft itself has been the _first_ vendor to ship systems where SecureBoot could neither be disabled nor the whitelist of signatures/keys replaced with your own. This would be _the only_ scenario where the GPLv3 would have anything to say ... if Microsoft were also shipping GRUB on that system, which they weren't.
And such a scenario is precisely one of those the GPLv3 was designed to impede. So it is most definitely intentional.
> Microsoft itself has been the _first_ vendor who shipped systems where SecureBoot could neither be disabled nor the whitelist of signatures/keys replaced with your own.
It’s misleading to mention that but not say that you’re referring to Surface RT tablets, which were Microsoft’s equivalent of an iPad ecosystem — only Store apps, OEM OS only, etc. It’s also running a different flavor of Windows on an ARM processor.
Surface Pro devices have always had toggleable & configurable Secure Boot.
Why misleading? It shows exactly what the long-term MS strategy was, what SecureBoot was designed for, and it also shows exactly who shipped the first "shitty" SecureBoot implementation. People had to resort to cracks in order to run plain Win32 apps! Except, of course, Office. That one is the only Win32 app which was allowed. Good ol', classy Microsoft.
> Surface Pro devices have always had toggleable & configurable Secure Boot.
Not at all. While it is true that Secure Boot can be disabled in the "Pro" family, disabling it results in Scary Boot Prompts (TM) (a _permanent_, literal red screen warning during the boot process that would drive most users away). _To this day_, there is no way to install your own Secure Boot keys in the Surface UEFI setup. Again, one of the "shitty" implementations, and it comes from Microsoft!
It's actually worse than that. At least the first two Surface Pro iterations didn't even ship with the UEFI CA key, meaning you could not even install MS-signed Linux distros! The only way was to disable Secure Boot, and thus suffer the red Scary Boot Prompts (TM) on every boot. Again, MS leading the way for the other OEMs in terms of shittiness.
Almost two years afterwards, MS started shipping a WMI-based method that would allow you to install the MS UEFI CA. So you had to install Windows and run a Windows program in order to be able to install an OS signed with your own keys. This is _still true_ even in the latest iteration of Surface Pro devices. This is the example that MS sets for other OEMs.
And almost 10 years later, Lenovo starts disabling the UEFI MS CA key by default....
You seem to still have an impression that “secure” here is somehow related to “security” of the users or their “authority over their own systems”, when in fact it only means “securing” the position of the company that gets to control the thing. Even if Microsoft fires everyone tomorrow and closes, Secure Boot key infrastructure and agreements will still cost a lot. Just like patent pools of former industrial giants that don't just disappear.
In the same manner, insane DRM schemes are not made to be solid and unbreakable, or because sellers are worried about home copies (made by people who wouldn't buy the product anyway). DRM is just a convoluted enough measure to protect corporations from their fellow corporations who would gladly make the same thing, but more convenient, or cheaper, or more profitable, and ride the "piracy" wave. In the same manner, Google didn't shove HTTPS into everything because it was worried about the users, but because it was worried about other companies with massive traffic-spying abilities grabbing the data Google collected for itself. Et cetera, et cetera.
Couldn't they release a signature separate from the thing they signed? Leaving it as work that the open source world has to do to combine them into the derivative work? Surely an attestation/signature would be a fair use derivative work?
Suppose I write some GPLv3 code. You incorporate that code into a program called shim-gpl3 and release the result under the GPLv3. You also release a binary. Microsoft signs that binary and releases the signature but does not redistribute the binary. Microsoft says that they don't think that the signature is a copyrightable work but that, to the extent that it is, it is permissively licensed (CC0, MIT, whatever). A Linux distribution builds an installer that contains shim-gpl3 and Microsoft's signature. The recipient of the distribution asks the distributor for source to the GPLv3 parts of their distro.
The problem (as I see it) is that the distributor cannot comply. This is as intended! The media, as is, runs on an effectively TiVo-ized machine, and neither the distributor nor the end-user can build it from source such that the result works if modifications are applied.
If I publicly attest to something, and provide cryptographic proof that you can verify the version you are looking at is the one that I attest to, then I do not see how that is meaningfully a derivative work of what we are attesting. It is not substantial enough in size or form for it to be a derivative work. So, from that end, it seems we agree that the signature itself is not a problem.
Now, it seems what you are implying is that they need a new signature that must be created at build time. I'm asking for a much more restrictive bootloader that would have to be buildable with a repeatable process so that someone else can create the same one down to the bit level. Is this as useful as one that is not as restricted? No, but it would be a way past the legal problem?
> I'm asking for a much more restrictive bootloader that would have to be buildable with a repeatable process so that someone else can create the same one down to the bit level.
Reproducible builds are fantastic. But I doubt that, in this instance, it would be a valid way around the GPL.
Maybe if one, as an art project, made a project that, when compiled, had a particularly aesthetically pleasing binary representation, it would be okay. But we’re talking specifically about a signature needed to run the software, which appears to fit the definition of “Installation Information” in section 6 of the GPLv3.
My point is the signature can be combined after the fact. Basically, I can give you a binary that is the signature for another binary that is the boot loader, built to certain specifications. You build that, combine it with this signature, and you are good to go. If the build doesn't build exactly the same, the signature will be invalid and you can't use it.
Note that this is VERY limited in how much it could possibly help. But I don't see why it wouldn't be permissible by all licenses involved. It doesn't require the signing key, but a valid signature. For that device, it is locked down enough that the only installer that can work is the one that matches this signature. If installing somewhere else, you don't need the signature anymore, and can make your own or use an unsigned loader.
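The scheme sketched above — distributing a detached signature and letting anyone rebuild the exact binary it matches — can be modeled in a few lines (using keyed SHA-256 digests as stand-ins for a real RSA/Authenticode signature, which covers a PE image in a more involved way):

```python
import hashlib

def build(source: bytes) -> bytes:
    """Stand-in for a reproducible build: same input -> bit-identical output."""
    return b"BINARY:" + hashlib.sha256(source).digest()

def sign(binary: bytes) -> bytes:
    # Stand-in for the vendor's signature over the exact binary bytes.
    return hashlib.sha256(b"vendor-secret" + binary).digest()

def verify(binary: bytes, signature: bytes) -> bool:
    return sign(binary) == signature

source = b"bootloader-1.0 source tree"
detached_sig = sign(build(source))   # the vendor publishes only this

# Anyone rebuilding from the same source gets a binary the signature matches.
assert verify(build(source), detached_sig)

# A modified build no longer matches -- you'd need a fresh signature.
assert not verify(build(source + b" patch"), detached_sig)
```
The last assertion is the crux of the GPLv3 debate: the detached signature is only useful for the one unmodified build, so it does nothing to restore the user's ability to run modified code.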
> The idea of somehow putting blame back on Microsoft for this is similarly twisted.
Microsoft are responsible for pushing secure boot on PCs in the first place. They are absolutely to blame for all problems it causes for regular users that did not explicitly seek a computer with secure boot.
> Vendors shipping absolute shit implementation of SecureBoot that don't give the users authority over their own systems are the problem.
If this were true, it probably applies to both GPLv2 and v3, since they both require installation instructions.
v3 adds an additional requirement that if you mere-aggregate GPL software and other software, and put the aggregate into a consumer product, you can't disable the other software because the GPL software was modified. I don't believe this would apply in the situation that these shims are intended for - i.e. installing an OS onto a desktop machine, as those are supposed to allow you to enroll new Secure Boot keys anyway.
I also don't understand why a signature from Microsoft would actually trip any of the above-mentioned requirements. Microsoft is not distributing the software, they're just signing it - i.e. distributing a message saying "this shim with this hash can run". That's not a copyright violation, so the GPL doesn't come into play at all.
Perhaps there's something else Microsoft does with their signatures that might trip GPL...
Maybe Microsoft just doesn’t want to sign a GPL bootloader and so they had their lawyers come up with nebulous concerns as a fig leaf. Microsoft <3 Open Source after all, can’t be seen as reluctant for business reasons.
To my knowledge, they aren't the ones signing the software. Microsoft runs a UEFI CA and just bless the developer's sub-CA. The developer is fully in control of their own CA and could sign whatever they want.
No. I don't know if Microsoft "partners" have some special privilege, but for 3rd parties Microsoft will _never_ sign your CA. They sign individual binaries.
Note that under no circumstances can it "compel" you to hand over the signing keys. It can compel you to _stop infringing_, though. Which you may do by either ceasing to redistribute GRUB, or by allowing your customers to install their own signatures, or, if you are really stupid, by giving out your private keys. But you cannot be forced to do the latter.
I wonder if this is a consequence of the judicial system in the US. In Europe there is something called promesse de porte-fort (a promise that a third party will perform), and Microsoft is not a party to the licence agreement, I would say. Some effects of the GPL (or rather of certain interpretations) are just plain unenforceable. I say this while not questioning copyleft in general.
> FYI the reason is that the GPL would require you to hand over the signing keys. If you sign a GPLv3 licensed bootloader like GRUB, you're forced to also make public the signing key... which defeats the purpose.
That seems to be a ridiculous interpretation of the GPL (no personal offense meant, I'm sure you're a fine person but this idea is genuinely very silly).
Signatures on binaries are not part of the source code, and therefore the GPL isn't concerned with them.
Think of:
- every signed binary for an open source app on a Linux distro or Windows install
- Every open source web app delivered over a TLS cert
Nobody has ever considered that the private keys used to sign the package, executable or web site cert to be required by the GPL.
>Tivoization is a dangerous attempt to curtail users' freedom: the right to modify your software will become meaningless if none of your computers let you do it. GPLv3 stops tivoization by requiring the distributor to provide you with whatever information or data is necessary to install modified software on the device. This may be as simple as a set of instructions, or it may include special data such as cryptographic keys or information about how to bypass an integrity check in the hardware. It will depend on how the hardware was designed—but no matter what information you need, you must be able to get it.
>This requirement is limited in scope. Distributors are still allowed to use cryptographic keys for any purpose, and they'll only be required to disclose a key if you need it to modify GPLed software on the device they gave you.
> Code submitted for UEFI signing must not be subject to GPLv3 or any license that purports to give someone the right to demand authorization keys to be able to install modified forms of the code on a device. Code that is subject to such a license that has already been signed might have that signature revoked. For example, GRUB 2 is licensed under GPLv3 and will not be signed.
> any license that purports to give someone the right to demand authorization keys to be able to install modified forms of the code on a device
There's nowhere in the GPLv3 that says that (then again, that sentence doesn't imply there is). Anyone can download and modify the grub source code, and compile it. It doesn't mean Microsoft is obliged to sign their fork. I wonder if there's something that confused (or was bad faith interpreted) by Microsoft's lawyers?
Maybe someone can just use another open source license? Ie, anything not called 'GPLv3' which, like every other OSS license, does not purport to give someone the right to demand authorization keys to be able to install modified forms of the code on a device.
> There's nowhere in the GPLv3 that says that (then again, that sentence doesn't imply there is)
It does, commonly called the anti-tivoization clause. Here's the text:
> “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
> If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
> The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
> Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
From GPLv3, section 6. Conveying Non-Source Forms.
> [...] any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work [...]
From what I understand, allowing the user to install their own trusted keys is enough; there's no need to allow users to sign with the "official" trusted key as long as that alternative exists.
I suspect that Microsoft's worry is that they don't control the firmware (the motherboard manufacturers do), and some UEFI firmware might be broken and not allow installing alternative trusted keys. I believe shim solves this by adding an intermediate layer which also allows the user to manually install their own trusted keys, even if the firmware doesn't.
They're probably referring to this part of the GPLv3 license:
> “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
> If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
I believe the MSFT lawyers are right here, but obviously I'm not a lawyer.
> The information (ie 'methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work') must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
The purpose of allowing signing is exactly to ensure modified code does not function in that environment.
Yes MS (and you) would be right. GPLv2, BSD or MIT would be fine but GPLv3 would not.
-----------------------------------
Edit reply to baobun (rate limit means I can't post)
I did think about that, but:
> continued functioning of the modified object code
is vague. Does it mean a new firmware binary should continue working on the same device as the unmodified predecessor, or is using a new device (one that allows arbitrary entities to sign) also considered "continued functioning"?
Not exactly. AshamedCaptain gets it. If the device can be configured to trust a user-supplied key instead of or in addition to Microsoft's, there is no need to provide any keys. This is up to the hardware supplier.
Actually, if you ask the Software Freedom Conservancy, not only is this a requirement of the GPLv3, it is in fact also required by the GPLv2 [0]:
> GPLv2 assures, to the purchaser of an embedded product, their absolute right to receive the information necessary to install a modified version of the GPLv2'd works. [...] installation of the GPL'd works must succeed and operate in a useful and functional fashion on the device.
I believe the FSF position is different, but it's unclear. At least one member of the SFC who was a member of the FSF in the past (Bradley Kuhn) [1] believes that the FSF has the same position - that both GPLv2 software and GPLv3 software must be accompanied by full instructions (and ability) for a user to install their own modified versions on any hardware containing this software. He believes that the only difference with GPLv3 is that, in addition to this requirement, other proprietary software on the device must continue to operate just as before; whereas, with the GPLv2, it's fine for other proprietary software to say "you're running an unsigned version of Linux, failing to start". Linus Torvalds' public statements about the GPLv3, as well as the FSF page on Tivoization [2], seem to suggest otherwise.
Either way, this is not some invention of MS lawyers meant to make the GPL look scarier, it is very much how the free software community sees the GPL working.
So... really no. The overall spirit of the GPLv3 text is that the license demands preservation of the ability of an end user to modify and run the software they receive, see all the discussion at the time it was being drafted about "TiVoization". It does not specifically speak to encryption and signatures being used by the loader environment, but it's absolutely reasonable to interpret it that way.
If I buy a laptop from Lenovo, and it uses a signed GPLv3 bootloader, and I want to modify and run a new version of that bootloader, I'm prevented by the lack of a signature. That action is exactly what the license demands Lenovo permit.
Now, obviously this collides badly with the security requirements of the system. So it's likewise totally reasonable to imagine a court flipping the other way and refusing to literally enforce the GPLv3 because of the potential damage to the market. But if that happens, then what gives Lenovo any right to ship that GPLv3 software at all? Now they're subject to the bootloader authors demanding they stop shipment of the infringing product.
No, the permissively-licensed shim compromise was the right choice.
> No, the permissively-licensed shim compromise was the right choice.
Why? Why on earth?
IF Lenovo wanted to really ship a GPLv3 bootloader, all it imposes on Lenovo is they ship a method to install your own bootloader keys. Do you really disagree with that?
Lenovo has already shipped hardware which requires magical incantations to install anything other than Windows (see Secured Core hardware). Why anyone shipping FLOSS software would drop their pants to allow precisely this shitshow is beyond me.
I'm not the person you should be arguing with. I didn't write the license and I'm not the judge that would have to make the decision. I was explaining why the collision between software freedom and end user security requirements (which is a very real thing!) is seen as a risk to the people shipping and supporting actual hardware.
There is no collision _whatsoever_ between software freedom and security requirements. It's an illusion which happens to be highly convenient to those who want to promote an artificially controlled market and sell that as snake-oil security.
As a junior IT person long ago we used remote image booting in our lab to quickly test new software builds on real hardware. Back then it involved janky TFTP.
I'm curious how shim avoids the issue Microsoft is concerned about. Presumably if the anti-Tivoization clause of GPLv3 would require a Secure Boot signing key to be provided on request, it also requires a MOK signing key to be provided on request. The net result, then, seems to be that any arbitrary party can obtain signing keys which allow them to sign arbitrary code which will be booted (indirectly) via Secure Boot. How is that a meaningfully different outcome from the one we would have if Microsoft simply issued signing keys for GPLv3 projects like grub?
MS doesn't "issue" signing keys. MS has a signing key that it uses to sign bootloaders, and which is the default key in every UEFI that wants to be able to boot Windows with Secure Boot enabled. (*)
Their argument is that if they signed a particular distro's GPL-licensed binary, then any user of that binary could ask for the source needed to regenerate it, and for completeness that would have to include the signing key, since without it the rebuilt binary wouldn't boot.
shim is MIT-licensed so that requirement does not apply.
(*) To be precise, the key used to sign Windows and the key used to sign the rest are different, but both are enabled by default. That said, in 2022 there was talk about some UEFIs disabling that latter key by default: https://news.ycombinator.com/item?id=32066919
> I'm curious how shim avoids the issue Microsoft is concerned about.
IIRC, shim has a way to allow someone physically present to add additional keys which will also be trusted by that instance of shim. That's enough to satisfy the GPL requirements.
Yes, that is how the MOK enrolment process works. The distro drops its MOK (the one used to sign the kernel in the distro's kernel package) into the ESP. On the next boot shim notices the new key and asks the user whether they want to enroll it. Similarly, if the user wants to enroll additional keys for kernel modules they built themselves or got from somewhere else, they can go through the same process.
>However, usually you use it to boot a local second-stage bootloader like grub (hence the name "shim")
Would Windows Boot Manager not serve that purpose also? I have WBM chainloading into GRUB on one of my older BIOS machines, but I never got around to doing that on a UEFI machine so I'm not aware of any showstoppers.
Windows is a showstopper itself. It might play nicely sometimes but it isn't a workable solution. I can no longer ever try dual booting because it doesn't respect anything but itself. It'll just decide to get rid of other bootloaders at its own discretion.
I'm talking about dumping the subsequent bootloaders such as GRUB to a file located in a Windows-accessible location, and then adding a menu entry for Windows Boot Manager to chainload into it.
As far as I know, Windows doesn't just overwrite the BCD like that unless you explicitly tell it to.
Also, obviously the assumption is Windows isn't a showstopper such as in my case. I'm inquiring if there are any practical or technical concerns that would preclude using WBM as a shim.
> When you tell shim what EFI binary to boot, you can specify it as an HTTP URL. If you do this and the HTTP server is malicious, it can trigger an out-of-bounds write.
Can a malicious server just send a compromised binary directly?
For anyone wondering "Why would you boot from an untrusted/compromised server?" or "Isn't this moot? If the server is compromised, it can just send a malicious binary." the short answer is:
The binary that the shim ends up booting has to be signed by the MOK. So it should continue to provide the same security guarantees whether you're booting from within a compromised network, over HTTP, or if you're booting from a compromised server (even if it's over HTTPS).
It's also important to note that secure boot doesn't prevent downgrades, so the fact that a compromised server could be utilised in a downgrade attack is not relevant. You would need to implement downgrade attack prevention in a more robust way regardless.
That being said, I don't know _why_ the shim needs to support http boot (after all, nothing stops this from being implemented as second local EFI binary which handles it and is signed by the MOK), aside from maybe that it was thought to be a relatively simple feature to implement.
Any idea why this code is using two measures of the body at all?
Per the RFC, in HTTP/1.1, the Content-Length is the authoritative information for how long the HTTP request/response body is. Any data on the wire past the content length is part of another message, by definition. Conversely, if Content-Length is larger than rx_message.BodyLength, that means that we didn't yet receive the whole message, so we should either wait more, or issue a timeout error. Either way, rx_message.BodyLength is a bogus value if it's not guaranteed to be equal to Content-Length.
On the other hand, if you want to be more permissive, then why look at the Content-Length header at all? Just use rx_message.BodyLength as the buffer size and try to interpret all the data on the wire as the received message.
The current version of the code is needlessly complex, and that is how bugs like this make it in.
Looking at the commit in isolation can be somewhat misleading. If you look at the surrounding code https://github.com/rhboot/shim/blob/0226b56513b2b8bd5fd281bc..., it's receiving chunks of data in a loop, checking to make sure new data does not exceed the buffer capacity (determined by Content-Length) each time. However, previously it wasn't checking that on the very first read outside the loop, hence the bug.
I don't see any final check on downloaded == *buf_size (Content-Length) though. Violating that could indicate premature connection close.
Oh, thanks for that extra context. I did look a bit further down, but didn't see the loop... This makes much more sense.
Looking at the loop though, I think it does ensure that downloaded == *buf_size: the loop continues while downloaded < *buf_size, and there is a different check for not overflowing. So, overall, downloaded must be exactly equal to *buf_size to exit the loop without an error. So, this seems OK overall.
I have to say this is definitely a bug and I'm glad it's fixed ... but what kind of psycho boots their machine from an untrusted host? If the attacker controls the http service tightly enough to send malicious headers, avoiding this overflow is the least of your problems, since they have compromised your certs and also can just send a compliant payload with more malware on it.
The same kind of psycho who promotes Secure Boot. They _really_ believe in a security strategy which _requires_ that everything which could potentially be compromised not be signed by Secure Boot. The moment you have something signed which has a vulnerability, you can use it to decrypt everyone's SecureBoot+TPM encrypted hard drives.
Why that was ever considered a valid security approach is beyond me, as there is an entire gallery of vulnerabilities like this one. It also completely ignores the elephant in the room called Windows.
Note that despite Linus' reticence, in a lot of distros integrity mode is indeed enabled when you boot with Secure Boot on. Likely because of MS politics, and distros wanting an MS UEFI signature being forced to "go through the hoops" as explained in that thread. As a result, enabling Secure Boot usually cripples your distro, preventing you from e.g. hibernating.
The S in HTTPS is to be used when important things are happening. This includes booting your device. HTTPS headers have always been encrypted. Still a good catch.
The flaw here is only exploitable by (1) a malicious server, which could anyway just send a malicious binary, no need for header shenanigans, or (2) a MITM. Case 1 is moot, case 2 would be prevented by properly implemented HTTPS.
On the other hand, I don't think it's practical to actually implement HTTPS properly in UEFI, since you'd have to constantly update the trust store, and you'd have to have actual internet access to be able to check the certificate revocation lists (otherwise, you are vulnerable to surreptitious malicious activity from otherwise trusted CAs).
> Case 1 is moot, case 2 would be prevented by properly implemented HTTPS.
That's not true. It's significantly easier to ensure the security of an offline signing key than it is to ensure that an arbitrary HTTPS server avoids ever becoming compromised.
Are you sure? It's pretty common for small tools like this to skip HTTPS certificate verification because of space constraints (typical CA trust root collections are around 100 KB in size). Since shim already does signature verification of the downloaded file, HTTPS verification is usually redundant. If the HTTPS certificate verification is skipped, then MITM of the HTTPS connection is trivial.
I took a quick look at the code and I'm not seeing the usual steps for certificate management, although I may have missed it.
I'm not sure HTTPS is an option for this, as it requires an accurate time/date for certificate validation. Maybe the RTC could be valid, but I'm not sure it handles different time zones well, and it might have lost time anyway.
I would classify something that doesn't care what century it's booting into as an unimportant device. If the clock is off by a lot, it still should not boot over network.
But do these builds of shim include httpboot? AFAIK shim is just there to execute another EFI binary on the disk. I don't think I've ever seen the netboot functionality of shim get used.
Maybe I'm not well versed enough, but I thought most HTTP clients read at most the number of bytes Content-Length specifies, and consider it an error if the bytes read are fewer than Content-Length.
The HTTP client is provided as an EFI driver by the UEFI. AFAICT the UEFI spec doesn't specifically say what the behavior should be if the content-length header doesn't match the response body length, so it might very well be possible that some implementations just make `connection:close` requests and don't check the content length.
The vulnerability was reported by MSRC and none of the text about the CVE mentions an actual exploit. It might be revealed later, or it might just be theoretical.
The HTTP spec does say though: in HTTP/1.1, the HTTP body length is the value of the content-length header. Anything else coming on the wire is part of the next HTTP request/response (which may of course be invalid). Reading everything from the stream until the connection is closed is HTTP/1.0 behavior.
Of course, if the UEFI HTTP client is not correctly implementing the spec, it's up to shim to defend itself, so I'm not saying this change is wrong or unnecessary.
> When retrieving files via HTTP or related protocols, shim attempts to
> allocate a buffer to store the received data. Unfortunately, this means
> getting the size from an HTTP header, which can be manipulated to
> specify a size that's smaller than the received data. In this case, the
> code accidentally uses the header for the allocation but the protocol
> metadata to copy it from the rx buffer, resulting in an out-of-bounds
> write.
From the description, it allocates a buffer based on Content-Length, but copies however many bytes were actually received, thus writing out of bounds of the allocation.
> Critical bug that exists in every Linux boot loader signed in the past decade
Huh? This is a bug in a single bootloader. It has nothing to do with other bootloaders. And while on the subject, consider whether you actually need a bootloader at all.
Indeed. The header reads to me as “the server says this content length will be…” so I have always taken precautions with it. Servers can be wrong as well as malicious.
Yep I’m surprised this made it through even basic code review. It makes it seem like this code was written by a junior dev without any code review. Accidents happen; it’s human and understandable; but wowzer this one seems like a beginner level mistake.
Fuck "secure" boot. The fact that it's given one megacorp a monopoly over "security" should make you think twice about who actually benefits.
The apologists will always say "but you can turn it off" "use your own keys" etc., yet M$ lost an antitrust suit over merely having IE bundled with the OS.
It's sad to see the once-open PC platform slowly turn into a walled garden.
I pray that there are more "bugs" which remain undiscovered.