Hacker News
Bootkitty: Analyzing the first UEFI bootkit for Linux (welivesecurity.com)
183 points by doener 10 days ago | 82 comments





The researchers are keen to point things out about this, but they likely also want to avoid giving attackers "more ideas", which I feel limits the discussion. Plus, I highly doubt these attackers don't already know everything we should be discussing.

This is obviously low-hanging fruit and a first PoC implementation. The fact that Secure Boot can "mitigate" some of this attack right now is mostly due to the attacker being lazy or deploying an unfinished product. The researchers describe this as "unless they install the attackers certificate", which is a nice way of saying that the attacker has not spent much time fishing through DKMS and abusing the keys used for this purpose.

There are a lot of systems that are affected by this type of attack because for various purposes they have to sign their own modules. The most common example of this (until extremely recently, sort of) is Nvidia.


>The fact that secure boot can "mitigate" some of this attack right now is mostly due to the attacker being lazy or deploying an unfinished product. The researchers describe this as "unless they install the attackers certificate", which is a nice way of saying that the attacker has not spent much time fishing through DKMS and abusing the keys used for this purpose.

Can you explain, or link to a source explaining this?


If you can add keys and sign things on the fly, Secure Boot doesn't matter. It only protects you from payloads further down the chain. If the layer above the one that cares about Secure Boot is compromised, it's useless. You're confused because it's sold differently from this.

That all makes sense, but is it really that easy for an attacker to add keys? If so, the entire thing seems a little silly.

Attackers don't need to add their own keys. They can piggyback off of the key that you added legitimately to get DKMS working.

What fraction of Linux laptop users do this?

Seems to me that in an ideal world, you would only have to add the public key, and an attacker wouldn't be able to forge a signature without the private key...


The point of DKMS is to compile kernel modules on the same host where they'll be used, so it needs the private key to be accessible. And isn't DKMS a rather common thing on Linux, e.g., for Nvidia drivers and for VirtualBox?
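The asymmetry being discussed here can be demonstrated with plain openssl: verifying a signature needs only the public certificate, but producing one needs the private key, which DKMS-style setups keep on local disk. This is a simplified sketch with made-up file names, not the real kernel module signing flow (which uses the kernel's scripts/sign-file and CMS signatures):

```shell
# Generate a self-signed key pair, roughly what MOK enrollment setups do:
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local DKMS signing key" -keyout MOK.key -out MOK.crt

# "Sign" a stand-in for a kernel module with the PRIVATE key:
echo "pretend this is a kernel module" > fake_module.ko
openssl dgst -sha256 -sign MOK.key -out fake_module.sig fake_module.ko

# Verification needs only the PUBLIC half:
openssl x509 -in MOK.crt -pubkey -noout > MOK.pub
openssl dgst -sha256 -verify MOK.pub -signature fake_module.sig fake_module.ko
# prints: Verified OK
```

The point: anything with root access to the machine that runs the first three commands can also read MOK.key and sign whatever it likes.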

On Arch most DKMS packages have a separate package that is compiled directly against the stable kernel (and some against the LTS kernel). IIRC none of them support loading with Secure Boot, though, since the keys embedded in the kernel for other modules are discarded after the kernel build.

This is to say, it's not impossible for those to be signed by the distro. Arch just doesn't.


I keep my fingers crossed for Arch and Valve cooperation: https://lists.archlinux.org/archives/list/arch-dev-public@li...

> Arch

Check out "sbctl" by Foxboron, it's a UEFI key and signature manager [1] that's pretty nice.

But other than that I agree with you there; I wish that upstream kernel builds would be signed by the distro for Secure Boot usage. Maybe this should be part of the archlinux-keyring package?

[1] https://github.com/foxboron/sbctl
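For anyone unfamiliar, the typical sbctl flow looks roughly like this (command names are from the sbctl README; requires root and firmware in setup mode, and exact behavior may vary by version):

```shell
sbctl status                      # show Secure Boot / setup-mode state
sbctl create-keys                 # generate your own PK/KEK/db keys locally
sbctl enroll-keys -m              # enroll them, keeping Microsoft's keys (-m)
sbctl sign -s /boot/vmlinuz-linux # sign the kernel; -s saves it for re-signing
sbctl verify                      # check that everything in the ESP is signed
```

Note that this stores the private keys on the local disk, which is exactly the trade-off debated elsewhere in this thread.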


But that relies on having the private key available locally, so it doesn't help with the scenario discussed here. Ideally, you'd want to sign the image on a different machine than the one booting it.

True, but that also kind of requires some way of distributing the bootable binaries, e.g. a netboot image served from a TFTP server.

I usually store these keys on a LUKS encrypted flash drive. Not the best opsec, but at least good enough to prevent this kind of malware from spreading around. Can't update the kernel without the flash drive though :D
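As a sketch, that flash-drive workflow might look like the following (device name, mapper name, and paths are hypothetical; sbsign comes from the sbsigntools package):

```shell
# Unlock and mount the LUKS flash drive holding the signing keys:
cryptsetup open /dev/sdX1 sbkeys        # prompts for the passphrase
mount /dev/mapper/sbkeys /mnt/sbkeys

# Sign the freshly updated kernel with keys that otherwise stay offline:
sbsign --key /mnt/sbkeys/db.key --cert /mnt/sbkeys/db.crt \
       --output /boot/vmlinuz-linux.signed /boot/vmlinuz-linux

# Lock the keys away again:
umount /mnt/sbkeys
cryptsetup close sbkeys
```

The keys are only exposed while the drive is unlocked, which narrows (but, as noted below, does not close) the window for malware already running as root.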


> I usually store these keys on a LUKS encrypted flash drive. Not the best opsec

Why would it not be the best opsec?

I replied to your other comment suggesting encrypting your local signing keys. I am not sure I would use a flash drive though; why not just use the local disk?


If you have malware running on your system, couldn't it inject its bootkit code into whatever you're about to sign?

I haven't looked into the tooling much, but does it at least support pkcs11? That way you'd at least be able to store the key on a smart card or Yubikey.

I don't know. I actually asked myself this very thing while typing the above comment, but I'm too busy/lazy to look it up.

One issue I can see with this, though, is that if the malware is already present on your system and can run things, nothing would prevent it from hijacking the modules or the boot image before they're signed.


Yes. Edit /etc/dkms/framework.conf, set mok_signing_key to something like "pkcs11:id=%01", and mok_certificate to point to a file containing the certificate. You can extract the certificate using eg "pkcs11-tool -r -y cert -d 01 > .../cert.der".
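Putting the parent's settings together, /etc/dkms/framework.conf would contain something like this (the token id and certificate path are examples, and PKCS#11 URI support depends on your DKMS version):

```shell
# /etc/dkms/framework.conf -- point DKMS at a key held on a PKCS#11 token,
# so the private key never touches the filesystem.
mok_signing_key="pkcs11:id=%01"
mok_certificate="/etc/dkms/mok.der"
```

The certificate file itself can be extracted from the token as the parent describes, e.g. `pkcs11-tool -r -y cert -d 01 > /etc/dkms/mok.der`.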

Unfortunately, using your own keys is a massive pain because it involves the command line. Nobody has made good user-friendly tooling for it yet; the systemd tooling has improved things a lot, but it's not yet in a place where it can be part of a normal install wizard.

It's kinda ridiculous reading the comments in here.

This is a persistence-stage exploit mechanism: to install it, privilege escalation must already have happened, and the attacker already has root.

To the people here claiming "Secure Boot prevented that": no, it didn't. Only a simple call to sbctl to sign the rootkit is missing, because, as on every Linux machine, the signature keys have to be available locally. Otherwise you could never update your kernel.

That is the conceptual issue that cannot be fixed, not even with a TPM or whatever obscurity mechanism sits in between.

Linux needs to be a rootless system, meaning there needs to be a read-only partition that even root can never write to. That would limit this kind of attack to physical access or the kernel ring at the very least. Technically, this was the intent of efivarfs, but look at where vendor firmware bugs got us with that.


> To the people here claiming "Secure Boot prevented that": no, it didn't. Only a simple call to sbctl to sign the rootkit is missing, because, as on every Linux machine, the signature keys have to be available locally. Otherwise you could never update your kernel.

The majority of Linux machines out there are running vanilla, distribution-signed kernels. For most people, the only reason to build your own kernel (modules) is Nvidia.


What? The vast majority of Linux systems running in a secure boot world run vendor-signed kernels.

> Technically, this was the intent of efivarfs

As the original author of efivarfs I can absolutely say that this was not the intent


> To the people here claiming "Secure Boot prevented that": no, it didn't. Only a simple call to sbctl to sign the rootkit is missing, because, as on every Linux machine, the signature keys have to be available locally. Otherwise you could never update your kernel.

If, hypothetically, you were using a system without custom keys, e.g. with a third party kernel trusted via the Microsoft / Red Hat shim program, [1] wouldn't you be safe, so long as secure boot was enabled? The bootkit would not be able to sign itself with a trusted key since the private key would never exist on the system to begin with.

Obviously, I'm aware that this approach has other problems and has had vulnerabilities in the past.

[1] https://wiki.ubuntu.com/UEFI/SecureBoot
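In that shim-based scenario you can at least inspect what the firmware and shim currently trust (mokutil is part of the shim ecosystem; output format varies by distro):

```shell
mokutil --sb-state        # e.g. "SecureBoot enabled"
mokutil --list-enrolled   # MOK certificates currently trusted by shim
```

An unexpected entry in the enrolled MOK list would be a red flag of exactly the certificate-installation trick the article describes.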


You don't need to do your signing locally, it is possible to build your network around a build machine that does the signing for you. That being said, SecureBoot has always been security theater for anyone that isn't a major OS manufacturer or industry player. The fact is, as soon as cryptography comes into the picture the majority of the computing populace have already left the conversation.

> A simple call to sbctl to sign the rootkit is missing, because, as every Linux device, you will have to have the signature keys available locally.

Now that would be something! Unfortunately, I haven't discovered Microsoft's private key on my computer yet.


Only if you roll your own keys (and MS happens to lose some of theirs, too). But as I replied to the parent, storing them on a FIDO2 device or in an encrypted file would alleviate the issue. If not, please educate me.

> obscurity mechanism

I am wondering: could you store the signing keys on a FIDO2 device? Or in an encrypted file?

I would think this would not be mere obscurity, as this makes sure that just being root does not give you access to the signing keys.


I found this a rather interesting read, nice.

I cannot help but think the move to UEFI and Secure Boot made things less secure :(


What makes you think that? Secure Boot prevents this rootkit from running and is the recommended mitigation:

> Bootkitty is signed by a self-signed certificate, thus is not capable of running on systems with UEFI Secure Boot enabled unless the attackers certificates have been installed.

> To keep your Linux systems safe from such threats, make sure that UEFI Secure Boot is enabled

In fairness, the blog post confusingly says this in the next bullet point:

> Bootkitty is designed to boot the Linux kernel seamlessly, whether UEFI Secure Boot is enabled or not, as it patches, in memory, the necessary functions responsible for integrity verification before GRUB is executed.

However, this would still require Bootkitty to have gained execution already, which it wouldn't be able to do if Secure Boot was enabled and the malicious actor's certificates weren't installed.


Hello, I am the Bootkitty developer. The reason our bootkit is self-signed is that it uses the LogoFAIL vulnerability to register a MOK on the system to bypass Secure Boot, which is why our signature is included. I will leave an analysis article about LogoFAIL at the link below. https://www.binarly.io/blog/logofail-exploited-to-deploy-boo...

Secure boot prevents this proof of concept but it doesn't prevent all UEFI boot kits and this particular kit will likely evolve.

On Windows: it took several years until the first two real UEFI bootkits were discovered in the wild (ESPecter, 2021, ESET; FinSpy bootkit, 2021, Kaspersky), and it took two more years until the infamous BlackLotus, the first UEFI bootkit capable of bypassing UEFI Secure Boot on up-to-date systems, appeared (2023, ESET).

Per article.


It was just a way for Microsoft's partners to limit the ease with which one can install alternative OSes. Try explaining to your mother how to disable SecureBoot to install Ubuntu. It used to be a single sentence - pop the CD in and follow the instructions, but Microsoft couldn't have that. As is always the case with Microsoft, security is never the goal unless they gain a competitive advantage or make it harder for their customers to move away in the process.

> Try explaining to your mother how to disable SecureBoot to install Ubuntu

Good news: you don't need to!


"It was just to keep people from installing something other than Windows" seems very counter-indicated by it taking ~7 years for a Windows UEFI bootkit to come out, and 13 years for one for Linux.

...and this bootkit is not able to work if Secure Boot is set up.

UEFI is also a godsend in terms of fixing a lot of the legacy BIOS crap


> UEFI is also a godsend in terms of fixing a lot of the legacy BIOS crap

In my experience there's a lot more crap in UEFI than there ever was in BIOS, if only because there's so much more of it.


> and this bootkit is not able to work if Secure Boot is set up.

wrong.

> UEFI is also a godsend in terms of fixing a lot of the legacy BIOS crap

that's like saying cutting the baby in half to end the dispute also solved the crying


> UEFI is also a godsend in terms of fixing a lot of the legacy BIOS crap

From a user perspective, no, it is not. Booting is far more complicated with UEFI.


What do you mean? The boot menu now works with the mouse and I can click on the operating system I want to run.

And my bloody computer is potentially trying to make god-blessed network calls before the OS has even loaded, and before my machine even provides the bare minimum human interface, you want me to navigate cryptography?

The trusted computing initiative was a disaster to the learnability of the computing field.

Devs are users too. Especially the unskilled/ignorant ones.


It is a mixed blessing, the ever-upward march of complexity.

Most people are probably glad that their computers have GUIs and that they can install applications with just a few clicks. The price of those features is complexity.

Ultimately, we're all like the Amish: we make decisions about our preferred level of technology. I won't have a smart TV in my house, but I'm happy to have UEFI systems.


I thought you said UEFI.

The article says that Bootkitty does not work if Secure Boot is enabled. How do you figure Secure Boot made things less secure?

Gonna assume it's because you have to disable it to run your operating system of choice, unless you beg Microsoft to let you.

Most distros will run just fine without disabling secure boot. I don't think the *BSDs are supported by the shim loader yet, but even Gentoo boots with secure boot enabled, without loading any user keys.

You only need disable it until you've got that OS installed, and then you can re-enable it. All the major linux distros have supported Secure Boot for years (which I was not aware of, and will now look into setting up!)

You don’t need to disable it at all for distributions that support secure boot.

I don't understand the tech: could it be that an older machine is missing CA roots, causing it to reject legitimately-signed code?

I had to disable SB to boot the installer for either F39 or Mint 22, I forget which, on an old laptop.


So that would be undesirable if true, but how would it be less secure than not having secure boot?

Of course, most/all SB BIOSes enable setting your own platform key.


Because it can lock the door behind itself in an opaque hardware-dependent layer users have no control over.

If I were to design security from the ground up, it would be a small external SD card for firmware and kernel (with a hardware read/write toggle), and optionally an external SD-card adapter that verifies the hash of the content.

Everything else is as dumb as bricks and gets its firmware loaded from the SD card.

We didn't do that because Secure Boot was designed with large orgs and remote administration in mind, by orgs happy to sell yearly advanced cybersecurity protection shield plus certification subscriptions.

Designing for remote administration by an IT department will... increase the attack surface for attackers to remotely administer my device.


> If i were to design security from the ground up

You might be interested in Librem Key, based on free firmware?


FUD: you can install your own keys, enable secure boot and run the OS of your choice.

You got a user friendly easy to follow guide? You can start by telling me what you mean by keys...

Is the implication that anything that is more complicated is necessarily less secure? Because I think that turns security on its head. A deadbolt is more complicated than a door with no lock.

We can argue about whether there is sufficient user demand and benefit to make secure boot easier for lay people. But that is completely orthogonal to whether it increases or decreases security of the system.


"Secure boot" is not actually meant to improve security.

It exists as a moat to make it harder to install Linux on your (Microsoft) PC.


does it also help keep drm keys safe? that's how it works on Android, they even delete the 4k keys if you root your phone.

What!? Last I checked, even with ring 0 the system didn't have access to the Widevine keys. Talk about yet another reason to just pirate everything.

or buy them, but obtain a pirated version to recover what's yours when they lock you out.

And identity: much of the world has now replaced your credit card and government ID with apps that rely on OS assurances to prove you're yourself, with vendor keys, mandatory selfies and such.

What?

I agree the move to UEFI added a huge new attack surface and that most UEFI implementations (notably, even the open source ones) are teeming with horrible bugs.

And yes, then linking the trust architecture for Secure Boot so deeply with UEFI means that UEFI bugs are also Secure Boot bugs.

But to say this is less secure? No way. Traditional BIOS-based MBR backdoors are like 1980s oldschool classic stuff. Most adversaries would require a good degree of development work to backdoor / root kit a PC they were given with Linux, Secure Boot, and an encrypted filesystem. With a BIOS based PC there would literally be nothing to do.


It's more secure than not having Secure Boot.

I think UEFI has many problems. However, you should not confuse separate (but related) issues with each other. If the initial booting functions can be altered by the operating system, that is a different issue (which UEFI perhaps makes more severe). An internal hardware switch to disable this function would be helpful, and possibly a software function that the BIOS disables once the system starts (so it can only be altered in the BIOS settings menu, or by a BASIC or Forth in ROM or something like that). Functions restricted by internal hardware switches would improve security, especially if the initial booting functions were also made less complicated; if you are paranoid, you could also use glitter or whatever to detect hardware tampering.

> An internal hardware switch to disable this function would be helpful

For desktops and mobos, maybe. Gonna be hard to make that work for laptops and phones.

But generally I'm in agreement. By the time I'm booting into and using the system the BIOS is no longer a discussion point; if I need to update it then I need to shut it down and get under the hood.


And significantly more complicated to setup and maintain in my limited experience :|

UEFI itself is way too complex, has way too much surface (I'm surprised this didn't abuse some poorly written SMI handler), and provides too little value to exist. Secure boot then goes on to treat that place as a root of trust, which is security architecture mistake, but works ok in this case. This all could be a lot better.

Hello everyone, I am the developer of BootKitty. I am studying IT in Korea, and I am making this bootkit as a private project in BOB, a security training program. If you find it hard to believe that I am the developer, I can prove it. If you have any questions about BootKitty, please ask me :)

I guess we need to go to back to socketed eeprom chips.

Or just in general machines that are wholly controlled by the owner.


A physical jumper or switch to enable/disable writing to the firmware flash could end a lot of these kinds of problems.

Don't be silly, they can't put a subscription on an eeprom ;)


Neat-o. I wonder where this was discovered and what telemetry is being used to say it isn't used in the wild (guessing commercial anti-virus products).


I think they put the Y in the wrong place; should have called it bootykit!

Not everyone is into pirates you know?

hehe, or humour apparently

What does the discovery of the Bootkitty UEFI bootkit for Linux systems suggest about the evolving landscape of cybersecurity threats?

Nothing. This is just a proof of concept that is ridiculously easy to detect. If your attackers can drop files in your /boot or /boot/efi directory, I think you have much worse things to worry about than this.

In fact, this bootkit would be about the last thing I would worry about, because by the time an attacker can write to /boot, they can also write to /etc/init.d. And the latter is not protected by "secure boot".


> Because by the time an attacker can write to /boot, they can also write to /etc/init.d. And the latter is not protected by "secure boot".

Bootkits exist to make the infection both more difficult to detect and more difficult to remove, so whether /etc/init.d is writable is pretty irrelevant.


How is an infection hidden somewhere in the friggin' entire rootfs easier to detect and remove than one that literally replaces the one file for your kernel in /boot? What advantage could the latter possibly have? Not to mention that a bootkit bootstrapping an infection into the root filesystem is the realm of useless tech demos like this one, while for something that can already write to your rootfs, infecting the kernel is trivial.

The entire boot chain has far fewer places for malware to hide compared to the full OS attack surface available to a rootkit, which is astronomically larger. Secure Boot has always targeted the smaller and most useless of the Swiss-cheese holes.


It means "just trust us" is not and never was secure.

Trustworthy people don't ask you to trust them.


Indeed. For example, none of those CA in the built-in bundle in my browser ever asked me to trust them, that's how I know they are trustworthy.

You were asked by the browser publisher to trust them.

But those are merely defaults which you do possess ultimate control over, unlike the blobs and secrets in various bits of hardware.


No, I wasn't "asked" by the browser publisher to trust them, unless you use the word "ask" in a very broad (almost to the point of meaninglessness) sense: when I installed my browser, it simply started using its pre-packaged bundle of CA certificates. Which it regularly updates, I imagine, although it never asked me what update source I'd like to use, either.

You can say that I implicitly trust the browser vendor's judgement in what CAs to trust, by the virtue of using the browser, and I'd agree with that. But saying that I was asked by the browser publisher to trust them? No, I disagree, I wasn't. It was a silent decision.


Ask as in expect.


