Linux Kernel Lockdown and UEFI Secure Boot (mjg59.dreamwidth.org)
197 points by zdw 11 months ago | 129 comments

Let's be clear here. This is about enabling Lockdown when UEFI Secure Boot is enabled by default.

So the concern is essentially that binary distributions, which are going to be responsible for kernel flags, may enable this, whether or not it is on in the default kernel config. As best as I can tell, that is the crux of Linus' concerns. MJG suggesting that the kernel flag can be turned off, while true, is not a response that considers the de facto usage which may result.

But as Linus said, kernel lockdown has nothing to do with secure boot. Both are separate things. Why tie them together by default? It's like "automatically burn CD" if a CD-R is inserted into the drive. I can burn a CD sometime later. Or I can insert a CD-R and play it, not burn it.

From the top: the goal of the lockdown mode is to prevent someone who has UID=0 ("root") in the kernel from being able to modify or tamper with it (ring0 / CPL0 access). This can be done through /dev/mem, kexec, or even by asking a PCI card or FireWire device to DMA into physical system memory. Basically, there are a lot of hooks to tamper with the kernel, and "lockdown" mode turns them off.
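On kernels carrying the lockdown patches, the active mode is exposed through securityfs in the kernel's usual bracket notation. A small sketch that parses it (the path and format are those used by the later mainline implementation, so treat them as assumptions for the patchset under discussion):

```python
import re
from pathlib import Path

LOCKDOWN_PATH = Path("/sys/kernel/security/lockdown")

def parse_lockdown(contents: str) -> str:
    """Return the active mode from the kernel's bracket notation,
    e.g. 'none [integrity] confidentiality' -> 'integrity'."""
    match = re.search(r"\[(\w+)\]", contents)
    if match is None:
        raise ValueError(f"unrecognized lockdown state: {contents!r}")
    return match.group(1)

def current_lockdown_mode() -> str:
    """Read the live state; 'unknown' if the interface is absent."""
    try:
        return parse_lockdown(LOCKDOWN_PATH.read_text())
    except OSError:
        return "unknown"
```

In "integrity" mode the /dev/mem, kexec, and similar hooks mentioned above are closed off; "confidentiality" additionally blocks interfaces that could read kernel memory.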

Now, this only really makes sense if you are in some sort of trusted untampered environment already: if not, someone with root can stomp a new kernel in /boot/ which gives them a backdoor. So, lockdown without a trusted boot environment is security theater, since you can just take the long road anyway.

Additionally, in a trusted boot environment, the lockdown flag is wanted. If not, "trusted boot" really isn't such: someone who booted through a signed kernel can simply tamper with the kernel anyway after booting up through an init hook. The dangerous part: once they have the ability to tamper with the kernel, they can chainload another operating system and function as a poisonous bootloader. The danger here is that if an attack like this is seen in the wild, and the attacker used, say, Fedora's kernel signing keys to do it, it could get Fedora's keys blacklisted by OEMs. Linux would do best not to be used by an attacker as a "malware bootloader".

That means it's reasonable for, by default, lockdown to be informed by the presence of a trusted boot environment.

Presumably in that case you'd also want the kernel to fail loudly when it isn't booting securely. Say you enable lockdown but somehow manage to misconfigure secure boot. You'd much rather have the kernel go "oh whoops, I've got lockdown enabled but I wasn't booted securely" than boot insecurely without notice.

And just like that it stops being a magically enabled thing and becomes something you have to either have fully on or fully off.

One would hope your system would complain loudly in the BIOS if Secure Boot wasn't on regardless of the operating system. If some malware manages to misconfigure Secure Boot, it can also patch your kernel to remove the complaining.

The kernel isn't able to determine whether you booted in a verifiable manner in all cases, so there's functionality that allows the bootloader to make that decision rather than the kernel. But if you don't have a verified boot environment and someone is deliberately attacking you, it's easy for them to fake up your early boot environment such that the kernel believes it's booted securely even if it hasn't been.
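The "can't determine in all cases" part is visible from userspace too: the firmware exposes a SecureBoot variable via efivarfs, but its absence proves nothing, and on an unverified boot anything early enough can lie about it. A sketch of reading it (GUID and byte layout per the UEFI spec and efivarfs, offered as an illustration):

```python
from pathlib import Path
from typing import Optional

# EFI global-variable GUID from the UEFI spec; efivarfs prepends a
# 4-byte attribute field to every variable's payload.
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def parse_secureboot(raw: bytes) -> bool:
    """Interpret an efivarfs SecureBoot blob: 4 attribute bytes,
    then a single 0/1 payload byte."""
    if len(raw) < 5:
        raise ValueError("truncated efivarfs variable")
    return raw[4] == 1

def secure_boot_enabled() -> Optional[bool]:
    """None means 'cannot tell' (non-EFI boot, efivarfs not mounted):
    exactly the ambiguity described above."""
    try:
        return parse_secureboot(SECUREBOOT_VAR.read_bytes())
    except OSError:
        return None
```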

If you want to protect against accidents then you can do that in userspace. There's no need for the kernel to be involved.

Wait, all these protections are useless because someone with root can defeat them? Huh?

These protections should vastly reduce the risks of someone gaining root and make it much harder to do so to a running system with physical access. Or?

Seems to me that it is just as much security theater to pretend that it isn't game over if someone already has root.

edit: and perhaps the (by far?) biggest benefit of lockdown is to make it harder for someone to gain root?

There's a kernel-imposed boundary between being a user and being root. Right now there's also a partial barrier between being root and being the kernel that's fairly easily circumvented. Now that we can protect the on-disk copy of the kernel from root, there's also interest in protecting the in-memory copy of the kernel from root.

Yes. And lockdown helps with (right?):

a) locking root out of kernel

b) locking users from becoming root

I interpret the reasoning as: if we can't do both a and b, we won't bother with b. Which is confusing, because b is many orders of magnitude more important than a.

b) is a barrier that's already supposed to exist, so these patches don't have an impact on it.

Supposedly yes, but shouldn't it vastly reduce the attack surface by not having to worry (compared to now) about all the device drivers that use DMA (which is otherwise just impossible to keep track of)?

In my opinion reducing that attack surface alone on b) has a much higher impact than a) could ever achieve.

The stuff lockdown impacts is stuff that only root can do right now, so in theory it's only handling channels that are irrelevant to b). Users should already be prevented from triggering arbitrary DMA, for instance.

Well, "should". Again, device drivers are notoriously bad, and some protection against that is extremely valuable.

a) prevents a temporary breach of b) from turning into a persistent attack. b) is still very important, so important that when b) breaks it is considered a CVE!

So... you know, it occurs to me that none of this would be a problem if people understood two things:

A) Computers are tools designed to do whatever the user tells them to do.

B) The original architecture of the Internet assumed a network connecting implicitly trusted users

Stop trying to mess up a perfectly good tool by making it nigh impossible to work with, and redesign a networking/computing infrastructure that doesn't implicitly trust all users as a fundamental assumption.

This type of thing is getting completely out of hand.

If you want security SO badly you are willing to sacrifice billions of potential users freedoms to use their hardware as they see fit, coming up with a second "SECTERNET" architecture should be a price you're willing to pay right?

This pattern of trying to lock a user out of any tier of control of their own machine reeks of the continuing attempts by companies to deprive users of the privilege of ownership.

I realize it may be difficult to see for some, but this is a literal case of the road to Hell being paved with good intentions.

People don't need to be "protected" by building their volition out of the system.

"root user" is a term that we choose to mean what we want it to. Some people in the Linux community want to restrict the root user / cap_sys_admin so that even them cannot tamper with a secure kernel or run ring0 code under certain configurations. I think this is a great idea.

The dangerous part: once they have the ability to tamper with the kernel, they can chainload another operating system and function as a poisonous bootloader.

Here's the bit I don't get: even if you can't be used as a chainloading malware bootloader, can't you still be used as a slightly more complicated malware bootloader that fires up the victim OS under virtualisation?

under secureboot, you would need to get that boot loader signed

But you are signed - you're a vanilla distro kernel booting a custom userspace that fires up the victim OS under virtualisation.

You're suggesting a construction like Blue Pill[0], right, where you have a "malware hypervisor"? The concept is known, security people are aware, it's considered moderately infeasible, and multiple techniques can prevent it[1].

[0] https://en.wikipedia.org/wiki/Blue_Pill_(software) [1] http://northsecuritylabs.blogspot.com/2008/06/catching-blue-...

That mitigation technique relies on the anti-malware executable starting before the malware does.

This is what @mjg59 should be saying.

This is literally what I have been saying

Apologies, that wasn't relayed to me in what I've read so far. Good luck.

> the goal of the lockdown mode is to prevent someone who has UID=0 ("root") in the kernel from being able to modify or tamper with it (ring0 / CPL0 access)

Exactly, and that's why lockdown and secure boot must be discarded. These are attempts to take control away from the user and give it to the OS maker or device manufacturer. This is exactly why I cannot root my android without losing hardware features and this is exactly why ios devices are not general purpose computers.

Fuck these anti-user efforts. They have no place in Linux.

All Linux distributions that support UEFI secure boot allow you to disable any restrictions if you have physical access to the machine. If you have a device vendor who's implemented something more restrictive, blame the device vendor rather than the software.

or blame the FLOSS people working so hard to enable them.

This is only anti-user from one point of view (where jailbreaking is beneficial). It's completely pro-user, where the user wants to ensure that the environment is what they expect it to be. It really depends on who the user is and what they want to achieve.

Otherwise you could describe something like process memory protection the same way: not being able to read/write directly to a root process is taking the control away from the user after all.

> this is exactly why ios devices are not general purpose computers.

What general purpose computation requires root?

I second this. I use linux because i want root. Linux needs users to be kernel devs. If hardware vendors want to lock down kernels let them patch in the lockdown.

It's something that users are guaranteed the ability to disable for all distro kernels.

How so? Adding such an ability removes the security right? If someone can set kernel flags, they could just as easily set the lockdown flag. If the package maintainers provide a different kernel image, then you still have the next boot issue. The fact that the shim exists erases any real protection against this.

Therefore the only real protection comes when you put your own keys in. In which case, once again, enable lock down in the kernel config. I imagine that is exactly something enterprise vendors would be looking at.

In the process of writing this, I have come to the conclusion that consumer secure boot is meaningless while the shim exists. The Microsoft approval process is, as I believe you said many years ago, fairly simple anyway. A company telephone number and a fee, if we're looking at other serious malicious actors.

I acknowledge I may be misunderstanding this, and that you've already been over most/all of these issues before. I don't mind if you feel a response is not necessary.

> How so?

Run sudo mokutil --disable-validation

> Adding such an ability removes the security right?

No, because the mechanism used for this requires proof that you have access to the console - a remote attacker can't do it. A local attacker could, but there's support for passwording it to prevent this (it's equivalent to passwording the firmware setup)

> I acknowledge I may be misunderstanding this

You are, but there's a huge amount of misinformation about this stuff out there that makes it hard to find the actual information, so it's not surprising :)

I foresee this as a step in many online guides on "how do I [x] on linux"

Edit: Having thought about this some more, I think perhaps we can model this workaround after SELinux. Sure, there were trivial cases (my wine game mmaps 0x0 and now doesn't work, after wunderbar emporium), but more often it was something like "My apache can't read my directory" "Just run setenforce 0 and try again" which then, rather than being followed up with audit2allow, usually results in the search "How to disable SELinux permanently".

The fundamental problem is that, and you've covered this ground plenty of times, Secure boot is not secure by default. We're not really affording much more security for the most vulnerable systems which is run by small time system administrators rather than large tech corporates.

Sure it would be great if we could educate people to protect their chain of trust (and arguably the best solution), but instead what we have is something which hampers usability, and opens the door to people who plan to come around and barricade the door after you.

So by tying in this security-by-default measure, we could arguably afford an even greater illusion of protection, which runs counter to the intent of extending the protection of secure boot. Anyone willing enough to protect their chain of trust explicitly is also going to be able to research the issue.

The flip side is that SELinux was adopted anyway. If there aren't actually that many server use cases which require this work around, then: Protection, awesome. But unlike SELinux policy, Lockdown policy is baked into the kernel (right?). If there are any issues, there is only one solution: disable Kernel Lockdown, which might be achieved by disabling secure boot validation.

The reason we're seeing this blog post is quite clearly due to this statement from Linus:

'This discussion is over until you give an actual honest-to-goodness reason for why you tied the two features together. No more "Why not?" crap.'

Notice LKML is from the 3rd of April, and MJGs blog post from the 4th of April.

Edit: correct initials

Eh I'm not going to convince Linus by writing a blogpost. This is just to make Reddit stop complaining.

Then I guess I'm completely wrong and don't deserve any points XD

Somewhat annoyingly this puts me in a quandary: remove the incorrect top-level post, and with it the history in which the error was highlighted.

>until you give an actual honest-to-goodness reason for why you tied the two features together

What's the point of trying to explain this outside of LKML?

Convince third parties? He needs to convince Linus in LKML, rather than this sad attempt at brigading.

I wonder how long until the kernel is checking signatures on code pages as they're mapped into memory or modified (iOS style). Obviously no one wants the vendor gated platform lockdown aspects of any of this, but user-defined key whitelists for both kernel and userspace binaries would be really cool. I think I've heard Linus is not in favor of extending this notion to userspace binaries, or at least that he doesn't believe it should be the kernel's responsibility (for Linux anyway), though.

As well as the IMA Appraisal feature that mjg59 mentioned, you can also integrity check your entire filesystem with verity[1]. Android uses it to make sure the system partition isn't tampered with.

[1] https://www.kernel.org/doc/Documentation/device-mapper/verit...
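dm-verity is, at heart, a Merkle tree over a read-only device: each block is hashed, the hashes are hashed pairwise, and only the single root needs to be trusted (signed, or passed on the kernel command line). A toy sketch of the idea in Python (the concept only, not the on-disk format):

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity's default data-block size

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block, zero-padding the last one."""
    hashes = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\0")
        hashes.append(hashlib.sha256(block).digest())
    return hashes

def root_hash(data: bytes) -> bytes:
    """Fold the per-block hashes into a single Merkle root; only this
    root has to be protected for the whole device to be verifiable."""
    level = block_hashes(data)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the odd tail node
            level.append(level[-1])
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]
```

Changing any byte of any block changes the root, which is why Android can detect system-partition tampering at boot.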

Because God forbid you tamper with the system partition.

The security features of android are the best argument for the stance Linus takes on these things.

The kernel already has support for verifying signatures on binaries - check out IMA appraisal.

Apropos: https://lwn.net/Articles/488906/
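For reference, IMA appraisal is driven by a policy loaded through securityfs. A fragment along these lines (rule syntax from the kernel's ima_policy ABI documentation; shown as an illustration, not a recommended policy) makes execution require a valid signature in the security.ima xattr:

```
# appraise every executed binary and every executable mmap,
# requiring a valid IMA signature
appraise func=BPRM_CHECK appraise_type=imasig
appraise func=MMAP_CHECK appraise_type=imasig
```

A custom policy is typically loaded by writing it to /sys/kernel/security/ima/policy.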

Hmm, does this only apply to files though? I'm talking about kernel signature enforcement on any page mapped into memory.

No, there's a presumption that any signed app isn't going to generate untrusted code

Thinking about it more that seems fair enough provided as a user I can operate the system in such a way that I'm only running trusted code.

Read the whole thread on lkml and didn't see a good argument as to why lockdown being enabled or not should depend on the way the machine was booted. Linus seemed to suggest having lockdown on by default would be reasonable, independent of boot method.

Indeed, having lockdown on by default would be reasonable.

There is an argument that "it's useless" [1] without SecureBoot but so what? Separation of concerns is a good principle and allows external people to reason about the behaviour much more easily.

There is no point implementing logic like "if x == 0 then y = 0; return x * y" when a simple "return x * y" will suffice. In fact it's actively harmful as it raises the complexity.

[1] http://lkml.iu.edu/hypermail/linux/kernel/1804.0/01621.html
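The point can be made concrete: the special case is already subsumed by the general expression, so the extra branch buys nothing but complexity. A trivial illustration:

```python
def coupled(x, y):
    # the "if x == 0 then y = 0" coupling being objected to
    if x == 0:
        y = 0
    return x * y

def simple(x, y):
    return x * y

# The branch is dead weight: anything multiplied by zero is zero,
# so both functions agree on every input.
assert all(coupled(x, y) == simple(x, y)
           for x in range(-5, 6) for y in range(-5, 6))
```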

There's an argument that it's harmful without a verified boot chain - it doesn't add that much security in that case, but it does limit functionality. If you have enough underlying infrastructure to mean that it does add a meaningful amount of security, then the loss of functionality is a worthwhile tradeoff, but otherwise it's likely to cause justifiable user anger.

The counter argument to that is if it causes user anger, then maybe it should not be enabled at all.

Also, I am not sure about it adding no security without verified boot - a machine rebooting is something that can be noticed.

An attacker doesn't have to be in a rush, they can wait for the next time you reboot (by, say, forcing a notification telling you that you need to reboot for security updates)

this is the argument that doesn't appear to be getting much traction with the lkml. it was an interesting exchange to read as both sides appear to be speaking past each other repeating the same points. a case of agree to disagree perhaps.

there is really no such thing as a verified boot chain on x86 anymore. Microsoft leaked keys that essentially allow any binary to be booted on UEFI systems that use Microsoft keys. I'm not sure what the point of all this fuss is.

They didn't. No keys were leaked. I wrote about this at the time (https://mjg59.dreamwidth.org/44223.html) but the short version is that the leaked tooling needed to carry out that attack was specific to ARM, required someone with physical access to the console to confirm the installation, and was blacklisted anyway.

You can, at least on some machines, remove the existing keys and roll your own chain of trust. If you care about secure boot environment, you should probably start with that anyway.

See Linus' concerns about the patchset: http://lkml.iu.edu/hypermail/linux/kernel/1804.0/01597.html

And the /r/linux discussion from earlier: https://www.reddit.com/r/linux/comments/89mtyt/linus_torvald...

Ok, this might sound pesky, but what's Linus referring to when he says "my kernel"? Is he referring to his personal kernel or the Linux kernel in general?

Pretty sure he means "the kernel I have running on my computer" (i.e. as a user of the kernel), not "the kernel I work on" (i.e. as a developer of the kernel).

Yup. It is basically a user story.


It seems to me like the former, but with a kind of generalized "me" to indicate "all of the people who sort of think like me, with me as an example". Why do you think it could be the latter?

I don’t know. Sometimes I’ve difficulty with comprehension. I don’t know how to explain. I thought it was better to ask than to assume. Makes sense?

Maybe not a native speaker or maybe someone who has difficulty comprehending written words.

LWN article discussing the same topic[1]


I remember there was resistance from Torvalds to merging a similar FreeBSD-style securelevel patchset a while ago (to prevent runtime kernel module loading and such).

Will this patchset get merged?

> I'm done with you. You're not listening, and you're repeating bogus arguments that make no sense.

> No way in hell will I merge anything like this.

> Linus [0]

I highly doubt it.

[0] http://lkml.iu.edu/hypermail/linux/kernel/1804.0/01607.html

Don't worry, the mean time between Linus saying something like that and then merging anyway is about ten minutes.

The "no way in hell" part clearly refers only to automatically enabling lockdown mode if the kernel was booted using secure boot.

Lockdown mode itself is very likely to be merged.

It becomes clearer if you read http://lkml.iu.edu/hypermail/linux/kernel/1804.0/01597.html which was linked elsewhere in the comments here.

I read a fair way down the thread, but Garett and Linus continue to butt heads.

Michael Garett seems to think lockdown without Secure Boot makes no sense. The kernel team seem to disagree.

I didn't get to a point where any resolution was met, but Linus did complain about over 100 emails wasted before a legitimate reason they should be linked came out.

> Michael Garett seems to think lockdown without Secure Boot makes no sense. The kernel team seem to disagree.

That's not my name, but anyway. I don't think it makes no sense. I think asserting that it adds meaningful security in the absence of some other mechanism for gaining trust in the kernel is misleading, and I think the reduction in functionality associated with it isn't worth it unless you have that security benefit. As a result, I think having the default behaviour in a general purpose distribution be "Enable lockdown even if we didn't have any indication that we booted securely" doesn't make sense - more specialised projects may have a reason to do so.

> I think asserting that it adds meaningful security in the absence of some other mechanism for gaining trust in the kernel is misleading

Do you think it's possible to do security at all without being educated? You seem to be advocating "doing the right thing" for the user, but if the user doesn't know how to do the right thing himself then any hope of security is out of the window anyway.

A perfectly informed user will never run an application that will modify their SSH configuration, so by that argument there's no need to prevent arbitrary user access to that file. We do our best to protect users without getting in their way because it's impossible to expect everybody to do everything right all the time.

Nitpick - a perfectly informed user can certainly run an application (such as a browser) that gets exploited by some external input to attempt to modify their SSH configuration.

A perfectly informed user would never have navigated to a site that was running malicious code. That's basically the point - the threat model today is sufficiently complicated that there's no such thing as a perfectly informed user, so we have to design systems to be resilient against users making mistakes.

Well… it's certainly much more practical to decide "I'll only run applications I trust" than "I'll only ever visit websites I trust" or, in full generality, "I'll only ever use any application to process data whose creator I trust".

But I guess that's not really the point you're making, and of course, in an ideal world native apps would be no more dangerous than websites anyway…

> That's not my name, but anyway.

Apologies, my words get a bit scrambled now and then.

The actual quote:

> "Just look at this thread. It took closer to a hundred emails (ok, so I'm exaggerating, but not _that_ much) until the real reason for the tie-in was actually exposed."


Casually following the discussion on lkml has me under the impression that something will probably be merged in this vein - they're just bickering over important details and having communication problems.

Edit: *Eventually

Considering Linus has been arguing in favor of the value of lockdown even without SecureBoot, he's obviously on board with the patchset conceptually. The problem seems to be how SecureBoot and lockdown are related in the current implementation, when he's arguing they're entirely independent of one another. Things just fall apart in the communication.

Linus seems perfectly fine with the implementation.

Except for it being tied to UEFI boot.

I'm a little confused about something: in a locked-down kernel, what would prevent the user from just replacing the kernel image and rebooting? Would the kernel lock down the image?

EDIT: Sorry for the confusion, I mean lockdown without Secure Boot -- I thought Linus was saying they were independent things, so I was trying to see how lockdown could work independently of Secure Boot.

There's a few options:

1) Your kernel is in flash and your firmware won't update it unless it's signed. In that case the fact that there's no secure boot as such is irrelevant, because you can assert that the hardware is providing a root of trust anyway

2) Your firmware measures the kernel into something like a TPM and the TPM won't release a secret that's required to boot unless that measurement is correct. https://mjg59.dreamwidth.org/35742.html describes a couple of shortcomings in that approach and how you can get around them - if you do that appropriately, there's no risk

3) Your system has some other kind of verified boot implementation (such as ChromeOS)

But otherwise yes, that's the point - defaulting lockdown to on while it's still easy for an attacker to replace the kernel may not be a great default, because it removes functionality without offering much additional security.
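Option 2 relies on the TPM's extend-only registers: firmware folds a hash of each boot stage into a PCR, and the TPM will only unseal a secret if the final value matches. A minimal sketch of the hash chain (SHA-256, zero-initialised register, following the TPM 2.0 convention; the event hashing here is a simplification):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new = H(old || H(event)). The register can only
    be folded forward, never set directly, so the final value commits
    to the entire ordered boot sequence."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

def measure_boot(stages: list) -> bytes:
    pcr = bytes(32)  # PCRs reset to all zeros at power-on
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr
```

Any tampered stage, or even the same stages in a different order, produces a different final PCR, so the sealed secret stays locked.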

Andy suggests one mode of operation: a truly read-only boot filesystem that the kernel itself can't modify. And there is general agreement from all parties to add a kernel command line flag to support that use case.

Otherwise, on a more general-purpose computer with a writable /boot/, some verified boot like secure boot is required to ensure that your chain of trust is established.

If secure boot is enabled, the new image would need to be signed in order to be booted. If the user can sign an image with a trusted key, then sure, but it's assumed that an attacker does not have secure boot keys.

Secure Boot not allowing to boot your unsigned replacement image?

The console manufacturers have a major financial stake in preventing ‘jailbreaks’ and they all have failed. There is no such stake on the PC platform and it has a legacy spanning decades.

It’s really far-fetched to believe it’s anywhere near possible to secure it.

Vulns happen. It doesn't mean we should just go back to sending private info in plaintext on the internet. I don't see why user-empowered secure boot is fruitless... especially when it's to protect the user, not the vendor.

Secure boot can only work if the keys are held by an untouchable entity. It’s not user-empowered. But that has nothing to do with the fact that the PC platform has an enormous legacy of openness and cannot be made secure, even if you sacrifice the parts that allowed it to dominate.

That's just wrong. Why do the keys have to be held by an untouchable entity? They just have to be unable to be changed outside of having physical access to the device. Stop spreading misinformation based on a shallow understanding of the technology.

I don't see how that would work on any kind of scale, but indeed you can use alternative keys if you really want to and know what you're doing.

>Overall: the patchset isn't controversial, just the way it's integrated with UEFI secure boot. The reason it's integrated with UEFI secure boot is because that's the policy most distributions want

A security feature that can be circumvented in many non-secure-boot cases is still a footgun protector. And there are already plenty of those. I'm not surprised the LKML thread pushed back as hard as it did. If that's his reason for digging in his heels, then suspicion that there could be more at play doesn't seem all that unwarranted.

Kernel and hardware lockdown could be an extreme but effective solution against cheaters in online gaming. Too controversial to be put into use, though.

That would still depend on a remote attestation solution that provided complete understanding of the state of the system during runtime, which isn't really close to possible with existing technology.

The coupling of a cryptoprocessor[1] to prevent network-packet and circuitry tampering with a locked kernel could stop people from using cheats. The CPU would only execute a signed kernel[2], while the kernel would only load signed modules. Today, each gaming studio/publisher is working on its own anticheat solution in its corner. Here, the anticheat would be the platform itself: you would only have to focus on keeping the kernel secure to make all games cheat-free (in theory). Video game consoles are going this route with varying degrees of success[3]

[1] https://en.wikipedia.org/wiki/Secure_cryptoprocessor

[2] https://en.wikipedia.org/wiki/Code_signing

[3] https://en.wikipedia.org/wiki/PlayStation_Portable_homebrew
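The anticheat idea above is essentially a remote-attestation protocol: the server challenges the client with a fresh nonce, and trusted hardware signs the nonce together with the measured boot state. A toy sketch (an HMAC with a hypothetical shared key stands in for the TPM's asymmetric attestation key, so this is the shape of the protocol, not a real implementation):

```python
import hashlib, hmac

# Hypothetical demo key; a real TPM would use an asymmetric
# attestation key whose public half the verifier trusts.
ATTESTATION_KEY = b"shared-demo-key"
EXPECTED_PCR = hashlib.sha256(b"signed-kernel-v1").digest()

def make_quote(nonce: bytes, pcr: bytes) -> bytes:
    """Device side: sign (nonce || measured boot state)."""
    return hmac.new(ATTESTATION_KEY, nonce + pcr, hashlib.sha256).digest()

def verify_quote(nonce: bytes, pcr: bytes, quote: bytes) -> bool:
    """Server side: the fresh nonce defeats replay; the PCR match
    proves the boot state is the one the game server expects."""
    if pcr != EXPECTED_PCR:
        return False
    return hmac.compare_digest(make_quote(nonce, pcr), quote)
```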



Well, imagine Blizzard and Apple made a deal where they worked together so that Blizzard games would only connect to verified servers if the hardware platform can cryptographically attest that the currently running kernel has been signed by Apple (doing so would prove it hasn't been modified). Blizzard gets to distribute encrypted binaries and assets to users that only their registered Apple hardware running secure-booted software can decrypt.

Now replace Blizzard with any iOS application and you have the iOS AppStore ecosystem. We're already there. And I'm also pretty sure Blizzard and Apple have some dark back-room deal like this, although it's more about Apple allowing Blizzard's shit to do whatever it wants on its platform and less about methods of attestation, considering everything is already in place for that.

Well, right now they couldn't do that because the Apple hardware platform doesn't allow it, and because the PC platform doesn't either, any attempt on Apple's behalf to do so would result in gamers abandoning Apple?

What do you mean? When you execute code on iOS you are executing signed binaries that the kernel verifies at install time and then user specific encrypted pages that the kernel decrypts/verifies as they're mapped into memory at execution time. I feel like we are not on the same (metaphoric) page.

What I'm saying is this already exists today. iPhone hardware can attest to the integrity of the running system because Apple practices secure boot and maintains a PKI (supported by the kernel and TPM) responsible for verifying the integrity of software running on its system. Apple can say to a Blizzard "game" with certainty (barring security vulns), "there is no unauthorized software running on this system".

And as far as Apple is concerned wrt their platform, Blizzard is just another app developer in their ecosystem. They don't have to pay a special "attestation premium".

> Apple can say to a Blizzard "game" with certainty (...), "there is no unauthorized software running on this system".

What does that assertion even mean unless you completely lock down app signing keys though? Sure - this system doesn't run any unauthorised software, only "foobar test" signed by a valid developer key; everything is fine.

It works for consoles, because you won't have the trusted signing keys.

And so can Nintendo on a Switch, but Apple can't do that on a Mac. This is clearly a theoretically possible scenario, the question is whether it's practical on something that's sold as a general purpose computing platform (which iOS devices aren't)

Sorry I'm not trying to split hairs. I don't see why it wouldn't be practical on a macOS device (agreed that on a Mac today this is not possible based on the state of publicly available macOS software). Apple has been slowly moving that direction and IMO the only reason they haven't dialed it up to 11 is because they don't want to break everything including users' workflows. But they could release a state of the art macOS laptop ~tomorrow that squeaks like iOS from a security angle.

Anyway you're right, that's beside the point now. My original point (rephrased to Linux) was that a desktop leveraging secure boot, plus the recent work to harden the boundary between kernel and root, plus something like (as you pointed out in the other thread) IMA could, I think, meet the original commenter's requirements.

We have that. They're called consoles.

Why link his personal website instead of LKML, where we could actually read the replies from Linus and other kernel devs?

Can Linux just ignore the entire cancer that secure boot represents? There is literally no upside for the user.

Secure Boot is cool when you control the keys. That's quite a lot of work though. Given how easy it is to bypass for consumer systems, yes the defaults suck. However, it makes a lot more sense in enterprise, where someone might actually be employed to manage the chain of trust.

Other than ensuring that you're booting what you think you're actually booting

I don't know if this is a general problem, but I have never had this issue. My system always boots whatever I installed.

How are you sure you don't have a so-called "boot kit" on your machine?

Because there are only 2 ways of that happening:

1. I run malware as root which modifies my image. I don't run anything as root except apt. So this is impossible.

2. Someone steals my computer and modifies the disk. Again, if they have physical access, I have already failed in my security. So this scenario is irrelevant.

Am I missing anything?

1. local-root exploits exist and are quite common. It is possible for an unknown kernel bug to get root access. Remote code execution exploits are also common in browsers and also just all across the Linux desktop in general.

2. apt packages can run arbitrary bash scripts as root as part of their install and update process. Hopefully your third-party apt repo hygiene is good.

3. Just because you only explicitly run apt as root, does not mean no program runs as root. setuid executables exist and are unfortunately quite common in Debian.

> local-root exploits exist and are quite common. It is possible for an unknown kernel bug to get root access. Remote code execution exploits are also common in browsers and also just all across the Linux desktop in general.

I highly doubt it. Privilege escalation exploits are rare and fixed with utmost priority.

> apt packages can run arbitrary bash scripts as root as part of their install and update process. Hopefully your third-party apt repo hygiene is good.

Yeah, a root user should be responsible.

> Just because you only explicitly run apt as root, does not mean no program runs as root. setuid executables exist and are unfortunately quite common in Debian.

And I assume that when the root user installed those apps, he was careful and selective and did not include malware.

> Privilege escalation exploits are rare and fixed with utmost priority.

A statement which would be much improved with numbers. How rare? Fixed how quickly?

You run malware as a regular user, it subverts some other component of your distribution (such as beep on Debian) to gain root, and then it does exactly that?

Gaining root without credentials is a bug of the highest order and must be fixed immediately. Existence of rare exploits that do allow privilege escalation does not mean we take capabilities away from root.

It actually does - why do you think /dev/mem is more restricted than it was in 1993?

Seccomp, SELinux, grsec, namespaces, blocking module loading, and many other mechanisms exist precisely to take capabilities away (or can be used that way) from full user access, root or otherwise, to limit what a not-yet-discovered exploit can do.

You're missing the entire field of remote exploits. Meltdown, for example, could be exploited by JavaScript code running as non-root.

It's not about what you explicitly choose to run as root at the command line. That's a tiny fraction of the code that executes as root on your machine. It's about what software can do to somehow gain access to root's privilege level via exploits.

> You're missing the entire field of remote exploits. Meltdown, for example, could be exploited by JavaScript code running as non-root.

Yeah and that was a really big deal. Fixes were pushed in record time.

> It's about what software can do to somehow gain access to root's privilege level via exploits.

The correct way to deal with that is to plug exploits or write code that is less likely to be exploited. Taking away privileges from root is the exact opposite of what we should be doing.

Root access should be protected, and any bugs that break this barrier without the user's credentials should be fixed/prevented.

The root user should have absolute power to do anything. This is the basic ethos of Linux.

Taking away power from the user results in iOS. I don't want Linux distros/systems to start behaving like iOS.

>Root access should be protected, and any bugs that break this barrier without the user's credentials should be fixed/prevented.

>The root user should have absolute power to do anything. This is the basic ethos of Linux.

They're just splitting 'root' into two parts. No power has been taken away.

> 1. I run a malware as root which modifies my image. I don't run anything as root except apt. So this is impossible.

1. What if apt is compromised?

2. What if apt installs something which is compromised?

3. What if you run something which is compromised and escalates privilege?

> 1. What if apt is compromised?

Then fix it.

> 2. What if apt installs something which is compromised?

This implies apt is compromised. Goto above.

> 3. What if you run something which is compromised and escalates privilege?

If something is able to escalate privilege that's a kernel bug of the highest order and must be fixed asap.

I'm guessing defense in depth is a foreign concept to you?

There are other ways to do that, e.g. by encrypting the disk and booting from usb key or sftp/https network sources.

But you don’t know whether the OS you’re trying to boot, and enter your decryption passwords into, has been tampered with or not.

Not without Secure Boot.

Really... why would I as user not want it?

Because almost no vendor wants to give you the keys to your system voluntarily. If secure boot puts the users in charge, I'm all for it. But there are forces at work to subvert the system to achieve a better vendor lock-in. And providing them with better means to that end is unethical.

I've never been on a system where I couldn't swap the keys in UEFI. Does mokutil not work on your system?

You know that this feature only works because Microsoft added it to the requirements of the Windows Logo program for x86-based computers almost at the last minute. Mainboard manufacturers would otherwise have cut corners and shipped the MS signing key as the only, unchangeable key in the system.

At the same time, MS specified for ARM computers that want to run Windows that the key must be fixed and irreplaceable.

Go into the BIOS and set your own keys.

I can do that now. But who guarantees that my next mainboard still has that feature?

Shim allows you to branch to an arbitrary additional root of trust

I wish that we would at least have the option to buy a new computer without it.

You can't just go out and buy one, but there are millions of AMD Ryzen chips with a known flaw in their BIOS and PSP signature-validation routines just ripe for a PSP-Cleaner type application. Go forth and be fruitful.


