The problem crops up because Red Hat submitted a pull request to enhance the existing in-kernel support for loading additional trusted X.509 certificates. Note that this support is 100% upstream and live. The pull request added the ability to extract these X.509 certificates from UEFI PE binaries, as that is the only format they are available in from the only CA that UEFI Secure Boot machines are guaranteed to trust: Microsoft's.
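For context on what the disputed code actually does: an Authenticode signature (which carries the X.509 certificates) lives in a PE file's attribute certificate table, reachable via data directory entry 4. A minimal sketch of locating it, assuming a well-formed image and glossing over alignment and multi-entry tables (the function name and error handling are mine, not the kernel patch's):

```python
import struct

WIN_CERT_TYPE_PKCS_SIGNED_DATA = 0x0002
SECURITY_DIRECTORY_INDEX = 4  # IMAGE_DIRECTORY_ENTRY_SECURITY

def extract_authenticode_blob(data: bytes) -> bytes:
    """Return the raw PKCS#7 SignedData blob embedded in a PE image.

    Unlike the other data directories, the security directory's first
    field is a plain file offset, not an RVA.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ magic)")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("not a PE file (missing PE signature)")
    opt = e_lfanew + 24                          # optional header follows the COFF header
    (magic,) = struct.unpack_from("<H", data, opt)
    dd = opt + (96 if magic == 0x10B else 112)   # data directories (PE32 vs PE32+)
    off, size = struct.unpack_from("<II", data, dd + 8 * SECURITY_DIRECTORY_INDEX)
    if size == 0:
        raise ValueError("PE image carries no certificate table")
    # WIN_CERTIFICATE header: dwLength, wRevision, wCertificateType
    length, _revision, cert_type = struct.unpack_from("<IHH", data, off)
    if cert_type != WIN_CERT_TYPE_PKCS_SIGNED_DATA:
        raise ValueError("unexpected WIN_CERTIFICATE type")
    return data[off + 8 : off + length]          # PKCS#7 DER; the certs sit inside
```

A real key-import path would then parse the returned PKCS#7 structure to pull out the individual X.509 certificates - that ASN.1 step is omitted here.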
Linus decided he didn't like it because he didn't like the idea of extracting a certificate from a container format rather than receiving it standalone. Understandable, perhaps, except that it leaves you in a situation where a secure kernel that was booted via a Microsoft CA chain of trust can't use that same CA's code-signing services to decide whether it wants to load a module, solely because Linus doesn't approve of parsing the file format that contains the key.
The biggest place this comes up is binary-only graphics drivers from ATI and Nvidia - without changes, OSes like Fedora are going to refuse to run them because they're unsigned, which is unfortunate considering how uneven some of the open source 3D drivers are and how heavily all modern desktop environments rely on 3D.
Meanwhile, Microsoft is perfectly willing to sign these drivers and already runs a substantial CA operation. Both ATI and Nvidia submit their Windows drivers to this CA for signing all the time, so it would be almost no extra effort for them to get their Linux shims signed as well.
But because Linus thinks parsing a PE for a signed module key is asinine, he goes on to offer a series of rather off-the-cuff alternatives:
a) Every distro should parse the PEs, add every key of every third-party module they wish to allow to run, and embed these in their signed kernels, issuing a new kernel every time a driver revs.
b) OK, maybe that's not ideal. How about every distro that wants to allow binary drivers builds its own CA infrastructure, verification and qualification team, and revocation infrastructure? So that's a team at Canonical, Red Hat, Novell, Mint, Oracle, IBM, etc., etc.
c) OK, maybe full CA teams are a bit burdensome. How about all you distro guys just blind-sign the binary drivers with your own signing key - worrying that MS might revoke your key because you blind-signed an exploit is pointless fearmongering.
d) OK, you're right, this is harder than I thought. Let's just collectively decide that users with Secure Boot enabled will be prevented from running any module not shipped by the OS vendor. A.k.a. fuck off unless you're using Intel video.
e) Alright, maybe that's a little severe. Instead let's just punt entirely - even though we've gone to the trouble of building a chain of trust from firmware to boot loader to kernel to most modules, let's allow any unsigned binary module to be loaded by default.
f) OK, I guess that kind of defeats the purpose. None of this is good security anyway - what we should really require is that any user who wants Secure Boot generates their own signing keys, adds them to the firmware, and then parses and signs everything they trust, repeating the process on every update while, of course, protecting the signing key from attackers.
I think that about covers it. Linus is really smart, but sometimes he makes a snap decision and then performs whatever mental gymnastics are necessary to defend it to the death. And most of his inner circle will publicly go along with it because of the real chance that he'll pay them back by randomly torpedoing something of theirs sometime in the future.
Linux's signed-code infrastructure is currently the worst in the industry, and Matthew and Red Hat have provided the bulk of the improvements that everyone is using. It's going to provide real user benefit, even if some users are paralyzed by FUD. Getting in the way of the process, or trying to punt it out of mainline and onto everyone who ships a distro, isn't going to help anyone.
Hmm, could it be argued that this is less a snap judgement and more a defense of the long-term political and technical health of the OS? Rather than take the easier short-term path, which may eventually put Linux's metaphorical balls in a Microsoft vice, he would rather absorb some pain now to preserve as much long-term autonomy as possible. I would imagine this to be true given Torvalds's historic ability to keep Linux healthy and viable in spite of one of the world's most powerful corporations.
Thus, in the short term, the user must perform some gymnastics to boot new kernels, but if this inconvenience is really that painful, it will create a market disequilibrium that motivates creative solutions. Some naive, off-the-cuff ideas:
- vendors pre-installing trusted root certs for Linux distros or a consortium of them,
- vendors making it easy to disable SecureBoot (physical switch?),
- vendors forcing SecureBoot configuration/opt-in on very first boot,
- UI, tools, or documentation enhancements to make local key management and signing easier, or
- simply a slightly more aware userbase (the same way phone locking/unlocking became a mainstream concept).
If that was the primary concern, where was the criticism when his employer, the Linux Foundation, announced with great fanfare that they had released a Microsoft-CA-signed boot loader to support Linux UEFI Secure Boot with the default OEM trust setup?
If anyone has a chance at providing a FOSS UEFI signing alternative to Microsoft, it's the Linux Foundation. But they're unwilling or unable to do it. Red Hat is unwilling and has shown it has trouble securing its signing keys. Canonical tried and failed to sell vendors on including an Ubuntu key, and they weren't even looking to sign for other companies.
There will soon be plenty of people running Linux on computers with Secure Boot enabled, relying on Microsoft's CA for a chain of trust. It's a done deal, and no one is out there stepping up to provide a viable FOSS PKI alternative.
The only thing being debated here is whether many of them will be running rather slow and buggy video drivers.
Market-driven vendor support of Linux is not unprecedented. Ever heard of Dell, Lenovo, Acer, Nvidia, or the Android ecosystem?
If you, as a vendor, want to support secure boot (as a new, optional, extra feature), I think option B (be the CA) is the only right way to do it. If vendors don't like having to do duplicate work then they can cooperate and form a shared organization to be the central Linux CA. Relying on Microsoft to do the signing is not a good idea long term.
The Linux kernel is simply not the place to parse Windows binaries. It's not Linus's fault the de facto standard is a Windows binary, it's just another harmful side-effect of the Windows monopoly.
Really SecureBoot should just die on the vine. But the monopolist wants to force it on their customers, so here it is.
Signing something from an untrusted source doesn't buy you a whole lot (having dealt with revoking a bunch of Microsoft-signed stuff not too long ago, I assure you Microsoft is not infallible). If you build a driver yourself and sign it, OK. Actually, that's probably the best way: download the Nvidia driver, sign it with your own one-time key, and voila - even if someone pulls a Folgers Commercial on your driver, they can't re-sign it, because they don't even know where the one-time key came from.
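To make the one-time-key idea concrete, here's a toy sketch using textbook RSA with tiny fixed primes - deliberately insecure and purely illustrative (a real workflow would use something like openssl or sbsign with a 2048-bit-plus key). The point is the lifecycle: generate a throwaway keypair, sign the driver once, keep only the public half for verification, and destroy the private exponent so nothing else can ever be signed with it.

```python
import hashlib

# Toy, textbook RSA with tiny fixed primes -- for illustration only,
# utterly insecure. The primes, exponent, and function names are all
# invented for this sketch.
P, Q, E = 1000003, 1000033, 65537
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))   # one-time private exponent (Python 3.8+)

def digest(blob: bytes) -> int:
    # Reduce the hash so it fits under the toy modulus.
    return int.from_bytes(hashlib.sha256(blob).digest(), "big") % N

def sign(blob: bytes, d: int = D) -> int:
    """Sign with the one-time private key; the owner then discards d forever."""
    return pow(digest(blob), d, N)

def verify(blob: bytes, sig: int) -> bool:
    """Anyone holding only the public key (N, E) can check the signature."""
    return pow(sig, E, N) == digest(blob)
```

Once the private exponent is deleted, a tampered driver can't be re-signed: verification with the enrolled public key simply fails, which is exactly the property being described above.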
For a normal user, the effort to hack a bespoke signing mechanism seems non-trivial, while figuring out how to hack a giant 'trusted' signer is the pot of gold at the end of the rainbow. I would not be surprised if more signing authorities are compromised right now. Flame was signed with a Microsoft key that was actually not meant to sign code, but for several years, since the key was Microsoft's, people happily loaded and ran Flame.
I don't really think it's FUD to say trusting your security to the entity that attracts the most attention is not a great plan.
Your points are good, except I disagree that Linus's only beef is parsing PE files; I think that's really the least of his concerns. Microsoft could deliver its keys in any format - it won't solve the insecurity of not knowing whether your signer has been breached.
Good security is hard, I think that's the main problem with all of Linus's ideas.
It isn't a big deal to the average user because OS internals aren't a big deal to the average user. Nevertheless, Linux has plenty of industry-standard security measures: IOMMU support, DEP, ASLR, containers, RBAC, seccomp, and so on.
You can't expect the average user to know what technical countermeasures they need any more than you can expect them to know what garbage collection or interrupt coalescing strategy suits them best. It's important for folks with domain knowledge and pull requests to look out for them. The only problem in this case is the politicization, rooted in hatred of a version of Microsoft that differs considerably from the one that exists today.
Running as a regular user and protecting root, without running unnecessary services, is your first step; after that it's all kind of iffy.
That's advice from a different era. On the desktop (any flavor), the attack vectors are almost entirely malicious attachments, plugin exploits, browser and supporting-library exploits, other random outbound clients, and social engineering.
While Linux isn't a popular target, it is at least as vulnerable to these attacks as the others. And due to design issues in X, once an attacker can run native code as your user, it is trivial to sniff credentials from the next privilege-elevation event.
While it's not infallible, requiring the rootkit or bootkit to be signed by a CA raises the bar dramatically.
I don't think running as an unprivileged user as often as possible is out of date. Yes, there are plenty of attack vectors, but there's no reason to make things easy. You could run your browser as a different user, so it can't ever see your sudo commands or whatever - just common sense. I'm not familiar with the X exploits used to sniff credentials, but I would assume running the application as a different user than the X session would add difficulty?
FYI my day job is dealing with Windows, driver signing/loading issues, and what have you. I have very low expectations for the security-mindedness of average users. Many people will click on a link after an AV "warning", because "Hey, the AV will stop it if it's really an issue!"
Why do they have to embed the IDs of whitelisted modules in the signed kernel? Why not have a kernel that will load any module the root user tells it to, and have a userland insmod utility verify the signatures against a configuration file like /etc/accepted_module_signers.conf?
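A minimal sketch of that userland check, assuming a config format of one hex SHA-256 certificate fingerprint per line with '#' comments (the file name comes from the comment above; the format and function names are my invention), and assuming the signer certificate has already been extracted from the module:

```python
import hashlib
from pathlib import Path

# Hypothetical config file from the proposal above; assumed format:
# one hex SHA-256 fingerprint of a trusted signer certificate per line.
CONF = "/etc/accepted_module_signers.conf"

def load_accepted(conf_path: str = CONF) -> set[str]:
    """Read the allowlist, skipping blanks and '#' comments."""
    accepted = set()
    for line in Path(conf_path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            accepted.add(line.lower())
    return accepted

def signer_ok(cert_der: bytes, accepted: set[str]) -> bool:
    """Would this module's signer certificate be accepted for insmod?"""
    return hashlib.sha256(cert_der).hexdigest() in accepted
```

An insmod wrapper would extract the module's signature, run this check, and only then hand the module to the kernel - which is exactly the design the next comment pokes a hole in.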
Otherwise, you can easily construct a harmful payload consisting of the distro-signed Shim, that kernel, and a modified userspace that loads an arbitrary module which takes over the kernel and runs arbitrary code in ring 0. And then that distro's Secure Boot signing key gets revoked.
Anyway, thanks for an excellent and detailed technical take on the situation as opposed to the knee jerk bashing in many of the other posts.
There's also some excellent detail in the article below, without the histrionics.