We are taking several steps to remove this risk:
• First, today we released a Security Advisory outlining steps our customers can take to block software signed by these unauthorized certificates.
• Second, we released an update that automatically takes this step for our customers.
• Third, the Terminal Server Licensing Service no longer issues certificates that allow code to be signed.
Interestingly, one of the revoked certificates doesn't expire until 2017 and uses SHA-1. This one had certificate signing enabled! It's possible that either the private key for that certificate leaked, or it's being revoked as a precaution to remove the unnecessary capabilities.
There are very cheap collision attacks against MD5, which let you generate two values that hash to the same digest; an attacker can exploit that by getting you to sign one object, and then swapping it out for another with the same hash.
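The swap works because a signature covers only the digest, never the object itself. A toy sketch of that flaw (a deliberately weak one-byte hash so collisions are brute-forceable in microseconds, and a keyed tag standing in for a real RSA signature — every name here is illustrative):

```python
import hashlib
import itertools

# Toy "weak hash": the first byte of MD5, so only 256 possible digests and
# collisions fall out of brute force. Real MD5 collision attacks are far more
# sophisticated, but they exploit the same property shown below.
def weak_hash(data: bytes) -> bytes:
    return hashlib.md5(data).digest()[:1]

# Stand-in for a signing oracle: in reality an RSA/DSA signature over the
# digest; a secret-keyed tag is enough to make the point.
SECRET = b"signing-key"

def sign(data: bytes) -> bytes:
    return hashlib.sha256(SECRET + weak_hash(data)).digest()

def verify(data: bytes, sig: bytes) -> bool:
    return hashlib.sha256(SECRET + weak_hash(data)).digest() == sig

benign = b"benign program v1"

# Brute-force a second message with the same (weak) digest.
evil = next(
    b"evil payload %d" % i for i in itertools.count()
    if weak_hash(b"evil payload %d" % i) == weak_hash(benign)
)

sig = sign(benign)        # the victim signs the benign object...
assert verify(evil, sig)  # ...and the same signature validates the swap
```

The asserts pass because `verify` only ever sees the digest; anything that collides with the signed object inherits its signature.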
The articles, though, seem to suggest that the "real" certificates were being granted too many powers. Microsoft's advisory seems to suggest that both were potentially in play (MD5 collisions and overly broad purposes on real certificates). It's hard to tell what really happened here...
Update: others say it's that the certificate was misconfigured to allow anyone fetching the cert to sign code as MS... oops.
So both the code-signing bit and a real life collision! Get the popcorn :)
There are plenty of examples of companies keeping something secure. There aren't many of industry bodies doing so.
I'd trust a bank (well, some banks) before I'd trust either the W3C or IETF with the keys to the universe.
An alternative was producing some sort of overall Linux key. It turns out that this is also difficult, since it would mean finding an entity who was willing to take responsibility for managing signing or key distribution. That means having the ability to keep the root key absolutely secure and perform adequate validation of people asking for signing. That's expensive. Like millions of dollars expensive. It would also take a lot of time to set up, and that's not really time we had. And, finally, nobody was jumping at the opportunity to volunteer. So no generic Linux key.
It's not like other software projects or companies didn't have grave security failures.
If secure boot were about securing your computer from third parties (e.g. malware, rootkits cough) installing unauthorized software, then the correct implementation would generate a private key on each machine at first boot, used to sign all software before it runs. This would ensure that the only software that could run on a machine was software specifically authorized by the user. There would also be no worry about leaked keys, because exposing any one key puts only a single machine at risk.
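That scheme can be sketched in a few lines. This is a stdlib-only toy under stated assumptions: an HMAC key stands in for the asymmetric key a real implementation would keep in firmware or a TPM, and the file name and function names are all invented for illustration:

```python
import hashlib
import hmac
import os

# Hypothetical key location; in reality this would be firmware NVRAM / TPM.
KEY_FILE = "machine.key"

def first_boot_keygen() -> bytes:
    """Generate a key unique to this machine on first boot, then reuse it."""
    if not os.path.exists(KEY_FILE):
        with open(KEY_FILE, "wb") as f:
            f.write(os.urandom(32))
    with open(KEY_FILE, "rb") as f:
        return f.read()

def authorize(key: bytes, image: bytes) -> bytes:
    """The owner explicitly signs a binary they want this machine to run."""
    return hmac.new(key, image, hashlib.sha256).digest()

def boot_check(key: bytes, image: bytes, sig: bytes) -> bool:
    """Firmware refuses anything not signed by this machine's own key."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

Since the key never leaves the machine, there is no central authority whose compromise endangers everyone — which is exactly the property the Flame incident shows a vendor-held root key lacks.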
Unfortunately, secure boot is not about preventing the spread of malware. It's about securing the computer from you, the owner. It's about DRM, and ensuring that the code running on a computer (e.g. Blu-Ray player) is controlled and trusted by a third-party, who is then able to dictate how certain information available to the machine is processed.
Don't let the name fool you.
The problem is users aren't able to tell if a given downloaded piece of code is safe or not, so there is a need for some kind of trusted code distribution system as well. Plus, once you trust a vendor, you probably trust most of their code, or at least most of the updates to specific packages, and may not want to audit it each time yourself.
Disk integrity (which your solution would provide) is necessary but not sufficient for secure computing.
A one-machine-one-key scheme is impractical for so many reasons, one of them being exactly that: onus is on the user to keep the key available but secret. People who blindly double-click on random .exe attachments would supply the key pronto as soon as any malware would ask for it. As all sysadmins know, securing a machine from its owner is often the right thing to do, and in that sense UEFI is not a bad thing.
Besides, if UEFI really becomes a staple of the Windows world, I can see enterprise/SME customers requiring a sanctioned way to add custom cert authorities, to which Microsoft won't be able to say no. Because Taiwanese manufacturers like to reuse parts wherever possible, the feature will trickle down to the consumer market.
The more I read about UEFI, the more the scaremongering seems like paranoia.
It most certainly is. We're not talking about an IT department controlling company-owned machines. We're talking about Microsoft controlling user-owned machines.
Was a Microsoft employee involved?
Can't we have some sort of chain of trust beyond the reach of sovereign governments?
"When we [Microsoft] initially identified that an older cryptography algorithm could be exploited and then be used to sign code as if it originated from Microsoft, we immediately began investigating Microsoft’s signing infrastructure to understand how this might be possible. What we found is that certificates issued by our Terminal Services licensing certification authority, which are intended to only be used for license server verification, could also be used to sign code as Microsoft. Specifically, when an enterprise customer requests a Terminal Services activation license, the certificate issued by Microsoft in response to the request allows code signing without accessing Microsoft’s internal PKI infrastructure."
Interesting. The Microsoft Terminal Services CA has the corporate identity, but not the corporate trust chain. Is this a deliberate back-door provided to the US government, or simply an ordinary vulnerability?
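What separates "license server verification only" from "can sign code as Microsoft" is the certificate's extended key usage (EKU) extension; the code-signing EKU is OID 1.3.6.1.5.5.7.3.3. As a rough illustration, its DER encoding can be spotted with a crude byte scan — a heuristic sketch, not a real X.509 parser (a proper check would parse the extendedKeyUsage extension and confirm the OID actually sits inside it):

```python
import base64

# DER encoding of id-kp-codeSigning, OID 1.3.6.1.5.5.7.3.3:
# 0x06 = OID tag, 0x08 = length, then the base-128 arc encoding.
CODE_SIGNING_OID = bytes.fromhex("06082b06010505070303")

def pem_to_der(pem: str) -> bytes:
    """Strip the PEM armor lines and base64-decode the body."""
    body = "".join(line for line in pem.splitlines() if "-----" not in line)
    return base64.b64decode(body)

def mentions_code_signing(der: bytes) -> bool:
    # Crude heuristic: the OID bytes appearing anywhere in the DER blob.
    return CODE_SIGNING_OID in der
```

A certificate intended only for license verification should never have contained that EKU; by the advisory's account, the Terminal Services certs effectively did.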
We need some sort of certification, an independent audit, of X509 certificate authorities and their trust chains. Agreed?
I probably wouldn't rush to fix a convenient exploit once discovered, though, if existing USG policies could protect against it (removing most CAs, using DOD CAs) for government operations.
Certificate signing is only as good as its weakest points, which are 1) humans, 2) code, 3) maths, respectively.
You can't trust (1), ever.
(2) is flawed by the fact that (1) made it.
(3) is pretty good but relies on a few assumptions which we appear to be arrogant enough to assume will always stand.
1. The certificate appeared to be available to anyone who was looking hard enough. Microsoft provided the misconfigured certificate to anyone activating their Terminal Services product (!). Pretty embarrassing.
2. It's not evident what the signing requirements are for Microsoft Automatic Updates code (at least I can't find them). Presumably they validate an explicit Windows Update chain, but if they don't, this could perhaps enable an attacker to auto-install the Flame virus as an update. I doubt that would be the case, but their security announcements aren't very forthcoming.
Regardless, it appears that a signed driver is enough to pwn any modern Windows box via USB. "The system is installing driver software for your device..."
EDIT: What it most likely would work for over the network would be a man-in-the-middle attack on users who "Always trust ActiveX controls from Microsoft". Not to mention plain old impersonating websites for users of MSIE and Chrome.
A scary but plausible possibility is that an attacker with such a cert could forge client certificate credentials to obtain remote access via RDP, MS Terminal Services Gateway, IIS certificate mapping, etc.
"...Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update system..."
More info: https://www.securelist.com/en/blog/208193558/Gadget_in_the_m...
Via USB you can pwn any modern OS by implementing standard mouse, keyboard and display device classes.