More details in the MS technet post:
We have discovered through our analysis that some components of the malware have been signed by certificates that allow software to appear as if it was produced by Microsoft. We identified that an older cryptography algorithm could be exploited and then be used to sign code as if it originated from Microsoft. Specifically, our Terminal Server Licensing Service, which allowed customers to authorize Remote Desktop services in their enterprise, used that older algorithm and provided certificates with the ability to sign code, thus permitting code to be signed as if it came from Microsoft.
We are taking several steps to remove this risk:
• First, today we released a Security Advisory outlining steps our customers can take to block software signed by these unauthorized certificates.
• Second, we released an update that automatically takes this step for our customers.
• Third, the Terminal Server Licensing Service no longer issues certificates that allow code to be signed.
This doesn't seem to make a lot of sense. On one hand they say that it's an "older algorithm", which presumably implies that the vulnerability is a cryptographic weakness. On the other they point out that the provided certs had "the ability to sign code", which is a straight-up authorization failure (and a staggeringly bad one given the trust regime). Which is it?
As far as I can tell from the patch, the "Microsoft Enforced Licensing Intermediate PCA" certificate had code signing enabled. This is a little silly, but not an issue as long as the corresponding private key is kept safe. Unfortunately, the certificates were signed using MD5(-RSA), which is broken. I believe that what happened is that someone found a collision for this certificate and used the resulting fake certificate with known keys to sign Flame.
Interestingly, one of the revoked certificates expires in 2017 and used SHA-1. This one had certificate signing enabled! It's possible that either the private key for that certificate escaped, or it's being revoked as a precaution to remove the unnecessary capabilities.
Though MD5 is considered utterly broken for cryptographic purposes, AFAIK there are as yet no known practically viable preimage attacks. Forging a match for an already-issued certificate would require a (second) preimage attack.
There are very cheap collision attacks, which let you generate two values that hash to the same MD5 digest; an attacker can exploit that by making you sign one object, and then swapping it out for another with the same hash.
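To see why collisions are so much cheaper than preimages, here is a toy birthday search against a hash truncated to 24 bits (MD5's first three bytes; the truncation is purely for demonstration). A collision falls out after roughly 2^12 attempts, while inverting the same 24-bit hash would take around 2^24. Scaled to full 128-bit MD5, that's the gap between a feasible 2^64-ish collision search (made far cheaper still by the structural attacks on MD5) and an infeasible 2^128 preimage search. All names here are made up for the sketch.

```python
import hashlib

def truncated_md5(data: bytes, nbytes: int = 3) -> bytes:
    """First `nbytes` bytes of the MD5 digest -- a toy 24-bit hash."""
    return hashlib.md5(data).digest()[:nbytes]

def find_collision(nbytes: int = 3):
    """Birthday search: expect a collision after ~2^(8*nbytes/2) hashes."""
    seen = {}
    i = 0
    while True:
        msg = b"msg-%d" % i
        h = truncated_md5(msg, nbytes)
        if h in seen:
            return seen[h], msg   # two distinct inputs, same truncated hash
        seen[h] = msg
        i += 1

a, b = find_collision()
assert a != b
assert truncated_md5(a) == truncated_md5(b)
print("collision:", a, "and", b)
```

The dictionary-based search is the whole trick: storing every hash seen so far turns the quadratic pairwise comparison into a linear scan, which is why collision resistance degrades at the square root of the hash's size.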
That's true. However, it's possible to do it with two co-created certificates (http://www.schneier.com/blog/archives/2008/12/forging_ssl_ce...), and depending on how the licensing process works, that could easily have been done. I would imagine that the creators of Flame have more than enough computing power to do that.
The articles though seem to be suggesting that the "real" certificates were being granted with too many powers. Microsoft's advisory seems to suggest that both were potentially used (MD5 collisions and overly-broad purposes on real certificates). It's hard to tell what really happened here...
Ah, OK. My read was that the certs granted by the Enforced Licensing thing were able to sign code. Your explanation makes a ton of sense to me. They just forgot to audit for MD5-based keys and left this alive in the wild.
Another alternative... a virus which looks a lot like it was coded by the NSA or a similar agency is now found and in the open to be analyzed by everyone. It turns out that it's using a Microsoft signature. Microsoft needs some explanation which doesn't completely piss off all its customers in the Middle East, or wait - let's make that any state customers worldwide. Yeah, crazy conspiracy theory stuff, I'll put on my tin foil hat now.
No, I think the MD5 collision hypothesized by rb12345 above makes a lot more sense. Someone went out and audited the full set of code-signing certs, discovered this oddball one, and exploited it. There's no secret in the process that couldn't have been discovered by a suitably determined attacker. It was a very understandable MS process goof that allowed this oddball cert to live.
Yeah, the strange part is just that we're talking about a virus which according to all reports is mostly used for attacks on the Middle East. And according to the Kaspersky guys it has a complexity that hints a lot at state sponsorship. We just learned this week that the US has a cyberwar program and works on it together with Israel - and that's not even some crazy conspiracy theory but officially acknowledged. Also, an Israeli minister hinted that they would use such tools hours after the Flame news got reported. The question is - would such an agency rather try to hack the Microsoft certificates, or simply ask, and tell Microsoft to prepare a good excuse once it blows up? I mean, if Flame was written by anyone else, I'm pretty sure they hacked Microsoft - but that would mean there's someone out there now writing viruses at a level which makes virus experts from Kaspersky think it can't be done without state sponsorship. Or we have the NSA hacking Microsoft now - well, that would at least be some fun.
Most info I can get on this is from a 2004 reverse-engineering of RDP (http://efod.se/media/thesis.pdf). It established that when Terminal Services is set up in Application Server Mode (i.e. corporate environments), it's configured to sign requests using X.509 certificates. It wouldn't be much of a stretch to extract the certificates used if they were configured incorrectly. My guess is that because this was engineered as a licensing issue (i.e. DRM added after the fact, not as a trust issue), corporates weren't issued specific certs, so there's very little that could be done to trace down the point where this leaked.
It's just one of many reasons why Microsoft alone shouldn't decide what operating systems can work on UEFI machines. If the future is UEFI, then we need an industry body like the W3C or something to work with OS vendors, not just Microsoft.
The idea that an industry body would be more secure than the security department of a major software company is something I don't see a lot of evidence for. Apart from the root DNS servers, I can't think of any examples.
There are plenty of examples of companies keeping something secure. There aren't many of industry bodies doing so.
I'd trust a bank (well, some banks) before I'd trust either the W3C or IETF with the keys to the universe.
An alternative was producing some sort of overall Linux key. It turns out that this is also difficult, since it would mean finding an entity who was willing to take responsibility for managing signing or key distribution. That means having the ability to keep the root key absolutely secure and perform adequate validation of people asking for signing. That's expensive. Like millions of dollars expensive. It would also take a lot of time to set up, and that's not really time we had. And, finally, nobody was jumping at the opportunity to volunteer. So no generic Linux key.
I'm going to jump out of the woodwork to answer this one, because the solution is really obvious.
If secure boot were about securing your computer from third parties (e.g. malware, rootkits cough) installing unauthorized software, then the correct implementation would generate a private key on each machine at first boot and use it to sign all software before it runs. This would ensure that the only software that could run on a machine was specifically authorized by the user. In addition, there would be no worry about leaked keys, because a key exposure would compromise only a single machine.
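The per-machine scheme described above can be sketched in a few lines. This is a minimal illustration, not a real secure-boot implementation: HMAC with a machine-local secret stands in for the asymmetric signing a real design would use, and all names are hypothetical.

```python
import hashlib
import hmac
import secrets

class MachineSigner:
    """Hypothetical sketch of the one-machine-one-key idea: a secret
    generated once at first boot, used to authorize every binary.
    In a real design the key would live in tamper-resistant hardware
    and the scheme would be asymmetric, not HMAC."""

    def __init__(self):
        # Generated at "first boot"; never leaves this machine.
        self.key = secrets.token_bytes(32)

    def sign(self, binary: bytes) -> bytes:
        return hmac.new(self.key, binary, hashlib.sha256).digest()

    def verify(self, binary: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(binary), sig)

signer = MachineSigner()
blob = b"\x7fELF...user-approved-bootloader"
sig = signer.sign(blob)
assert signer.verify(blob, sig)            # user-authorized code runs
assert not signer.verify(b"malware", sig)  # anything unsigned is refused
```

Because the key is per-machine, a signature minted here is worthless anywhere else, which is exactly the "blast radius of one machine" property the comment argues for.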
Unfortunately, secure boot is not about preventing the spread of malware. It's about securing the computer from you, the owner. It's about DRM, and ensuring that the code running on a computer (e.g. Blu-Ray player) is controlled and trusted by a third-party, who is then able to dictate how certain information available to the machine is processed.
TCPA/TPM's measured/trusted boot gives you basically that.
The problem is users aren't able to tell if a given downloaded piece of code is safe or not, so there is a need for some kind of trusted code distribution system as well. Plus, once you trust a vendor, you probably trust most of their code, or at least most of the updates to specific packages, and may not want to audit it each time yourself.
Disk integrity (which your solution would provide) is necessary but not sufficient for secure computing.
I shudder at the thought of having to generate and store a crypto key for the lifetime of a machine in order to run it. This is why, btw, I've abandoned full-disk encryption: I once lost a key (in shambolic fashion), hence losing an entire disk.
A one-machine-one-key scheme is impractical for so many reasons, one of them being exactly that: onus is on the user to keep the key available but secret. People who blindly double-click on random .exe attachments would supply the key pronto as soon as any malware would ask for it. As all sysadmins know, securing a machine from its owner is often the right thing to do, and in that sense UEFI is not a bad thing.
Besides, if UEFI really becomes a staple of the Windows world, I can see enterprise/SME customers requiring a sanctioned way to add custom cert authorities, to which Microsoft won't be able to say no. Because Taiwanese manufacturers like to reuse parts wherever possible, the feature will trickle down to the consumer market.
The more I read about UEFI, the more the scaremongering seems like paranoia.
There are crypto chips that provide secure storage and signing for private keys. If these are used in UEFI, then the chip can sign e.g. Red Hat's boot certificate without the private key ever touching the CPU.
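The key-isolation property the comment describes can be sketched with a toy object standing in for the chip: the private half lives only inside it, and the host ever sees just the public key and signatures. This is textbook RSA with tiny fixed primes, purely for illustration; nothing here is secure, and all names are made up.

```python
import hashlib

class ToyCryptoChip:
    """Sketch of a TPM/HSM-style interface: the private exponent is
    created inside the 'chip' and never exported. Textbook RSA with
    small hardcoded primes -- an illustration, not real crypto."""

    def __init__(self):
        p, q = 104729, 1299709              # toy primes "burned in"
        self._d = pow(65537, -1, (p - 1) * (q - 1))  # private, stays inside
        self.n, self.e = p * q, 65537                # public half, exported

    def sign(self, message: bytes) -> int:
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % self.n
        return pow(h, self._d, self.n)

def verify(public_key, message: bytes, sig: int) -> bool:
    """Runs on the host CPU using only public values."""
    n, e = public_key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

chip = ToyCryptoChip()
sig = chip.sign(b"Red Hat boot certificate")
assert verify((chip.n, chip.e), b"Red Hat boot certificate", sig)
```

The point of the design is visible in the code: everything the host needs for verification is `(n, e)` and the signature, so the private exponent never has to cross into CPU-addressable memory.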
"When we [Microsoft] initially identified that an older cryptography algorithm could be exploited and then be used to sign code as if it originated from Microsoft, we immediately began investigating Microsoft’s signing infrastructure to understand how this might be possible. What we found is that certificates issued by our Terminal Services licensing certification authority, which are intended to only be used for license server verification, could also be used to sign code as Microsoft. Specifically, when an enterprise customer requests a Terminal Services activation license, the certificate issued by Microsoft in response to the request allows code signing without accessing Microsoft’s internal PKI infrastructure."
Interesting. The Microsoft Terminal Services CA has the corporate identity, but not the corporate trust chain. Is this a deliberate back-door provided to the US government, or simply an ordinary vulnerability?
We need some sort of certification, an independent audit, of X509 certificate authorities and their trust chains. Agreed?
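As a rough sketch of what such an audit might automate: flag any certificate signed with a broken hash, and any CA certificate whose usages are broader than its purpose. The dict format, field names, and example values below are all made up for illustration; a real audit would parse actual X.509 certificates rather than hand-built metadata.

```python
# Hypothetical audit checks over simplified certificate metadata.
# Field names and values are illustrative, not a real X.509 schema.
WEAK_SIG_ALGS = {"md5WithRSAEncryption", "md2WithRSAEncryption"}

def audit(cert: dict) -> list:
    findings = []
    if cert["sig_alg"] in WEAK_SIG_ALGS:
        findings.append("signed with a broken hash algorithm")
    if cert["is_ca"] and "codeSigning" in cert["ekus"]:
        findings.append("CA certificate also allows code signing")
    if cert["is_ca"] and cert.get("path_len") is None:
        findings.append("CA with unconstrained path length")
    return findings

# Illustrative entry loosely modeled on the cert discussed above;
# the real certificate's exact fields differ.
licensing_ca = {
    "subject": "Microsoft Enforced Licensing Intermediate PCA",
    "sig_alg": "md5WithRSAEncryption",
    "is_ca": True,
    "ekus": ["licenseServerVerification", "codeSigning"],
    "path_len": None,
}
for finding in audit(licensing_ca):
    print("FLAG:", finding)
```

Either of the first two checks alone would have caught the certificate at issue here, which is what makes the lack of such an audit so striking.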
Allowing code signing with a certificate that we generate for and hand out to enterprise customers was certainly not something we intended to do at Microsoft. It was sloppy and a mistake on our part that we rushed to address as soon as we discovered the issue. Honestly, I think many of us are terribly embarrassed by this failure and a little surprised that no one (us included) had discovered this issue years ago. :(
Humans are always the weakest link. I worked on Microsoft's crypto QA team many years ago. VeriSign issued some test certificates for our internal testing. A co-worker called VeriSign customer service, asking them to revoke one of the test certificates so we could test CRLs (Certificate Revocation Lists). Without any additional authentication or verification, the customer service person revoked Microsoft Corporation's root certificate! The misunderstanding was quickly remedied, but it was a perfect demonstration that CA trust chains are a house of cards.
I agree that is by far the most likely case, but if you wanted to give the USG a backdoor to sign code, this is also exactly how you'd do it -- as a deniable oversight, which, when found, could be explained as a simple oversight and rapidly corrected.
Yeah, so would I. It also got put in place long ago, which is before putting back doors into commodity OSes was such a useful thing for national security (probably compromising crypto systems and specific RF hardware used in air defense, etc. was the primary goal back then).
I probably wouldn't rush to fix a convenient exploit once discovered, though, if existing USG policies could protect against it (removing most CAs, using DOD CAs) for government operations.
1. The certificate appeared to be available to anyone who was looking hard enough. Microsoft provided the misconfigured certificate to anyone activating their Terminal Services product (!). Pretty embarrassing.
2. It's not evident what the signing requirements are for Microsoft Automatic Updates code (at least I can't find them). Presumably they validate an explicit Windows Update chain, but if they don't, this could perhaps enable an attacker to auto-install the Flame virus as an update. I doubt that would be the case, but their security announcements aren't very forthcoming.
The Windows Update signing requirements are, AFAICT, not documented and they do require a special chain. Whether having Microsoft in the root is special enough is another question.
Regardless, it appears that a signed driver is enough to pwn any modern Windows box via USB. "The system is installing driver software for your device..."
EDIT: What it most likely would work for over the network would be a man-in-the-middle attack on users who "Always trust ActiveX controls from Microsoft". Not to mention plain old impersonating websites for users of MSIE and Chrome.
A scary but plausible possibility is that an attacker with such a cert could forge client certificate credentials to obtain remote access via RDP, MS Terminal Services Gateway, IIS certificate mapping, etc.