Microsoft proves backdoor keys are a bad idea (theregister.co.uk)
299 points by ChuckMcM on Aug 10, 2016 | 100 comments

(Disclaimer: MS employee, non-security expert here.)

I've read the article, the comments here, and the discussion in other places, and I'm seeing sentiment that this is a big fuck-up on Microsoft's part. I might be completely misunderstanding, but I just don't see it.

In order to use the backdoor, you've got to flash firmware, so you've got to have physical access to the device. If an attacker has physical access to your device, you're already screwed.

So, I don't doubt that the key exists (I had to use it myself when testing RT devices back in the win8 days), but what's the exploit here? Why is it, as the title suggests, a 'bad idea'? Isn't a secure boot policy that can be bypassed with physical access more secure than none?

The point The Register makes is not that this allows unlocking devices (though that's interesting in its own right), but that it is done via a "secret key" that has now been exposed. Very similar to what the government wants with key escrow and other backdoor mechanisms for decryption of communication.

Maybe to clarify: it highlights that the mechanism (a golden key) is flawed. That Microsoft uses it for boot loaders is unimportant.

Ok, I get it. The message is "don't use backdoors, because they'll inevitably get leaked", which I agree with.

Unfortunately, I don't think that's the message that's being interpreted by the vast majority of readers. I'm delving into opinion territory now, but when the word 'backdoor' is used, aren't most people going to assume that it's an FBI backdoor, instead of a test/development backdoor? This seems like the kind of article that fans the flames for conspiracy theorists, and no one seems to be doing anything to correct the record.

Is there any difference between a test/development backdoor and an FBI backdoor?

If you let backdoors into the system, of course the secret services will demand access to them.

In fact, backdoors that were put in place under pressure from the secret services will be passed off as developer backdoors when they're found by the mainstream.

First they install backdoors in systems, in order for MS or the US government to have complete access to any computer in the world, then they worry when the Chinese and Russians find them.

I think you're missing my point (and my poor wording probably didn't help).

A backdoor that requires physical access isn't a backdoor. If an attacker has such access, you're already screwed.

A backdoor that requires administrative privileges isn't a backdoor. If an attacker has such access, you're already screwed.

The so-called dev/test 'backdoor' really isn't a backdoor. It's an 'unlock' tool that's required by anyone who's going to engineer the device. My main beef is that this article appears to be re-branding the engineering unlock as a backdoor, and confusion is obviously ensuing.

Again, in my original post I asked, "What's the exploit?" I understand that the existence of an exploit might not be the article's subject, but if you really think that there's a security problem here, I'll ask it again: "What's the exploit?"

> A backdoor that requires physical access isn't a backdoor. If an attacker has such access, you're already screwed.

The security model of "secure boot" is designed to mitigate physical access, so a private way of circumventing that property is a backdoor.

While breaking this one aspect leaves other useful security properties intact, if those properties had been the only goal, they could have been implemented with a simpler and more user-friendly system.

> A backdoor that requires physical access isn't a backdoor. If an attacker has such access, you're already screwed

So why did the original "load an unsigned debug build" functionality require cryptography rather than just, say, holding down the volume button during boot? Because it's not nearly as simple as "I have the hardware, I can do anything".
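To make that distinction concrete, here's a toy sketch of a signed unlock check. All names are invented, and HMAC is used purely to keep it stdlib-only; real Secure Boot policies are verified with asymmetric RSA signatures, so the device holds only a public key and cannot mint its own unlock blobs:

```python
# Toy model of a signed "unlock policy" check (names invented; HMAC is a
# symmetric stand-in for the real asymmetric signature scheme).
import hashlib
import hmac

SIGNING_KEY = b"held-only-by-the-vendor"  # the "golden key" in this story

def sign_policy(policy: bytes) -> bytes:
    """What only the vendor's signing infrastructure can do."""
    return hmac.new(SIGNING_KEY, policy, hashlib.sha256).digest()

def device_accepts(policy: bytes, signature: bytes) -> bool:
    # The firmware refuses any policy whose signature doesn't check out,
    # so merely holding the hardware (volume buttons etc.) is not enough.
    expected = hmac.new(SIGNING_KEY, policy, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

unlock = b"testsigning=on"
assert device_accepts(unlock, sign_policy(unlock))  # vendor-signed: accepted
assert not device_accepts(unlock, b"\x00" * 32)     # forged: refused
```

The point of the asymmetry in the real scheme is exactly what's at stake here: possession of the device never implies possession of the signing key, until that key (or a blob signed by it) leaks.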

> A backdoor that requires physical access isn't a backdoor. If an attacker has such access, you're already screwed.

> A backdoor that requires administrative privileges isn't a backdoor. If an attacker has such access, you're already screwed.

Then why bother trying to lock it down in the first place?

Service providers that wish to provide subsidized devices as part of service contracts usually require that said devices can't be repurposed for the duration of the service contracts. Thus a means is needed to lock a device.

The term "backdoor" derives from the very notion of a door in the back of whatever it is you're trying to secure.

I'm finding your line of argument across multiple comments increasingly disingenuous. You're hurting your argument far more than you're helping.

With all due respect, what argument do you think I'm trying to make?

I ask this because I'm still rather unclear about the argument that the original article is trying to make. I'm even more confused about what conclusions slipstream would have us draw from his web-post. Some comments here say that it's not an exploit, that instead it's a lesson about why you shouldn't, as a matter of policy, include backdoors. Others say that it is a backdoor/security hole, and are appropriately up in arms over it. The only thing I'm confessing is confusion.

I understand that you think I'm being disingenuous, and the nice thing is that we really don't need to continue this conversation. If it is as bad as some are suggesting, then wouldn't an exploit be out in the near future? I'd expect such an exploit, or even evidence of a backdoor, to make further news, and if I see it then I'll obviously have my answer.

I'll bite. The original article is from the Register. If you work at Microsoft and read HN I'd expect you have a reasonable idea of their average article quality.

I don't think they were trying to make any particular point other than generating pageviews with a combination of "haha M$" and righteous anti-backdoor anger.

That said, I myself agree with other commenters that your stance "this isn't a backdoor because it requires physical access; if you've given up physical access you're already screwed" is beyond disingenuous. Disregarding a login screen bypass by the same logic would be rightly pilloried. Yes, physical security is the hardest to improve, but that's exactly the carrot Microsoft has used to try and convince the world Secure Boot isn't a pure anti-consumer move.

> "...your stance "this isn't a backdoor because it requires physical access; if you've given up physical access you're already screwed" is beyond disingenuous. "

Ok, I'm open to the possibility that my views might be dated on this.

But, to be fair:

1) The 'physical access' rule was an absolute given in training that I've taken. (I'll let you draw your own conclusions, since the training was hosted by Microsoft.) I guess I've had it drilled into my head for so long that I didn't even think the assertion would be controversial here.

2) Schneier commented here (https://www.schneier.com/blog/archives/2009/10/evil_maid_att...), stating: "As soon as you give up physical control of your computer, all bets are off."

Granted, Schneier's comment was in 2009, and it's possible that expectations on security have changed since then, but

3) This Stack Exchange question (http://security.stackexchange.com/questions/19334/what-can-a...) is a bit more recent. Some quotes:

"Physical security is a critical (arguably the most critical) part of IT Security. At the end of the day, almost anything can be overridden with local access to the hardware."

"If a "hacker" with any real experience or skill has physical access to a PC, I would just throw away the hard drive and start fresh."

4) Even some other comments in this thread (https://news.ycombinator.com/item?id=12264137) don't paint my notion as being as disingenuous as you might.

Again, I'm not a security expert, but do you think I could be forgiven for making such an assertion?

Not to split hairs, but there's physical access and there's physical access. I've got a Windows RT device up there in my living room, but as much as I can pick it up, heft it at the wall, or poke my pinkie into its USB port, I'm not the kind of dude who can crack it open and steal an encryption key that's being transmitted across a bus on the third layer down of its motherboard. I can, however, log in as administrator, download a CMD script (or whatever), and run it. Which one of those is physical access? I guarantee that if you asked one of those greybeards who told you about physical access, they'd back up to the stealing-a-key-from-a-bus scenario, which is out of bounds of reality for most of us. The exploit at hand is not, and I think that's the difference.

> Again, I'm not a security expert, but do you think I could be forgiven for making such an assertion?

Yeah, I forgive you.

The "backdoor" isn't the package that got leaked, it's the private key that signed the package. Without the key, the deployed packages wouldn't be serviceable in case a real exploit was found. Without a serviceability plan, they couldn't have released Secure Boot. So the question isn't whether there was a back door - there had to be a back door - it's whether Secure Boot is a legitimate thing to have.

Deflecting all possible arrows comes to mind.

Throwing spaghetti at a wall, as well.

You're not arguing coherently, effectively, logically, or using terms defined as they're commonly understood.

It's like saying the admin/root account is a backdoor because it allows you to do anything to the system.

> It's like saying the admin/root account is a backdoor because it allows you to do anything to the system.

It's more like saying that having an admin/root account and a post-it with a password hint is a backdoor (which is also wrong, of course).

Installing an OS when Microsoft doesn't want you to is the exploit. The legitimate user is the "attacker," Microsoft (or their policy) is the "target."

(I'm not sure if that's what the Register was trying to say, because they're kind of screechy and incoherent at the best of times, but that's how I personally read it.)

A backdoor is a backdoor: unauthorized access to a user's device. What does it matter if it's Microsoft, the FBI, or the Chinese government using it? By definition, it can't be "just Microsoft" anyway, as the people who discovered this pointed out, whether it's up to Microsoft or not.

> and no one seems to be doing anything to correct the record.

Maybe Microsoft feels guilty about it? It reminds me of when a journalist asked them about Bitlocker being backdoored, and they refused to comment.


In other words, if Microsoft can't keep the proverbial cat in the bag what hope do other less tech-savvy companies or organizations have?

Like the IoT makers who lock their devices by means of clear text passwords hardcoded into the device firmware?

Most companies don't go to the extent of having a cryptographically secure key validated by a TPM. At least MS is trying to secure the system.

MS are trying to secure their monopoly. If they wanted to secure the system, key management would be up to the user.

But it's ok if Samsung, Google, Apple or whoever else lock you out of the device? Or at least they're not doing it to protect their monopoly.

Also, we're talking about Microsoft ARM based devices so what monopoly are they trying to protect? The monopoly on landfill space they used to dump all the Windows RT devices they failed to sell?

> But it's ok if Samsung, Google, Apple or whoever else lock you out of the device? Or at least they're not doing it to protect their monopoly.

No, it is not ok. Kindly point me to the place where I said it is ok. (That said, Google sells phones with unlocked boot loaders, and has similar "Developer Mode" support for Chromebooks. Samsung and Apple to the best of my knowledge do not sell unlocked phones but do sell unlocked laptops.)

> Also, we're talking about Microsoft ARM based devices so what monopoly are they trying to protect? The monopoly on landfill space they used to dump all the Windows RT devices they failed to sell?

Ah, but they wouldn't have produced millions of these devices if they didn't think they could sell them. The ARM based devices were a test of the waters.

Microsoft requires, on x86, that trust of the MS key be preinstalled and that it be possible to turn off Secure Boot - but they do not require (e.g.) that there be an ability for the user to remove the MS trust or add new trust. (All of these requirements support their monopoly.)

I cannot know, but I suspect Microsoft is playing the long game here, and that at some point after Win7 extended support (4 years from now), they plan to require that Secure Boot cannot be turned off for a machine to be certified to run Win10 "dominion edition". They've started with restricting kernel drivers in the "anniversary edition". I suspect the ARM lockdown was a test for this.

If I am right, the failure of Windows RT will not change the course of this long game; hopefully something else will (right now, the emergence of reasonable Chromebooks is the only thing that appears to have the potential to stop it).

> Google sells phones with unlocked boot loaders

They also sell carrier editions with locked boot loaders just like everyone else. Locking boot loaders is usually a carrier requirement imposed on manufacturers, not something OEMs arbitrarily decide to do, unless you're TP-Link.

> have similar "Developer Mode" support for Chromebooks.

CoreBoot on Chromebooks is just as locked down as UEFI Secure Boot if not more so. The RW Firmware isn't loaded unless the RO Firmware permits it and you can't modify or replace the RO Firmware by software. It requires hardware modification. Enabling Developer Mode on a Chromebook is analogous to disabling Secure Boot in UEFI.

> Samsung and Apple to the best of my knowledge do not sell unlocked phones but do sell unlocked laptops).

Samsung UEFI laptops have been known to brick if you try to boot Linux, not fail to boot but BRICK. Apple's laptops are absolutely locked down using EFI, you can't install any other OS unless Bootcamp allows it.

> Microsoft requires, on x86, that trust of the MS key be preinstalled and that it be possible to turn off Secure Boot

Secure Boot is a security feature that validates code prior to execution via PKI, using a signature database. The PK is created and owned by the OEM, and the OEM can add KEKs from vendors (or distros) to allow them to sign their code. This database is owned, controlled, and populated by the OEM, not Microsoft. Microsoft has no control over it beyond making their software refuse to function on non-compliant systems. However, the purpose of the feature is the inverse of that: keeping untrusted code out, not enforcing compliance. That being said, there are several Linux distros that support Secure Boot and participate in the signing PKI.

Starting with Windows 8.1, Microsoft requires that new systems supporting Secure Boot have it enabled with Microsoft's PKI. That doesn't stop you from disabling it to use another OS. It doesn't inherently lock anyone out of using whatever software they want; that decision is made by the vendor. The whole point of the system is to protect users from a compromised system: if someone tampers with Windows or its drivers, then the firmware will refuse to boot it. It has absolutely nothing to do with locking people out.
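The trust chain described above might be sketched roughly like this (heavily simplified: real firmware verifies X.509 certificates and PKCS#7 signatures, and the signer names here are illustrative, not the actual certificate names):

```python
# Heavily simplified model of the Secure Boot databases described above.
# PK authorizes changes to the KEK list; KEKs authorize changes to db/dbx;
# an image boots only if its signer is in db and not revoked in dbx.
# Strings stand in for cryptographic keys/certificates.

db = {"Microsoft Windows Production CA", "Microsoft UEFI Third-Party CA"}
dbx: set[str] = set()  # forbidden/revoked signatures

def may_boot(image_signer: str) -> bool:
    return image_signer in db and image_signer not in dbx

assert may_boot("Microsoft Windows Production CA")
assert not may_boot("unsigned-hobby-os")

# Revoking a leaked or compromised signer is done by adding it to dbx:
dbx.add("Microsoft Windows Production CA")
assert not may_boot("Microsoft Windows Production CA")
```

The last two lines hint at the cleanup path for an incident like this one: push a dbx update revoking the compromised material, at the cost of breaking everything legitimately signed by it.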

Now, on their RT/Mobile devices you can't disable Secure Boot. That's akin to locked ROMs, and it's a requirement from Microsoft. There's no stated reason for this, but it's obviously because Microsoft gives OEMs subsidies to make Windows RT/Mobile devices, so the hardware is often less expensive than comparable Android hardware. The last thing MS wants is for people to replace Windows on the devices with CyanogenMod.

> They've started with restricting kernel drivers in the "anniversary edition".

Your implication is that this is a user hostile action when it's really a security measure to prevent naive users from installing hostile drivers. You can disable the feature by turning on developer mode. Now that's ok for Google's Chromebooks but not Windows?

You're attacking Microsoft for things you're giving everyone else a pass on.

> Apple's laptops are absolutely locked down using EFI, you can't install any other OS unless Bootcamp allows it.

Not exactly true, or even a little bit true; however, I'd say that these tools aren't exactly well known.


> This page describes rEFInd, my fork of the rEFIt boot manager for computers based on the Extensible Firmware Interface (EFI) and Unified EFI (UEFI). Like rEFIt, rEFInd is a boot manager, meaning that it presents a menu of options to the user when the computer first starts up, as shown below. rEFInd is not a boot loader, which is a program that loads an OS kernel and hands off control to it. (Since version 3.3.0, the Linux kernel has included a built-in boot loader, though, so this distinction is rather artificial these days, at least for Linux.) Many popular boot managers, such as the Grand Unified Bootloader (GRUB), are also boot loaders, which can blur the distinction in many users' minds. All EFI-capable OSes include boot loaders, so this limitation isn't a problem. If you're using Linux, you should be aware that several EFI boot loaders are available, so choosing between them can be a challenge. In fact, the Linux kernel can function as an EFI boot loader for itself, which gives rEFInd characteristics similar to a boot loader for Linux.

You're right, this is 100% not true. I've installed VMWare ESXi on modern Apple hardware using tools like rEFIt. It's actually easier than on an equivalent PC since you never have to deal with the Secure Boot garbage.

I think The Reg didn't get that quite right. From reading the original advisory, I would conclude that this is not a backdoor (a by-design security "override") but an actual vulnerability in the boot loader: it does not check the type of the signed data blob (policy vs. supplemental policy) and thus can be exploited to disable further signature checks. The effect is the same, of course, both to attackers and to device owners.

One could speculate if this is a backdoor created with plausible deniability, maybe paid for by some three letter agency. But the evidence doesn't really point into the direction of an intentional backdoor.
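If that reading of the advisory is right, the bug class looks something like this minimal model (all field names invented for illustration; the real policies are signed binary blobs, not dicts):

```python
# Minimal model of the reported bug class: a retail loader should only
# merge blobs explicitly marked as "supplemental" policies; the flaw is
# skipping that type check, so a full debug policy slips through.

def merge_policy_buggy(active: dict, blob: dict) -> dict:
    # BUG: blob["type"] is never inspected, so a debug policy with
    # "testsigning": True gets merged on a locked-down retail device.
    merged = dict(active)
    merged.update(blob["settings"])
    return merged

def merge_policy_fixed(active: dict, blob: dict) -> dict:
    if blob["type"] != "supplemental":
        raise ValueError("refusing to merge a non-supplemental policy")
    merged = dict(active)
    merged.update(blob["settings"])
    return merged

retail = {"testsigning": False}
leaked = {"type": "debug", "settings": {"testsigning": True}}

assert merge_policy_buggy(retail, leaked)["testsigning"] is True  # bypass
try:
    merge_policy_fixed(retail, leaked)
except ValueError:
    pass  # a fixed loader rejects the leaked debug policy outright
```

Note that the leaked blob's signature is perfectly valid in both cases; the vulnerability is in what the loader does with a validly signed blob of the wrong kind.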

> Isn't a secure boot policy that can be bypassed with physical access more secure than none?

Of course it isn't. Impossibility of bypassing is the only reason for secure boot technologies to exist. They were invented so that you are NOT automatically screwed if an attacker has physical access to your device. Secure boot technology is fine in principle; it's just a stupid position the manufacturers hold. They ignore recent history, at their own peril, I'd say. https://en.wikipedia.org/wiki/Clipper_chip

And I'm not even sure the physical-access factor is strictly necessary for adversarial bypassing.

(I'm not a security expert either.)

You need admin rights, but not physical access.

Requiring admin rights for an exploit is certainly a lower barrier to entry than requiring physical access, but I think my point still stands: if someone has those rights, you're already screwed.

> if someone has those rights, you're already screwed.

I think that this was once true, but Secure Boot was an attempt to improve the situation. My understanding is that Microsoft's Virtualization Based Security (which Device Guard/Credential Guard are built upon) relies on the assumption that the boot process is secure. If an attacker could have their rootkit load before the OS/Hyper-V, then they render those mitigations useless.

>> Isn't a secure boot policy that can be bypassed with physical access more secure than none?

> Of course it isn't.


Suppose that you are choosing a new device. You can choose one that lets a remote hacker install a rootkit via a malicious website, or you can choose one that lets a hacker install a rootkit only when they have physical access to your device.

Which one do you choose, and why is it the latter?

What about the third option, where a hacker can't install a rootkit even if they have physical access? That's what MS tried to build, but is now completely broken.

Secure Boot's intention is to ensure a maliciously modified OS cannot run (when taken at face value; there is also the potential for vendor lock-in, but no one will defend it on that basis).

Remember that iPhone the FBI wanted Apple to hack? With a golden key like this, anyone could've hacked into that iPhone.

I am not sure how e.g. BitLocker uses the TPM, but it might be that being able to load a modified OS allows for bypassing OS-implemented rate-limiting on TPM access.

The TPM controls the rate-limiting. A compromised OS shouldn't affect it. (Otherwise a live CD could compromise your disk encryption, as the encryption is usually done at the OS level, not in the BIOS/EFI).

Perhaps not rate-limiting, but on the iPhone, it was about circumventing the limit of 10 tries.

Besides, doesn't Secure Boot block an unsigned live CD? Otherwise, I don't see the point of Secure Boot beyond vendor lock-in.

It is supposed to prevent boot level malware or people otherwise compromising the device. Implemented correctly, it can be beneficial to security. As an end user, I like having a TPM, and I like having boot level protection measures. I wouldn't trust my life to them, but after seeing indictments/FBI testimony, they seem like they are enough to deter many attackers.

The whole "only applies to RT devices" was a bit shifty looking from MS, though. Fortunately the broken Windows on ARM model died.

Edit: I'd also note that, e.g. console gamers love this stuff. It allows them to pretty much rely on others not being able to cheat very well.

So does it validate the BIOS/UEFI firmware, which then in turn can be set up to validate the OS (or not to validate it)?

>If an attacker has physical access to your device, you're already screwed.

The FBI had physical access to the San Bernardino iPhone. If this were true, what was the point of the public fight with Apple and eventual purchase of a zero day exploit to get into it?

I'm speculating here, but surely by this argument Apple has a backdoor to unlock the phone if it physically has it.

Of sorts. Apple (and unless they have been compromised, only Apple) could replace the firmware of a locked phone with a new firmware that will accept millions of unlock attempts without erasing the phone, at which point the password could be found by brute force.

The FBI asked Apple to create this firmware, and Apple refused.
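Some back-of-envelope arithmetic shows why removing the retry limit is enough on its own. The ~80 ms per guess figure below is a commonly cited estimate for the iPhone's hardware key derivation, treated here as an assumption:

```python
# Rough brute-force timings once the retry limit and auto-erase are gone.
# per_guess_s is an assumption (~80 ms per hardware key derivation).
per_guess_s = 0.080

def worst_case_hours(digits: int) -> float:
    """Time to try every numeric PIN of the given length, in hours."""
    return (10 ** digits) * per_guess_s / 3600

print(f"4-digit PIN: {worst_case_hours(4) * 60:.0f} minutes worst case")
print(f"6-digit PIN: {worst_case_hours(6):.1f} hours worst case")

# A long random password stays out of reach even without rate-limiting:
years = (62 ** 10) * per_guess_s / 3600 / 24 / 365
print(f"10-char random alphanumeric: ~{years:.1e} years worst case")
```

In other words, the rate limit (and the 10-try erase) is doing essentially all of the work of protecting a short PIN, which is why stripping it out in firmware amounts to unlocking the phone.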

And presumably Apple could have designed it to not allow firmware updates without being unlocked. Or to require overwriting of "secured" secrets if an update is applied anyway. It probably wasn't worth the potential user hassles?

My loose understanding is that this is in the works (not an apple employee).


What Apple had was the option of updating the phone with a compromised (or compromisable) security implementation.

Apple refused to do that.

The FBI did find an exploit (which AFAIR they've refused to disclose, in contravention of previous policy, if not regulation and/or law), but that represents a flaw in implementation which Apple can then remedy.


> Rather than asking Apple to obtain and provide the device’s passcode, Tuesday's ruling orders Apple to provide the FBI with special software that can act as a workaround for the iPhone's built-in security. Court documents suggest Apple should either provide software that bypasses the need for a passcode entirely or software that allows the FBI to unlock the phone by running an endless combination of passcodes until they are able to unlock the device. The latter method would also require software that disables the iPhone's auto-erase feature, which wipes a device after a certain number of unsuccessful attempts at unlocking.


If Apple can take the phone and unlock it with the tools and knowledge at their disposal, then they have a backdoor for it. The distinction that you're drawing is meaningless.

The term "backdoor" is typically reserved for mechanisms that bypass the existing security. As I understood it, the only thing Apple could do was provide a firmware without rate limiting. This would have allowed the FBI to perform an exhaustive search for the key. We already know that brute-forcing a key is always possible; that's not a new way to bypass security.

No, brute-forcing a PIN is not supposed to be possible. And yes, it was always possible, but only if you have Apple's secrets. A short PIN is only secure because of rate-limiting. Loading a firmware without rate-limiting is effectively hacking the phone. Apple possessed a backdoor to phones that have a short PIN.

The FBI was simply asking Apple to walk through a door that Apple created (perhaps unwittingly) for themselves. The case had nothing to do with forcing Apple to backdoor their products, which would mean mandated vulnerabilities that Apple is forced to ship phones with.

Apple can, and apparently is, designing future phones to not be vulnerable to this flaw. TPM chips, widely used in PCs, are not vulnerable to such an attack because the rate-limiting is done in the TPM hardware itself.
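A toy model of that hardware-enforced lockout, with invented parameters: because the failure counter lives in the chip rather than the OS, booting a live CD or a modified OS doesn't reset it:

```python
# Sketch of TPM-style dictionary-attack lockout (parameters invented).
# The counter is inside the chip, so swapping the OS doesn't bypass it.

class ToyTPM:
    MAX_FAILS = 5
    LOCKOUT_S = 600.0  # lockout window once tripped

    def __init__(self, pin: str):
        self._pin = pin
        self._fails = 0
        self._locked_until = 0.0

    def try_unseal(self, guess: str, now: float) -> bool:
        if now < self._locked_until:
            raise RuntimeError("TPM in lockout; wait or use a recovery key")
        if guess == self._pin:
            self._fails = 0
            return True
        self._fails += 1
        if self._fails >= self.MAX_FAILS:
            self._locked_until = now + self.LOCKOUT_S
        return False

tpm = ToyTPM("4271")
for i in range(5):
    tpm.try_unseal("0000", now=float(i))  # five wrong guesses trip the lockout
try:
    tpm.try_unseal("0001", now=5.0)
except RuntimeError:
    pass  # brute force stalls at the hardware, regardless of the OS booted
```

This is the design difference being pointed out: the iPhone 5C enforced its limit in software that Apple could replace, whereas a TPM-style design makes the rate limit survive any firmware or OS swap.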

The statement assumes the attacker knows what they're doing when they have access to the phone.

To be fair, the FBI did get into the phone, and could have much sooner had they not wasted their time fighting Apple.

They got into an iPhone 5C. If that phone had been a 5S (and the FBI had screwed up the MDM process just like they did), they would be out of luck.

Don't the keys normally prevent "people with physical access to the device" from easily updating the firmware?

This includes everyone from J. Random Hacker, who wants to install Linux on their own hardware (which might or might not be a good build), to an attacker who has borrowed your device from a hotel room to put some rootkit on it.

I don't know the full set of security measures on Windows RT devices, but if it doesn't have an effective hardware-level flash write protection in place, you should be able to change the policy with admin/kernel code execution rights, allowing for remote installation of a rootkit.

Besides, it is a worthy goal to protect a device from physical manipulation as well, i.e. to prevent border agents or secret agencies of a nondemocratic country from planting malware on your device.

The recent Apple vs. FBI debate has brought up a large number of arguments pro and contra protection from physical attacks.

> If an attacker has physical access to your device, you're already screwed.

I'm no security expert, but is this true?

Say I use full-drive encryption with whatever the popular Linux distros are offering at install time, and say I use a strong password of 30 random characters.

Is it feasible to break this encryption in a reasonable time frame?

There's a lot that can go wrong if an attacker has physical access to your device.

As far as is publicly known, you're not directly vulnerable to having the password broken as long as the device is powered off when it falls into the wrong hands, but there are a lot of caveats. If it's powered on, they can possibly extract the password from RAM using a "cold boot attack". They can also freeze the RAM to get its contents to last longer, then move it to another computer and dump it, recovering your full-disk encryption key.

If the attacker has temporary physical access, they have a number of ways that they can tamper with the device. For example, they can replace the BIOS, the bootloader, the firmware on your keyboard, etc. The new code can record your passwords or send your data to the attacker over the internet.

One other way that you're physically vulnerable is that the attacker can possibly infect your hardware before it even arrives at your home, by replacing hardware while it's shipping. This is probably only available to nation-states.

MS' own advisory says you don't need physical access (only admin rights).

> If an attacker has physical access to your device, you're already screwed.

Tell that to the FBI. They seem to have been quite upset about not being able to unlock some iPhones.

Doesn't anyone wonder why you never seem to hear stories about the FBI being unable to unlock encrypted user devices? Surely the number of people who encrypt their Windows devices is in the tens of millions in the U.S. alone, and some of them must have been caught by law enforcement for various crimes.

OK, but then why bother with the whole key thing? That's a whole lot of code, API crap, and other junk. Why not just force the user to hold a key to reboot into a mode for reflashing/upgrading the boot loader? Then you'd have to hold the device in hand to do it.

It seems to me the mechanism was created exactly to not allow even an owner to boot a non-approved OS image.

That was exactly the intent. Carriers often require that devices be locked, because they subsidize them and want to ensure they're only ever usable with their services.

I think the basic idea is that with physical access in general, the attacker may have an N% chance of accessing it. The adage about physical access isn't to guarantee they will get access, just that you must consider it as such and react appropriately. The adage isn't a declaration of the state of the machine, it's a declaration on how to respond to its loss.

With a backdoor via any means, the attackers now basically have (N+1)% chance of getting that access, and arguably it's much more than a single increment since whereas other exploits "might" exist, these ones will guaranteed exist.

The suggestion and complaint isn't that "no secure boot is better than one that's easily bypassed", the complaint is requiring such a bypass and not giving the actual owners of the machine control over setting up that secure boot. With the RT devices, the user has absolutely no say in the secure boot process, and they're entirely dependent on Microsoft or whatever organization to provide best security practices.

Also, if the user sets up their own key and does it wrong, they have only themselves to blame. If a company or government does it, the user has no one to blame, since legal contracts often prevent any reparations for damage done by stuff like this. The US government, for reasons of varying validity, needs to give you permission to sue it, meaning it can dismiss any complaints should it lose the key.

All in all, a mandated backdoor just increases the chance that there will be a violation, takes control away from the user, and provides no compensation if the forced backdoor results in damages to the user.

From the last section:

> To reiterate, these Microsoft-signed resources – the debug-mode policy and the EFI installation tool – are only meant to be used by developers debugging drivers and other low-level operating system code. In the hands of Windows RT slab owners, whose devices are completely locked down, they become surprisingly powerful.

> It's akin to giving special secret keys to the police and the Feds that grant investigators full access to people's devices and computer systems. Such backdoor keys can and most probably will fall into the wrong hands: rather than be used exclusively for fighting crime, they will be found and exploited by criminals to compromise communications and swipe sensitive personal information.

Of course, the problem with a digital key is that once it's out in the wild, it's out for everyone, forever.

Same issue with a physical one. The solution is the same, too: get a new key (and lock).

For Windows Mobile RDU, it isn't even flashing firmware :) It's a package to be flashed via iutool. For RT, the official unlocking way is at http://woaunlock on the MS corporate network.

Like when the government intercepted routers in transit to install their own snooping devices. Now they can do that to all computers and it would be very hard to tell. And now it can be anyone not just the government.

>Isn't a secure boot policy that can be bypassed with physical access more secure than none?

Not if the bypass is a zero-day.

I think the worst part is that the trusted boot environment can no longer be trusted.

I genuinely hope this will influence the whole government mandated back door debate for the better but I'm afraid that this will just be forgotten in a matter of minutes.

Like Gove said "we've had enough of experts", especially when their educated opinions don't suit us.

Except the "no encryption at all" crowd gets a louder voice.

Did anything come of this? https://en.wikipedia.org/wiki/NSAKEY

If a terrorist attack occurred and it was clear that it could have been prevented if the authorities could have read encrypted information, would that change your opinion of backdoors? If not, why are you criticizing the other side for being just as steadfast in their beliefs as you are in yours?

The truth is that no policy is going to be 100% effective, so I'm not sure why either side of the debate should over-adjust based on a single failure.

I would not criticize the other side of the debate for being steadfast. I will however criticize the belief itself. I will base my criticism on actual events such as this one instead of hypotheticals.

I do not think considering this case in the encryption/backdoor debate is an over-adjustment based on a single failure. I think this a relevant example of the risks of creating and using a golden key. If you discount every individual example what are you left with? As daenney stated, the hope is to influence debate, not base the decision entirely on one event.

Do you believe this situation has no relevance to the encryption and backdoor debate? Are you arguing that because no policy will be 100% effective we just shouldn't bother with a discussion?

>I will base my criticism on actual events such as this one instead of hypotheticals.

The problem is that due to its nature, you only hear about one side of these events. We never hear about the attacks that were stopped or could have been stopped by backdoors. Many people take that as proof that these events don't happen but as the old saying goes absence of proof is not proof of absence. Without that proof, all we can do is provide hypotheticals.

I am not arguing that this story isn't relevant. I am arguing that people who feel this story should change their opponents' minds are guilty of hypocrisy. In my personal view, both sides in this debate have pros and cons. However, most in the tech community refuse to acknowledge that, which results in zero progress.

> absence of proof is not proof of absence

I don't think this applies here. I suppose your point is that the government's exploits might be effective, but they keep it quiet so as not to expose the fact of their existence.

Well, this might very well justify anything from 1984 or any other dystopia: "let us do whatever, it is effective and needed, but we won't give any facts or details, because it might compromise our system".

You are proving my point. Not everything is black and white. Not every slope is slippery. There is room for discussion and compromise. You comparing the people on the other side of the debate to fans of 1984 style totalitarian government gets nothing accomplished just like people on the other side saying you are enabling terrorists gets nothing accomplished.

The only difference is that the other side of the debate is already in power. So if the tech community doesn't even want to discuss this issue, guess which side of the issue will win the debate and decide future encryption law.

Writing the law is easy. Enforcing it, on the other hand ...

Are you of the opinion that all beliefs are above ridicule, or just this one? Because in my opinion the idea of a government controlled key escrow is the most stupid thing I've heard suggested and its proponents are either willfully ignorant or just insane. Recent history is full of major failures on the part of the USG to properly safeguard data of the most sensitive nature, as well as cases where they had collected clear text intel and just failed to connect the dots. So there is nothing to gain and everything to lose; they have enough data, they should work on their analytics before they endeavor to intentionally weaken everybody's security.

I'm actually a bit confused about how this is a "golden key" problem (if I understand what that means).

As far as I can tell, the problem here is that there's a signed policy that was intended for newer versions of Windows, but is also interpreted by older versions of Windows as a valid policy with a different meaning. On Win10 1607 it means "under such-and-such conditions, merge these additional rules into the already applied policy" and on older Windows it just means "apply this policy".

But the only key here in both cases is Microsoft's regular signing key. Which I guess could be considered a kind of golden key/backdoor/whatever in itself - just as in the recent Apple vs. FBI standoff you could say the fact that Apple had the technical ability to sign and install a hacked OS was a backdoor to begin with - but that doesn't seem to be what people mean.

> The Register understands that this debug-mode policy was accidentally shipped on retail devices, and discovered by curious minds including Slip and MY123.

> The policy was effectively inert and deactivated on these products but present nonetheless.

Whenever I read things like this, I always envision that it's not a cock-up at all, but instead a deliberate effort by righteous free software-minded people who happen to work at Microsoft and are dismayed by the things they're asked to do.

But that is probably because I wish it so.

a deliberate effort by righteous free software-minded people who happen to work at Microsoft

Or maybe a deliberate effort by developers who are paid by a three letter agency to sneak in a backdoor that looks like an accidental bug.

In this case you might be right, but the last time a similar issue was widely circulated (Heartbleed in OpenSSL), it also looked like an accident (or rather gross negligence), yet its effect was far more beneficial to the agencies than to any effort to advance free software.

> Or maybe a deliberate effort by developers who are paid by a three letter agency to sneak in a backdoor that looks like an accidental bug.

Why would a three-letter agency bother to do that, when they could just as well get their malware EFI module signed by MS, and thus pass the secure-boot requirement?

That way they won't risk exposing the existence of a backdoor on every single Windows copy deployed worldwide.

I honestly don't see the value in it for them.

when they could just as well get their malware EFI module signed by MS

This is exactly what the FBI tried with Apple, causing an enormous public mud fight.

Besides, the outlined method would rather be deployed by NSA, or maybe a foreign service, without legal means to get a signed malware module.

Even if such legal means existed, it would be in Microsoft's best interest to fight them in court: once leaked, the malware would be clearly attributable to MS.

Is this code also used for the Xbox? It would be really cool if we could run linux/bsd easily on one of those.

What does leaking your private key have to do with backdoor keys? Isn't this like saying that CAs are backdoored because somewhere there exists a private key for those certs?

No private keys were leaked; however, a signed policy file that lets you disable the protections within secure boot was discovered and repurposed.

It's not so much a backdoor key as an overly permissive mechanism within Microsoft's secure boot implementation that could be used to implement a backdoor within the system.

A similar analogy in the CA world would be when the Microsoft Terminal Server Licensing CA (which accepted user-submitted signing requests) was signing certificates that worked in other contexts (e.g. HTTPS). This didn't break the CA system globally, just one overly permissive implementation.

Yeah, I see now, thanks to the other source. The Register's misuse of "key" followed by excessive drivel in that article had me navigating away with the wrong impression before I could make the connection.

Yes. The Register usually adds a lot of extra drama to their articles. The article gets a bit saner towards the end.

What leaked key? That's... not what happened in this case.

The researchers' writeup, in a very fun form, can be found at https://rol.im/securegoldenkeyboot/

With the text as follows, for those to whom the joviality of the original presentation is undesirable:

irc.rol.im #rtchurch :: https://rol.im/chat/rtchurch

Specific Secure Boot policies, when provisioned, allow for testsigning to be enabled, on any BCD object, including {bootmgr}. This also removes the NT loader options blacklist (AFAIK). (MS16-094 / CVE-2016-3287, and MS16-100 / CVE-2016-3320)

Found by my123 (@never_released) and slipstream (@TheWack0lian)
Writeup by slipstream (@TheWack0lian)

First up, "Secure Boot policies". What are they exactly?

As you know, secure boot is a part of the UEFI firmware; when enabled, it only lets stuff run that's signed by a cert in db, and whose hash is not in dbx (revoked).

As you probably also know, there are devices where secure boot can NOT be disabled by the user (Windows RT, HoloLens, Windows Phone, maybe Surface Hub, and maybe some IoTCore devices if such things actually exist -- not talking about the boards themselves which are not locked down at all by default, but end devices sold that may have secureboot locked on).

But in some cases, the "shape" of secure boot needs to change a bit. For example in development, engineering, refurbishment, running flightsigned stuff (as of win10) etc. How to do that, with devices where secure boot is locked on?

Enter the Secure Boot policy.

It's a file in a binary format that's embedded within an ASN.1 blob, that is signed. It's loaded by bootmgr REALLY early into the windows boot process. It must be signed by a certificate in db. It gets loaded from a UEFI variable in the secureboot namespace (therefore, it can only be touched by boot services). There's a couple .efis signed by MS that can provision such a policy, that is, set the UEFI variable with its contents being the policy.

What can policies do, you ask?

They have two different types of rules. BCD rules, which override settings in the on-disk BCD, and registry rules, which contain configuration for the policy itself, plus configuration for other parts of boot services, etc. For example, one registry element was introduced in Windows 10 version 1607 'Redstone' which disables certificate expiry checking inside mobilestartup's .ffu flashing (ie, the "lightning bolt" windows phone flasher); and another one enables mobilestartup's USB mass storage mode. Other interesting registry rules change the shape of Code Integrity, ie, for a certain type of binary, it changes the certificates considered valid for that specific binary.

(Alex Ionescu wrote a blog post that touches on Secure Boot policies. He teased a followup post that would be all about them, but that never came.)

But, they must be signed by a cert in db. That is to say, Microsoft.

Also, there is such a thing called DeviceID. It's the first 64 bits of a salted SHA-256 hash, of some UEFI PRNG output. It's used when applying policies on Windows Phone, and on Windows RT (mobilestartup sets it on Phone, and SecureBootDebug.efi when that's launched for the first time on RT). On Phone, the policy must be located in a specific place on EFIESP partition with the filename including the hex-form of the DeviceID. (With Redstone, this got changed to UnlockID, which is set by bootmgr, and is just the raw UEFI PRNG output.)
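The DeviceID derivation described above boils down to something like this (a rough sketch only: the real salt value and encoding are not public, so `SALT` here is purely a placeholder):

```python
import hashlib
import os

# Placeholder: the actual salt used by the firmware is not public.
SALT = b"example-salt"

def derive_device_id(prng_output: bytes) -> bytes:
    """First 64 bits (8 bytes) of a salted SHA-256 over UEFI PRNG output."""
    return hashlib.sha256(SALT + prng_output).digest()[:8]

# Stand-in for the firmware's PRNG output.
device_id = derive_device_id(os.urandom(32))
print(device_id.hex())  # the hex form appears in the policy filename on Phone
```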

Basically, bootmgr checks the policy when it loads; if it includes a DeviceID which doesn't match the DeviceID of the device that bootmgr is running on, the policy will fail to load.

Any policy that allows for enabling testsigning (MS calls these Retail Device Unlock / RDU policies, and installing one is what "unlocking" a device means) is supposed to be locked to a DeviceID (UnlockID on Redstone and above). Indeed, I have several policies (signed by the Windows Phone production certificate) like this, where the only differences are the included DeviceID, and the signature.

If there is no valid policy installed, bootmgr falls back to using a default policy located in its resources. This policy is the one which blocks enabling testsigning, etc, using BCD rules.
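The selection logic in the last few paragraphs amounts to something like the following (an illustrative sketch; the names `Policy`, `verify_signature`, and `load_policy` are invented, not Microsoft's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    signed_by_ms: bool
    device_id: Optional[bytes]  # None = not locked to a device

def verify_signature(policy: Policy) -> bool:
    # Stand-in for the real check against a certificate in db.
    return policy.signed_by_ms

def load_policy(installed: Optional[Policy], device_id: bytes,
                default: Policy) -> Policy:
    if installed is not None and verify_signature(installed):
        # A DeviceID-locked policy only loads on the matching device.
        if installed.device_id in (None, device_id):
            return installed
    # Otherwise fall back to the default policy in bootmgr's resources,
    # whose BCD rules block enabling testsigning.
    return default
```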

Now, for Microsoft's screwups.

During the development of Windows 10 v1607 'Redstone', MS added a new type of secure boot policy. Namely, "supplemental" policies that are located in the EFIESP partition (rather than in a UEFI variable), and have their settings merged in, dependent on conditions (namely, that a certain "activation" policy is also in existence, and has been loaded in).

Redstone's bootmgr.efi loads "legacy" policies (namely, a policy from UEFI variables) first. At a certain time in redstone dev, it did not do any further checks beyond signature / deviceID checks. (This has now changed, but see how the change is stupid) After loading the "legacy" policy, or a base policy from EFIESP partition, it then loads, checks and merges in the supplemental policies.

See the issue here? If not, let me spell it out to you plain and clear. The "supplemental" policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn't know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy.

The "supplemental" policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don't contain any BCD rules either, which means that if they are loaded, you can enable testsigning. Not just for windows (to load unsigned driver, ie rootkit), but for the {bootmgr} element as well, which allows bootmgr to run what is effectively an unsigned .efi (ie bootkit)!!! (In practice, the .efi file must be signed, but it can be self-signed.) You can see how this is very bad!! A backdoor, which MS put in to secure boot because they decided to not let the user turn it off in certain devices, allows for secure boot to be disabled everywhere!

You can see the irony. Also the irony in that MS themselves provided us several nice "golden keys" (as the FBI would say ;) for us to use for that purpose :)

About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a "secure golden key" is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don't understand still? Microsoft implemented a "secure golden key" system. And the golden keys got released from MS own stupidity. Now, what happens if you tell everyone to make a "secure golden key" system? Hopefully you can add 2+2...

Anyway, enough about that little rant, wanted to add that to a writeup ever since this stuff was found ;)

Anyway, MS's first patch attempt. I say "attempt" because it surely doesn't do anything useful. It blacklists (in boot.stl), most (not all!) of the policies. Now, about boot.stl. It's a file that gets cloned to a UEFI variable only boot services can touch, and only when the boot.stl signing time is later than the time this UEFI variable was set. However, this is done AFTER a secure boot policy gets loaded. Redstone's bootmgr has extra code to use the boot.stl in the UEFI variable to check policy revocation, but the bootmgrs of TH2 and earlier do NOT have such code. So, an attacker can just replace a later bootmgr with an earlier one.
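The timing condition and the downgrade hole described here reduce to two trivial predicates (a hedged sketch; the function names are invented for illustration):

```python
def clone_boot_stl(stl_signing_time: int, var_set_time: int) -> bool:
    """boot.stl is cloned into the protected UEFI variable only when it
    was signed later than the variable's current contents."""
    return stl_signing_time > var_set_time

def policy_blocked(bootmgr_checks_stl: bool, policy_in_blacklist: bool) -> bool:
    # Only Redstone's bootmgr consults the cloned boot.stl when loading
    # a policy; TH2-and-earlier bootmgrs never check, so swapping in an
    # older bootmgr sidesteps the revocation entirely.
    return bootmgr_checks_stl and policy_in_blacklist
```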

Another thing: I saw some additional code in the load-legacy-policy function in redstone 14381.rs1_release. Code that wasn't there in 14361. Code that specifically checked the policy being loaded for an element that meant this was a supplemental policy, and erroring out if so. So, if a system is running Windows 10 version 1607 or above, an attacker MUST replace bootmgr with an earlier one.

On August 9th, 2016, another patch came about, this one was given the designation MS16-100 and CVE-2016-3320. This one updates dbx. The advisory says it revokes bootmgrs. The dbx update seems to add these SHA256 hashes (unless I screwed up my parsing): <snip>

I checked the hash in the signature of several bootmgrs of several architectures against this list, and found no matches. So either this revokes many "obscure" bootmgrs and bootmgfws, or I'm checking the wrong hash.
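The check being described is just a set-membership test over SHA-256 digests (sketch with placeholder dbx entries; note the writeup's own caveat that the relevant hash is the one in the signature, i.e. over the signed content, not necessarily a plain hash of the file bytes):

```python
import hashlib

def is_revoked(image_hash_hex: str, dbx: set) -> bool:
    """Membership test against the revocation list (dbx)."""
    return image_hash_hex.lower() in dbx

# Placeholder entries; MS16-100 shipped a real list of SHA256 hashes.
dbx = {"aa" * 32, "bb" * 32}

sample = hashlib.sha256(b"some bootmgr image").hexdigest()
print(is_revoked(sample, dbx))
```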

Either way, it'd be impossible in practice for MS to revoke every bootmgr earlier than a certain point, as they'd break install media, recovery partitions, backups, etc.

- RoL

disclosure timeline:
~march-april 2016 - found initial policy, contacted MSRC
~april 2016 - MSRC reply: wontfix; started analysis and reversing, working on almost-silent (3 reboots needed) PoC for possible emfcamp demonstration
~june-july 2016 - MSRC reply again, finally realising: bug bounty awarded
july 2016 - initial fix; fix analysed, deemed inadequate; reversed later rs1 bootmgr, noticed additional inadequate mitigation
august 2016 - mini-talk about the issue at emfcamp, second fix, full writeup release

credits:
my123 (@never_released) -- found initial policy set, tested on surface rt
slipstream (@TheWack0lian) -- analysis of policies, reversing bootmgr/mobilestartup/etc, found even more policies, this writeup.

tiny-tro credits:
code and design: slipstream/RoL
awesome chiptune: bzl/cRO <3

We should just swap the top-link to this post, thanks for the detailed write-up!

The original is also on the frontpage: https://news.ycombinator.com/item?id=12259911

Ah, thanks, I missed it at first somehow.

So from this, I think you could say this is a universal Microsoft secure boot implementation bypass: all you need is the signed policy file and an older, more obscure (signed) non-blacklisted bootmgr, and you can exploit secure boot to glory.

It's almost certainly going to be used for malware - a return of bootkits for invisibility/persistence?

Microsoft will have to keep revoking older bootmgrs as they find them in jailbreak utils and bootkit malware. Eventually they will run out, but for now, busted.

Tempting to go buy some Windows RT devices for Linux!

I think people should be able to sue companies that do this. They surely did not advertise it as "secure unless we lose the key". Having a backdoor in the first place could be counted as negligent (should be counted as outright fraud).

Is it really a backdoor? Basically it's a policy with very permissive settings, but even without this policy, the manufacturer still needs to use encryption to sign software, and they'd still need to have a master key which they need to keep very safe.

For example, in the FBI vs. Apple case, the FBI wanted Apple to write a custom version of iOS and sign it so that the phone would verify that it's legitimately from Apple, and install it. Obviously it's different than the FBI also having keys, but only because of protections of the law. I guess in Russia or China the 3 letter agencies can be a lot more persuasive about asking the manufacturers for their master keys, although the USA also has NSL's...

> I think people should be able to sue companies that do this. They surely did not advertise it as "secure unless we lose the key".

I think you misunderstand what this is. This is not a remote backdoor, which can be used by MS/NSA to hack your machine.

Windows RT devices comes with a bootloader locked to only allow UEFI secure-boot signed boot media. Effectively that means Windows RT only. No Linux, FreeBSD or other free OSen for you.

This is a hack, which requires you to have full admin-access on the device, which uses Microsoft's own UEFI mechanisms (policies and what not), to allow booting other non-signed media as well.

Effectively this is a bootloader unlocking. With this hack in place, you can now boot Linux. Or run malware. Anything goes. It's the same as being able to disable secure-boot.

And would you honestly sue HTC or whoever if you found out that the bootloader on your phone could be unlocked, to allow the installation of third party firmware?

Surely you would only see that as a positive thing? Or am I missing something?

I got what this is, my point is, that it is not what they advertised. Maybe I should have made this more clear.

I myself would be thankful to be able to install Linux if I had a device like that. Nevertheless some (enterprise) users would maybe like it otherwise and were screwed by Microsoft claiming something is secured, when they in fact knew, that it was not.

A scenario I can think of is when the machine is not owned by the one using it and the owner wants to sandbox the user as much as possible. In this case, in the view of the owner, it really is a backdoor.

I would love to see companies getting punished for things like that. This case does not seem severe; still, what I want is for companies to be accountable for what they say. You cannot have a backdoor - remote or not - while claiming the thing is secure. If you knew it was not, you have committed fraud.

Seeing as this bypass requires admin rights to execute in the first place, a sandboxed user should by all accounts remain sandboxed anyway.

But fair enough point about different use cases.

Maybe that is the plan all along:

Create shitty, easy-to-find backdoors to show how stupid the whole concept is? And when asked, just say: the MPAA/RIAA/NSA forced us - go complain to them.
