
Microsoft proves backdoor keys are a bad idea - ChuckMcM
http://www.theregister.co.uk/2016/08/10/microsoft_secure_boot_ms16_100/
======
bradford
(disclaimer, MS employee, non-security expert here).

I've read through the article, here, and in other places, and I'm seeing
sentiment that this is a big fuck up on Microsoft's part. I might be
completely misunderstanding, but I just don't see it.

In order to use the backdoor, you've got to flash firmware, so, you've got to
have physical access to the device. If an attacker has physical access to your
device, you're already screwed.

So, I don't doubt that the key exists (I had to use it myself when testing RT
devices back in the win8 days), but what's the exploit here? Why is it, as the
title suggests, a 'bad idea'? Isn't a secure boot policy that can be bypassed
with physical access more secure than none?

~~~
Knuff
The point The Register makes is not that this allows unlocking devices (though
that's interesting in its own right), but that it is done via a "secret key"
that has now been exposed. Very similar to what the government wants with key
escrows and other backdoor mechanisms for decryption of communication.

Maybe to clarify: it highlights that the mechanism (a golden key) is flawed.
That Microsoft uses it for boot loaders is incidental.

~~~
bradford
Ok, I get it. The message is "don't use backdoors, because they'll inevitably
get leaked", which I agree with.

Unfortunately, I don't think that's the message that's being interpreted by
the vast majority of readers. I'm delving into opinion territory now, but when
the word 'backdoor' is used, aren't most people going to assume that it's an
FBI backdoor, instead of a test/development backdoor? This seems like the kind
of article that fans the flames for conspiracy theorists, and no one seems to
be doing anything to correct the record.

~~~
pipio21
Is there any difference between a test/development backdoor and an FBI
backdoor?

If you let backdoors into the system, of course the secret services will
demand access to them.

In fact, backdoors that were put in place under pressure from the secret
services will be passed off as developer backdoors once they are found by the
mainstream.

First they install backdoors in systems, in order for MS or the US government
to have complete access to any computer in the world; then they worry when the
Chinese and Russians find them.

~~~
bradford
I think you're missing my point (and my poor wording probably didn't help).

A backdoor that requires physical access isn't a backdoor. If an attacker has
such access, you're already screwed.

A backdoor that requires administrative privileges isn't a backdoor. If an
attacker has such access, you're already screwed.

The so-called dev/test 'backdoor' really isn't a backdoor. It's an 'unlock'
tool that's required by anyone who's going to engineer the device. My main
beef is that this article appears to be re-branding the engineering unlock as
a backdoor, and confusion is obviously ensuing.

Again, in my original post I asked "What's the exploit?", and I understand
that the existence of an exploit might not be the article's subject, but if
you really think that there's a security problem here, I'll ask it again:
"What's the exploit?"

~~~
Nullabillity
> A backdoor that requires physical access isn't a backdoor. If an attacker
> has such access, you're already screwed.

> A backdoor that requires administrative privileges isn't a backdoor. If an
> attacker has such access, you're already screwed.

Then why bother trying to lock it down in the first place?

~~~
cptskippy
Service providers that wish to provide subsidized devices as part of service
contracts usually require that said devices can't be repurposed for the
duration of the service contracts. Thus a means is needed to lock a device.

------
daenney
I genuinely hope this will influence the whole government mandated back door
debate for the better but I'm afraid that this will just be forgotten in a
matter of minutes.

Like Gove said "we've had enough of experts", especially when their educated
opinions don't suit us.

~~~
slg
If a terrorist attack occurred and it was clear that it could have been
prevented if the authorities could have read encrypted information, would that
change your opinion of backdoors? If not, why are you criticizing the other
side for being just as steadfast in their beliefs as you are in yours?

The truth is that no policy is going to be 100% effective, so I'm not sure why
either side of the debate should overadjust based on a single failure.

~~~
mulmen
I would not criticize the other side of the debate for being steadfast. I will
however criticize the belief itself. I will base my criticism on actual events
such as this one instead of hypotheticals.

I do not think considering this case in the encryption/backdoor debate is an
over-adjustment based on a single failure. I think this is a relevant example
of the risks of creating and using a golden key. If you discount every
individual example, what are you left with? As daenney stated, the hope is to
_influence_ the debate, not base the decision entirely on one event.

Do you believe this situation has no relevance to the encryption and backdoor
debate? Are you arguing that because no policy will be 100% effective we just
shouldn't bother with a discussion?

~~~
slg
>I will base my criticism on actual events such as this one instead of
hypotheticals.

The problem is that due to its nature, you only hear about one side of these
events. We never hear about the attacks that were stopped or could have been
stopped by backdoors. Many people take that as proof that these events don't
happen but as the old saying goes absence of proof is not proof of absence.
Without that proof, all we can do is provide hypotheticals.

I am not arguing that this story isn't relevant. I am arguing that people who
feel this story should change their opponents minds are guilty of hypocrisy.
In my personal view, both sides in this debate have pros and cons. However,
most in the tech community refuse to acknowledge that, which results in zero
progress.

~~~
elmigranto
> absence of proof is not proof of absence

I don't think this applies here. I suppose your point is that the government's
exploits might be effective, but they keep quiet about them so as not to
expose the fact of their existence.

Well, this might very well justify anything from 1984 or any other dystopia —
"let us do whatever, it is effective and needed, but we won't give any facts
or details, because it might compromise our system".

~~~
slg
You are proving my point. Not everything is black and white. Not every slope
is slippery. There is room for discussion and compromise. You comparing the
people on the other side of the debate to fans of 1984 style totalitarian
government gets nothing accomplished just like people on the other side saying
you are enabling terrorists gets nothing accomplished.

The only difference is that the other side of the debate is already in power.
So if the tech community doesn't even want to discuss this issue, guess which
side of the issue will _win_ the debate and decide future encryption law.

~~~
int_19h
Writing the law is easy. Enforcing it, on the other hand ...

------
contextfree
I'm actually a bit confused about how this is a "golden key" problem (if I
understand what that means).

As far as I can tell, the problem here is that there's a signed policy that
was intended for newer versions of Windows, but is also interpreted by older
versions of Windows as a valid policy _with a different meaning_. On Win10
1607 it means "under such-and-such conditions, merge these additional rules
into the already applied policy" and on older Windows it just means "apply
this policy".

But the only key here in both cases is Microsoft's regular signing key. Which
I guess could be considered a kind of golden key/backdoor/whatever in itself -
just as in the recent Apple vs. FBI standoff you could say the fact that Apple
had the technical ability to sign and install a hacked OS was a backdoor to
begin with - but that doesn't seem to be what people mean.

------
rocky1138
> The Register understands that this debug-mode policy was accidentally
> shipped on retail devices, and discovered by curious minds including Slip
> and MY123.

> The policy was effectively inert and deactivated on these products but
> present nonetheless.

Whenever I read things like this, I always envision that it's not a cock-up at
all, but instead a deliberate effort by righteous free software-minded people
who happen to work at Microsoft and are dismayed by the things they're asked
to do.

But that is probably because I wish it so.

~~~
ge0rg
_a deliberate effort by righteous free software-minded people who happen to
work at Microsoft_

Or maybe a deliberate effort by developers who are paid by a three letter
agency to sneak in a backdoor that looks like an accidental bug.

In this case you might be right, but the last time a similar issue was widely
circulated (Heartbleed in OpenSSL), it also looked like an accident (or rather
gross negligence), yet its effect was far more beneficial to the agencies than
to FOSS adoption.

~~~
josteink
> Or maybe a deliberate effort by developers who are paid by a three letter
> agency to sneak in a backdoor that looks like an accidental bug.

Why would a three-letter agency bother to do that, when they could just as
well get their malware EFI module signed by MS, and thus pass the secure-boot
requirement?

That way they won't risk exposing the existence of a backdoor on every single
copy of Windows deployed worldwide.

I honestly don't see the value in it for them.

~~~
ge0rg
_when they could just as well get their malware EFI module signed by MS_

This is exactly what the FBI tried with Apple, causing an enormous public mud
fight.

Besides, the outlined method would rather be deployed by the NSA, or maybe a
foreign service without legal means to get a signed malware module.

Even if such legal means existed, it would be in Microsoft's best interest to
fight them in court: once leaked, the malware would be clearly attributable
to MS.

------
ruste
Is this code also used for the Xbox? It would be really cool if we could run
Linux/BSD easily on one of those.

------
lawnchair_larry
What does leaking your private key have to do with backdoor keys? Isn't this
like saying that CAs are backdoored because somewhere there exists a private
key for those certs?

~~~
45h34jh53k4j
No private keys were leaked; however, a signed policy file that lets you
disable the protections within Secure Boot was discovered and repurposed.

It's not so much a backdoor key as an overly permissive mechanism within
Microsoft's Secure Boot implementation that could be used to implement a
backdoor within the system.

A similar analogy in the CA world would be when the Microsoft Terminal Server
Licensing CA (which accepted user-submitted signing requests) was signing
certificates that worked in other contexts (ie: https). This didn't break the
CA system globally, just one overly permissive implementation.
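That CA analogy can be sketched in a few lines (a hypothetical model with invented field names, not a real X.509 verifier): the flaw is a validator that skips the Extended Key Usage check, so a cert issued only for license signing passes for TLS server auth.

```python
# Toy model of the Terminal Server Licensing CA analogy. All field
# names are illustrative; real verifiers work on parsed X.509 certs.
def cert_valid_for(cert: dict, usage: str, check_eku: bool = True) -> bool:
    """Return True if `cert` is accepted for the given usage."""
    if not cert.get("chains_to_trusted_root"):
        return False
    # The overly permissive implementation is modeled by check_eku=False:
    # it never asks what the cert was actually issued for.
    if check_eku and usage not in cert.get("eku", ()):
        return False
    return True
```

A licensing-only cert is rejected by a correct verifier but accepted by the broken one, which is the "overly permissive implementation" in question.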

~~~
lawnchair_larry
Yeah, I see now, thanks to the other source. The Register's misuse of "key"
followed by excessive drivel in that article had me navigating away with the
wrong impression before I could make the connection.

------
tryp
The researchers' writeup, in a very fun form, can be found at
[https://rol.im/securegoldenkeyboot/](https://rol.im/securegoldenkeyboot/)

The text follows, for those who find the joviality of the original
presentation undesirable:

irc.rol.im #rtchurch ::
[https://rol.im/chat/rtchurch](https://rol.im/chat/rtchurch)

Specific Secure Boot policies, when provisioned, allow for testsigning to be
enabled, on any BCD object, including {bootmgr}. This also removes the NT
loader options blacklist (AFAIK). (MS16-094 / CVE-2016-3287, and MS16-100 /
CVE-2016-3320)

Found by my123 (@never_released) and slipstream (@TheWack0lian) Writeup by
slipstream (@TheWack0lian)

First up, "Secure Boot policies". What are they exactly?

As you know, Secure Boot is part of the UEFI firmware; when enabled, it only
lets stuff run that's signed by a cert in db, and whose hash is not in dbx
(revoked).
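As a rough sketch (hypothetical pseudocode, not actual firmware logic), that check amounts to:

```python
# Hypothetical model of the Secure Boot allow/deny decision described
# above: run only images whose signing cert is in db and whose hash is
# not in dbx. Real firmware verifies full signatures and cert chains.
def secure_boot_allows(signer: str, image_hash: str, db: set, dbx: set) -> bool:
    """Return True if the image may execute under Secure Boot."""
    return signer in db and image_hash not in dbx
```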

As you probably also know, there are devices where secure boot can NOT be
disabled by the user (Windows RT, HoloLens, Windows Phone, maybe Surface Hub,
and maybe some IoTCore devices if such things actually exist -- not talking
about the boards themselves which are not locked down at all by default, but
end devices sold that may have secureboot locked on).

But in some cases, the "shape" of secure boot needs to change a bit. For
example in development, engineering, refurbishment, running flightsigned stuff
(as of win10) etc. How to do that, with devices where secure boot is locked
on?

Enter the Secure Boot policy.

It's a file in a binary format, embedded within a signed ASN.1 blob. It's
loaded by bootmgr REALLY early in the Windows boot process. It must be signed
by a certificate in db. It gets loaded from a UEFI variable in the secureboot
namespace (therefore, it can only be touched by boot services). There are a
couple of .efis signed by MS that can provision such a policy, that is, set
the UEFI variable with its contents being the policy.

What can policies do, you ask?

They have two different types of rules. BCD rules, which override settings in
the on-disk BCD, and registry rules, which contain configuration for the
policy itself, plus configuration for other parts of boot services, etc. For
example, one registry element was introduced in Windows 10 version 1607
'Redstone' which disables certificate expiry checking inside mobilestartup's
.ffu flashing (ie, the "lightning bolt" windows phone flasher); and another
one enables mobilestartup's USB mass storage mode. Other interesting registry
rules change the shape of Code Integrity, ie, for a certain type of binary, it
changes the certificates considered valid for that specific binary.

(Alex Ionescu wrote a blog post that touches on Secure Boot policies. He
teased a followup post that would be all about them, but that never came.)

But, they must be signed by a cert in db. That is to say, Microsoft.

Also, there is such a thing called DeviceID. It's the first 64 bits of a
salted SHA-256 hash, of some UEFI PRNG output. It's used when applying
policies on Windows Phone, and on Windows RT (mobilestartup sets it on Phone,
and SecureBootDebug.efi when that's launched for the first time on RT). On
Phone, the policy must be located in a specific place on EFIESP partition with
the filename including the hex-form of the DeviceID. (With Redstone, this got
changed to UnlockID, which is set by bootmgr, and is just the raw UEFI PRNG
output.)

Basically, bootmgr checks the policy when it loads: if it includes a DeviceID
which doesn't match the DeviceID of the device that bootmgr is running on, the
policy will fail to load.
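The DeviceID binding can be modeled like this (an illustrative sketch; signature verification against a cert in db is assumed to have already passed):

```python
def policy_loads_on(policy: dict, device_id: str) -> bool:
    """Model of bootmgr's DeviceID check on an already-verified policy."""
    bound_id = policy.get("device_id")
    if bound_id is not None and bound_id != device_id:
        return False  # policy is locked to a different device
    return True  # matching DeviceID, or no DeviceID at all
```

The failure mode exploited later follows directly from the last branch: a signed policy that carries no DeviceID loads on every device.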

Any policy that allows for enabling testsigning (MS calls these Retail Device
Unlock / RDU policies, and installing one is "unlocking" a device) is supposed
to be locked to a DeviceID (UnlockID on Redstone and above). Indeed, I have
several policies (signed by the Windows Phone production certificate) like
this, where the only differences are the included DeviceID and the signature.

If there is no valid policy installed, bootmgr falls back to using a default
policy located in its resources. This policy is the one which blocks enabling
testsigning, etc, using BCD rules.

Now, for Microsoft's screwups.

During the development of Windows 10 v1607 'Redstone', MS added a new type of
secure boot policy. Namely, "supplemental" policies that are located in the
EFIESP partition (rather than in a UEFI variable), and have their settings
merged in, dependent on conditions (namely, that a certain "activation" policy
is also in existence, and has been loaded in).

Redstone's bootmgr.efi loads "legacy" policies (namely, a policy from UEFI
variables) first. At a certain time in redstone dev, it did not do any further
checks beyond signature / deviceID checks. (This has now changed, but see how
the change is stupid) After loading the "legacy" policy, or a base policy from
EFIESP partition, it then loads, checks and merges in the supplemental
policies.

See the issue here? If not, let me spell it out to you plain and clear. The
"supplemental" policy contains new elements, for the merging conditions. These
conditions are (well, at one time) unchecked by bootmgr when loading a legacy
policy. And bootmgr of win10 v1511 and earlier certainly doesn't know about
them. To those bootmgrs, it has just loaded in a perfectly valid, signed
policy.
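A toy model of that mismatch (invented names, not real bootmgr code) shows why a pre-Redstone loader accepts a supplemental policy as a complete, standalone one:

```python
# Element types a pre-1607 bootmgr understands; the supplemental
# merge conditions are not among them, so they are silently skipped.
KNOWN_ELEMENTS = {"device_id", "bcd_rules", "registry_rules"}

def legacy_load_policy(policy: dict, device_id: str):
    """Model of the old loader: signature + DeviceID checks, nothing more."""
    if not policy.get("signed_by_db_cert"):
        return None
    bound_id = policy.get("device_id")
    if bound_id is not None and bound_id != device_id:
        return None
    # Unknown elements are ignored rather than rejected.
    return {k: v for k, v in policy.items() if k in KNOWN_ELEMENTS}

# A "supplemental" policy: validly signed, no DeviceID, no BCD rules
# forbidding testsigning, plus merge conditions the old loader can't see.
supplemental = {
    "signed_by_db_cert": True,
    "merge_conditions": ["activation policy must be present"],
    "registry_rules": {"testsigning": "allowed"},
}
```

Loading `supplemental` on any device succeeds, and the merge conditions that were supposed to gate it simply vanish from the result.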

The "supplemental" policy does NOT contain a DeviceID. And, because they were
meant to be merged into a base policy, they don't contain any BCD rules
either, which means that if they are loaded, you can enable testsigning. Not
just for windows (to load unsigned driver, ie rootkit), but for the {bootmgr}
element as well, which allows bootmgr to run what is effectively an unsigned
.efi (ie bootkit)!!! (In practice, the .efi file must be signed, but it can be
self-signed.) You can see how this is very bad!! A backdoor, which MS put into
secure boot because they decided not to let the user turn it off on certain
devices, allows for secure boot to be disabled everywhere!

You can see the irony. Also the irony in that MS themselves provided us
several nice "golden keys" (as the FBI would say ;) for us to use for that
purpose :)

About the FBI: are you reading this? If you are, then this is a perfect real
world example about why your idea of backdooring cryptosystems with a "secure
golden key" is very bad! Smarter people than me have been telling this to you
for so long, it seems you have your fingers in your ears. You seriously don't
understand still? Microsoft implemented a "secure golden key" system. And the
golden keys got released from MS own stupidity. Now, what happens if you tell
everyone to make a "secure golden key" system? Hopefully you can add 2+2...

Anyway, enough about that little rant, wanted to add that to a writeup ever
since this stuff was found ;)

Anyway, MS's first patch attempt. I say "attempt" because it surely doesn't do
anything useful. It blacklists (in boot.stl) most (not all!) of the policies.
Now, about boot.stl. It's a file that gets cloned to a UEFI variable only boot
services can touch, and only when the boot.stl signing time is later than the
time this UEFI variable was set. However, this is done AFTER a secure boot
policy gets loaded. Redstone's bootmgr has extra code to use the boot.stl in
the UEFI variable to check policy revocation, but the bootmgrs of TH2 and
earlier do NOT have such code. So, an attacker can just replace a later
bootmgr with an earlier one.
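The downgrade can be sketched as follows (illustrative model; build number 1607 stands in for Redstone):

```python
# Why the boot.stl blacklist doesn't help: only the Redstone bootmgr
# consults it, and older validly-signed bootmgrs can still be booted.
REVOKED_POLICIES = {"leaked-rdu-policy"}

def bootmgr_accepts(policy_id: str, bootmgr_build: int) -> bool:
    """Model of the revocation check across bootmgr generations."""
    if bootmgr_build >= 1607 and policy_id in REVOKED_POLICIES:
        return False  # Redstone checks the revocation list and rejects
    return True  # TH2 and earlier have no revocation code at all
```

Swapping in a TH2-era bootmgr (build 1511 in this model) makes the revoked policy load again, which is exactly why the first fix was deemed inadequate.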

Another thing: I saw some additional code in the load-legacy-policy function
in redstone 14381.rs1_release. Code that wasn't there in 14361. Code that
specifically checked the policy being loaded for an element that meant this
was a supplemental policy, and erroring out if so. So, if a system is running
Windows 10 version 1607 or above, an attacker MUST replace bootmgr with an
earlier one.

On August 9th, 2016, another patch came about, this one was given the
designation MS16-100 and CVE-2016-3320. This one updates dbx. The advisory
says it revokes bootmgrs. The dbx update seems to add these SHA256 hashes
(unless I screwed up my parsing): <snip>

I checked the hash in the signature of several bootmgrs of several
architectures against this list, and found no matches. So either this revokes
many "obscure" bootmgrs and bootmgfws, or I'm checking the wrong hash.

Either way, it'd be impossible in practice for MS to revoke every bootmgr
earlier than a certain point, as they'd break install media, recovery
partitions, backups, etc.

\- RoL

disclosure timeline:

~march-april 2016 - found initial policy, contacted MSRC
~april 2016 - MSRC reply: wontfix. started analysis and reversing, working on
almost-silent (3 reboots needed) PoC for possible emfcamp demonstration
~june-july 2016 - MSRC reply again, finally realising: bug bounty awarded
july 2016 - initial fix. fix analysed, deemed inadequate. reversed later rs1
bootmgr, noticed additional inadequate mitigation
august 2016 - mini-talk about the issue at emfcamp, second fix, full writeup
release

credits:

my123 (@never_released) -- found initial policy set, tested on surface rt
slipstream (@TheWack0lian) -- analysis of policies, reversing
bootmgr/mobilestartup/etc, found even more policies, this writeup.

tiny-tro credits:

code and design: slipstream/RoL
awesome chiptune: bzl/cRO <3

~~~
forgotpwtomain
We should just swap the top-link to this post, thanks for the detailed write-
up!

~~~
bri3d
The original is also on the frontpage:
[https://news.ycombinator.com/item?id=12259911](https://news.ycombinator.com/item?id=12259911)

~~~
forgotpwtomain
Ah, thanks, I missed it at first somehow.

------
45h34jh53k4j
So from this I think you could say this is a universal Microsoft Secure Boot
implementation bypass: all you need is the signed policy file and an older,
more obscure (signed) non-blacklisted bootmgr, and you can exploit Secure Boot
to glory.

It's almost certainly going to be used for malware - a return of bootkits for
invisibility/persistence?

Microsoft will have to keep revoking older bootmgrs as they find them in
jailbreak utils and bootkit malware. Eventually they will run out, but for
now, busted.

Tempting to go buy some WinRT devices for Linux!

------
allendoerfer
I think people should be able to sue companies that do this. They surely did
not advertise it as "secure unless we lose the key". Having a backdoor in the
first place could be counted as negligent (should be counted as outright
fraud).

~~~
josteink
> I think people should be able to sue companies that do this. They surely did
> not advertise it as "secure unless we lose the key".

I think you misunderstand what this is. This is not a remote backdoor, which
can be used by MS/NSA to hack your machine.

Windows RT devices come with a bootloader locked to only allow UEFI secure-
boot signed boot media. Effectively that means Windows RT only. No Linux,
FreeBSD, or other free OSen for you.

This is a hack, which requires you to have full admin-access on the device,
which uses Microsoft's own UEFI mechanisms (policies and what not), to allow
booting other non-signed media as well.

Effectively this is a bootloader unlocking. With this hack in place, you can
now boot Linux. Or run malware. Anything goes. It's the same as being able to
disable secure-boot.

And would you honestly sue HTC or whoever if you found out that the bootloader
on your phone could be unlocked, to allow the installation of third party
firmware?

Surely you would only see that as a positive thing? Or am I missing something?

~~~
allendoerfer
I got what this is; my point is that it is not what they advertised. Maybe I
should have made this clearer.

I myself would be thankful to be able to install Linux if I had a device like
that. Nevertheless, some (enterprise) users might prefer it otherwise and were
screwed by Microsoft claiming something was secure when they in fact knew that
it was not.

A scenario I can think of is when the machine is not owned by the one using it
and the owner wants to sandbox the user as much as possible. In this case, in
the view of the owner, it really is a backdoor.

I would love to see companies get punished for things like that. This case
does not seem severe; still, what I want is for you to be accountable for the
things you say. You cannot have a backdoor, remote or not, if you say that the
thing is secure. If you knew it was not, you have committed fraud.

~~~
josteink
Seeing as this bypass requires admin rights to execute in the first place, a
sandboxed user should by all accounts remain sandboxed anyway.

But fair point about the different use cases.

------
jagermo
Maybe that is the plan all along:

Create shitty, easy-to-find backdoors to show how stupid the whole concept is?
And when asked, just say: the MPAA/RIAA/NSA forced us - go complain to them.

