
How do we build encryption backdoors? - michael_fine
http://blog.cryptographyengineering.com/2015/04/how-do-we-build-encryption-backdors.html
======
cyphar
> A final, and salient feature on the key distribution approach is that it
> allows only prospective eavesdropping -- that is, law enforcement must first
> target a particular user, and only then can they eavesdrop on her
> connections. There's no way to look backwards in time.

Actually, it's an even weaker attack than that. Signal (for example) stores
a copy of the keys locally on the other person's device after a conversation
has been initiated (and notifies users if they've changed). You could augment
this with TUF or some other update system to make additions of new devices
(or removal of old ones) secure as well. So really the distribution attack only
works for the _first connection_. And this is why PGP key signing parties are a
thing (and why I ask for two forms of government ID before signing anyone's
key).
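
Conceptually it's just trust-on-first-use key pinning. A toy sketch (not
Signal's actual code; the names and return values here are illustrative):

```python
# Toy trust-on-first-use (TOFU) key store, loosely modelled on how Signal
# pins a contact's identity key after the first conversation. Illustrative
# only -- real clients store full identity keys and signed prekeys.

class KeyStore:
    def __init__(self):
        self.pinned = {}  # contact -> identity key fingerprint

    def check(self, contact, fingerprint):
        """Return 'first-use', 'ok', or 'CHANGED' for an offered key."""
        known = self.pinned.get(contact)
        if known is None:
            self.pinned[contact] = fingerprint  # pin on first contact
            return "first-use"
        return "ok" if known == fingerprint else "CHANGED"

store = KeyStore()
print(store.check("alice", "ab:cd:12"))  # first-use: nothing to compare yet
print(store.check("alice", "ab:cd:12"))  # ok: matches the pinned key
print(store.check("alice", "ff:00:99"))  # CHANGED: swapped key -> warning
```

A keyserver that swaps the key only wins on the first line; every later
swap hits the "CHANGED" branch and the client warns the user.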

~~~
titanomachy
The author said exactly this, in the paragraph right before the one you
quoted:

> Some communication systems, like Signal, allow users to compare key
> fingerprints in order to verify that each received the right public key.

~~~
cyphar
That's not what I said. Signal stores the key that you've already verified. So
changing the key in the keyserver doesn't do anything to a device, since you
haven't verified the new key from the keyserver (and it shows a warning).

~~~
sievebrain
You think. Remember that you don't know what binary you were delivered, unless
you personally reverse engineered it yourself.

~~~
cyphar
Or compiled and side-loaded it yourself.

------
Animats
He's missed the real approach - "work reduction". This is giving the
cryptosystem or the random number generator some hidden property which reduces
the amount of work required to break the key. We've seen this repeatedly in
cryptosystems deployed with bad random number generators.[1][2]

[1] https://www.schneier.com/blog/archives/2012/02/lousy_random_nu.html

[2] https://umaine.edu/scis/files/2014/10/The-Sad-History-of-Random-Bits.pdf
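
A rough Python sketch of why this works (a hypothetical generator, not any
deployed system): if key generation draws its seed from only 16 bits of
entropy, the effective keyspace collapses from 2^256 to 2^16 no matter how
long the key itself is, and brute force becomes trivial.

```python
import hashlib
import random

# Toy "work reduction" via a bad RNG. All names are illustrative.

def weak_keygen(seed):
    rng = random.Random(seed)  # seeded with almost no entropy
    material = rng.getrandbits(256).to_bytes(32, "big")
    return hashlib.sha256(material).hexdigest()

victim_key = weak_keygen(0x1234)

# An attacker who knows the RNG is weak simply tries every possible seed.
recovered_seed = next(s for s in range(2**16) if weak_keygen(s) == victim_key)
assert weak_keygen(recovered_seed) == victim_key
```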

~~~
tptacek
I doubt he missed it, since he's one of the academic researchers working on
the purported BULLRUN disclosures; his name is, for instance, on the Juniper
Dual EC paper.

Rather, I think you've misconstrued the post. I think? this is the blog
version of a talk he did last year at Black Hat, in which he and Jim Denaro
investigated ways that a government could plausibly create an _above-board_
cryptographic back door.

Further: simply crippling a system's RNG doesn't create a workable backdoor,
because that's not a "NOBUS" flaw: anyone who knows the RNG is busted can
break the resulting cryptosystem. Dual EC isn't like that; instead, it
cleverly enlists the RNG as a key escrow scheme: Dual EC's outputs can only be
decrypted back to RNG state if you hold the ECC private key corresponding to
the generator.

Key escrow is discussed in this post.
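
The escrow property is easiest to see in a toy multiplicative-group analogue
of Dual EC (the real generator uses elliptic-curve points; everything below
is illustrative). P and Q look like unrelated public constants, but whoever
chose them knows d with P = Q^d mod p, and that d acts as the escrow key:

```python
import secrets

p = 2**127 - 1                     # a Mersenne prime; fine for a toy group
Q = 7
d = secrets.randbelow(p - 1) + 1   # the designer's secret backdoor key
P = pow(Q, d, p)                   # published alongside Q

def prg_step(state):
    """One generator step: emit an output and advance the state."""
    output = pow(Q, state, p)
    next_state = pow(P, state, p)
    return output, next_state

# An honest user runs the generator.
s0 = secrets.randbelow(p)
out1, s1 = prg_step(s0)
out2, _ = prg_step(s1)

# The backdoor holder sees only out1 and recovers the *next* state:
# out1^d = Q^(s0*d) = P^s0 = s1, so every future output is predictable.
s1_recovered = pow(out1, d, p)
predicted_out2 = pow(Q, s1_recovered, p)
assert predicted_out2 == out2
```

Anyone without d faces a discrete-log problem; that asymmetry is what makes
it a NOBUS escrow rather than a plain weakness.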

------
reppard
I believe the master key sharding he mentions is based on this
[https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing](https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing)
and it has actually been implemented (though I'm not sure if it is at the
scale he implies) here
[https://www.vaultproject.io/docs/concepts/seal.html](https://www.vaultproject.io/docs/concepts/seal.html)
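
The core algorithm is short. A minimal sketch over a prime field (illustrative
only, not Vault's implementation): any k of the n shares reconstruct the
secret, and fewer than k reveal nothing about it.

```python
import secrets

P = 2**127 - 1  # field prime; the secret must be less than P

def split(secret, n, k):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```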

~~~
titanomachy
Shamir's secret-sharing is one of my favourite algorithms, and it would be
certainly be useful in an escrow system. But it actually doesn't address the
problem brought up by the author, which is the insecurity of having the whole
key present in a single location at the moment of encryption. I think it's a
fairly minor issue, since the vast majority of users would never have warrants
issued for their data and their keys would never be reconstructed (assuming
that a critical number of the escrow agencies follow the law).

Far more troubling is the idea that I could be arrested or fined or whatever
just for using strong encryption... although I don't think there is an
appetite for such unenforceable laws in my country.

EDIT: (from article)

> Threshold crypto refers to a set of techniques for storing secret keys
> across multiple locations so that decryption can be done in place without
> recombining the key shares.

Does Shamir's algorithm meet this requirement? My understanding was that the
fragments must still be brought together in one place and the key
reconstructed, although if there is a way to implement the algorithm without
doing this I'd love to know about it.
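
Plain Shamir does require reconstruction, but threshold decryption schemes
avoid it by combining *partial decryptions* rather than key shares. A toy
two-party threshold ElGamal sketch (illustrative; real deployments use
vetted protocols such as Shoup's threshold RSA):

```python
import secrets

p = 2**127 - 1   # toy group; fine for illustration
g = 5

sk1 = secrets.randbelow(p - 1)   # agency 1's key share
sk2 = secrets.randbelow(p - 1)   # agency 2's key share
h = pow(g, sk1 + sk2, p)         # joint public key (full key never stored)

def encrypt(m):
    r = secrets.randbelow(p - 1)
    return pow(g, r, p), m * pow(h, r, p) % p

c1, c2 = encrypt(42)

# Each party computes a partial decryption using only its own share...
d1 = pow(c1, sk1, p)
d2 = pow(c1, sk2, p)

# ...and the partials are combined as group elements, never as a key:
# d1 * d2 = c1^(sk1+sk2) = h^r, so dividing it out recovers m.
m = c2 * pow(d1 * d2 % p, -1, p) % p
assert m == 42
```

At no point does sk1 + sk2 exist in one place, which is exactly the property
the article's quote is asking for.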

~~~
Canada
> the vast majority of users would never have warrants issued for their data
> and their keys would never be reconstructed (assuming that a critical number
> of the escrow agencies follows the law).

That would require a unique backdoor key for every device. Somehow these keys
would need to be generated, split into parts, and those parts securely
distributed to the independent escrow agencies.

There's no safe way to do that.

------
mike_hearn
That's good timing, given the discussion yesterday on a similar topic:

    https://news.ycombinator.com/item?id=12254960

There are a couple of things in the article I'm not sure Matt got quite right.

WhatsApp does let you compare key fingerprints, believe it or not. At least
you can scan QR codes to check. I don't know if doing that triggers a key
change warning in future.

End-to-end encryption doesn't seem to impact whether law enforcement can look
backwards in time or not. Simply not logging message content is sufficient to
prevent this. WhatsApp couldn't provide law enforcement with message content
prior to a tap being requested even before they integrated the Signal protocol
because they didn't log message content at all (or so they say). Introducing
E2E crypto in the style of WhatsApp solves only one specific threat model as
far as I can tell - if someone is capable of hacking your datacenter to the
extent that they can siphon off and log messages by themselves without you
noticing, but they aren't also capable of doing a key switcheroo. This would
be a strange but possible kind of hack. Note that this assumes the users
aren't storing their device keys and comparing them by hand and that the
hacker can't influence the code that gets shipped.

He assumes the user can detect key mismatches. Even if users can compare keys,
this assumes that their client does what they think it does. It's noted in
another comment here but all it takes to undo this assumption is getting
Google or Apple to push a dummy binary to the specific devices of interest
that claims things are encrypted even when they aren't.

You wouldn't need to deploy threshold crypto 'at scale' for the proposed
scheme to work. Some schemes, like Shoup's threshold RSA, result in a normal
public key:

    http://www.shoup.net/papers/thsig.pdf

So the only part that's non-standard is the software for working with the
shares to decrypt, which only needs to exist and interoperate among the
various agencies.

But I'm not actually even sure you need special threshold crypto schemes. I
guess you could also take the session key(s) and encrypt them with key 1, then
encrypt that value with key 2, and so on, building up an onion of encryptions.
The various participants then have to pass around the value in the same order
hard-coded into the software to get it back. The advantage of this approach is
you can use ordinary HSMs to protect the keys, i.e. the hardware itself
enforces that the private key may never leave the hardware unless it's being
cloned to another HSM.
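
A rough sketch of the onion idea (the XOR keystream below is a stand-in for
real HSM-backed encryption; all names are illustrative):

```python
import hashlib
import secrets

def _stream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from a key by hash chaining (toy cipher)."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def wrap(key: bytes, data: bytes) -> bytes:
    iv = secrets.token_bytes(8)
    return iv + bytes(a ^ b for a, b in zip(data, _stream(key + iv, len(data))))

def unwrap(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:8], blob[8:]
    return bytes(a ^ b for a, b in zip(ct, _stream(key + iv, len(ct))))

session_key = b"session-key-0123"
agency_keys = [b"agency-1", b"agency-2", b"agency-3"]

blob = session_key
for k in agency_keys:            # build the onion: key 1, then 2, then 3
    blob = wrap(k, blob)

for k in reversed(agency_keys):  # peel it in the reverse, hard-coded order
    blob = unwrap(k, blob)
assert blob == session_key
```

With HSMs, each `wrap`/`unwrap` would happen inside a different agency's
hardware, so no single party ever holds more than its own layer key.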

But these are all minor details. The point Matt makes is well made, which is
that you can build backdoors into cryptographic systems, and the reasons
people don't want to do this are primarily political rather than technical. I
continue to be concerned that the tech community may be about to burn its
credibility with the mainstream population for no reason by claiming this
stuff is impossible to do or is completely unthinkable, when it's actually
not. Opinion polling showed that there was no general consensus behind Apple's
refusal to unlock the phone in the FBI case: many people don't support the
tech industry's absolutist position here (perhaps because they don't
understand the potential that mass surveillance has).

Moreover, governments will generally not accept an answer of "you are
imperfect thus should not have the law enforcement capability you want".
Lawmakers understand and accept that civil servants will make mistakes or be
openly abusive, and generally want only to control the level of error/abuse,
not eliminate it. Certainly the sorts of positions the Obama administration is
looking for would accommodate key revocation procedures if the government
agencies in question somehow did screw up and their private key leaked out of
their HSMs. I suspect they'd happily agree to temporarily losing their
capability to restore system integrity if there was a procedure for restoring
their access once a neutral third party had re-audited the relevant offices.
This sort of detail isn't where lawmakers are at: they think in broad strokes
rather than the details of procedures.

~~~
cyphar
> End-to-end encryption doesn't seem to impact whether law enforcement can
> look backwards in time or not.

I think he was referring to the fact that the Signal protocol has perfect
forward secrecy -- if you break the key today, all previous communications are
still secure because they used different keys (the key is updated using the
Axolotl ratchet).
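
The chain-key half of the idea fits in a few lines. A simplified symmetric
ratchet in the spirit of Signal's chain key derivation (not the full Axolotl
protocol, which also mixes in fresh Diffie-Hellman outputs):

```python
import hashlib
import hmac

# Each message key is derived from the current chain key, and the chain key
# is then advanced through a one-way function. Compromising today's chain
# key reveals nothing about earlier message keys.

def ratchet(chain_key: bytes):
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

ck = b"initial-shared-chain-key"
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)
    keys.append(mk)

# An attacker who steals `ck` now can derive *future* message keys, but the
# one-way HMAC steps make recovering keys[0..2] computationally infeasible.
```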

