
A Few Thoughts on Ray Ozzie’s “Clear” Proposal - Hagelin
https://blog.cryptographyengineering.com/2018/04/26/a-few-thoughts-on-ray-ozzies-clear-proposal/
======
motohagiography
The work I did on mobile encryption was framed thusly:

\- Deriving a key for all devices from a single key creates a single,
catastrophic failure mode for the solution: all devices become vulnerable
together. As soon as customers figure this out, nobody serious will adopt
it, because they can't afford to accept that known risk exposure.

\- We're assuming that the HSM we're using doesn't have a bias in its
key-generation RNG that limits the real key space, because if I were an
intel agency, that's probably the first lever I would pull.

\- The entropy of the additional derivation components we can source from the
individual device to locally diversify keys is really limited, and some really
smart people are going to be reversing our code. Apple (unrelated to my own
work; I never worked for anyone affiliated with them) relied on limiting the
number of attempts in hardware (effectively) to mitigate this risk.

Personally, I think the Ozzie proposal is a red herring to give the feds
rhetorical leverage by providing their side with something few people
understand, but can get behind politically because it's sufficiently complex
as to be "our" magic vs. "their" magic. This is to drown out technical
objections and make the problem a political one where they can use their
leverage.

As the author (Green) notes, we can design some pretty crazy things, and if
the feds came out and said, "build us a ubiquitous surveillance apparatus, or
at least give us complete sovereign and executive control of all electronic
information," that is a technically solvable problem, but in the US a legally
intractable one. So instead, they want those effective powers without the
overt mandate.

------
shakna
> It literally refers to a giant, ultra-secure vault that will have to be
> maintained individually by different phone manufacturers

We can't even trust manufacturers to provide updates in most cases. Placing
that much trust in them is nothing short of lunacy.

------
DoctorOetker
I don't see anything new in the alleged proposal, this is the same old crypto
war. This is "just" key escrow.

One might as well propose to have the manufacturers build in the government's
public key (and auto-brick the phone on use) so that the phone can detect
whether it is really the government reading the phone.

Another note:

"Ozzie’s proposal relies fundamentally on the ability of manufacturers to
secure massive amounts of extremely valuable key material against the
strongest and most resourceful attackers on the planet. "

This is not true: the phone encrypts the user's passcode to the
manufacturer's public key. If the government tries to read the phone, it will
get the encrypted passcode (useless on its own) and send it to the
manufacturer, who decrypts the passcode. A single private key is not a
massive amount of information. Not that it changes anything about protection
needs: whether it's a piece of paper containing, say, 4096 bits (512 bytes),
or, in Matthew Green's misinterpretation, billions of 512-byte entries (half
a terabyte) on a single HDD, they both have the same value. The whole code
base needs similar protection anyway: their bootloaders are already signed by
the manufacturer.

All this centralization is bad, leave the crypto genie out of the bottle
please...

~~~
kurthr
I do prefer the idea of storing it on paper... at least it's a little easier
to lock up. Even a big camera will only take a few thousand pictures before it
fills up, and physical access is a lot easier to enforce.

If we make 2 billion phones a year (Apple itself is just over 200M) and you
have a line printer running full blast (66 lines = 1 page per sec), you could
do Apple with one printer... and the world in 10. It would be a lot of boxes
of paper, though... about a box an hour.

edit: to be clear I was assuming that almost every dot in the matrix was a
valid bit and there were 66 keys per page... 80 or even 132 columns at 7x5
wouldn't be enough for 4096 bits otherwise.
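As a sanity check, the arithmetic above roughly holds up. A quick sketch
under kurthr's stated assumptions (66 keys per page, 1 page per second,
pi*10^7 seconds per year); the 5000-sheets-per-box figure is my own
assumption:

```python
# Sanity check of the printer back-of-envelope above. Assumptions are
# kurthr's (66 keys/page, 1 page/sec, pi*10^7 s/year); the 5000-sheet
# box size is an added assumption.
SECONDS_PER_YEAR = 3.14e7      # kurthr's pi * 10^7 approximation
KEYS_PER_PAGE = 66
PAGES_PER_SECOND = 1

keys_per_printer_year = KEYS_PER_PAGE * PAGES_PER_SECOND * SECONDS_PER_YEAR

# One printer prints about 2 billion keys per year, roughly world output.
print(f"keys/printer/year: {keys_per_printer_year:.2e}")

# Apple's ~200M phones take about a tenth of a printer-year (~5 weeks).
print(f"Apple fraction:    {200e6 / keys_per_printer_year:.2f}")

# Paper volume: ~3600 pages/hour against a 5000-sheet box.
print(f"boxes per hour:    {3600 * PAGES_PER_SECOND / 5000:.2f}")
```

So one printer covers Apple with room to spare, ten cover the world with
10x margin, and the paper piles up at just under a box an hour.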

~~~
DoctorOetker
Impressive calculation for the per-phone case!

but as I wrote, it's not necessary in Ozzie's scheme: Apple only needs to
store the single private key. All the phones contain the same public key
corresponding to it. All phones encrypt the user passcode to the same public
key. When a user tries to unlock his own phone with his correct passcode, the
phone encrypts the passcode and arrives at the same encrypted value,
unlocking the phone. When the government seizes the phone, with a special
device they have the phone show the encrypted passcode, dump the encrypted
GBs of phone contents, and burn an irreversible efuse in the processor,
disabling it. They send the encrypted passcode to Apple, who verifies it's
indeed the government. Apple uses its single private key to decrypt the user
passcode. Apple sends this passcode to the government. The government can
decrypt the image.

In the proposal there is no need for a massive database of key material. It's
nonsense.

(in practice Apple would use threshold cryptography, so that at least k out
of n private keys, each belonging to specially trained and screened
employees, are necessary to decrypt)

(in practice each phone has a hardcoded random nonce in efuses, and instead
of encrypting the user passcode, it encrypts [passcode+nonce]; otherwise the
government could just brute-force 10^4 encryptions to the public key)

I am only saying that this can be done efficiently, not saying that I agree
with the desirability of key escrow. This idea of key escrow is as old as
cryptography.
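The flow described above can be sketched in a few lines. This is a toy
illustration only (textbook RSA with tiny primes and no padding, which is
not real crypto), and every function name here is hypothetical, but it shows
the shape of the scheme: one manufacturer keypair, a per-device nonce, and
an escrowed passcode blob that is useless without the private key:

```python
# Toy sketch of the escrow flow described above: one manufacturer keypair,
# a per-device nonce, an escrowed passcode blob. Textbook RSA with tiny
# primes and no padding: NOT real crypto, illustration only; all names
# here are hypothetical.

def manufacturer_keygen():
    # Tiny fixed primes so the arithmetic is visible; a real scheme
    # would use proper key sizes and padding (e.g. RSA-OAEP).
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (n, e), (n, d)               # (public key, private key)

def escrow_blob(passcode, nonce, pub):
    # The phone encrypts passcode+nonce to the single public key baked
    # into every device; the nonce blocks brute-forcing 10^4 passcodes.
    n, e = pub
    return pow(passcode + nonce, e, n)

def manufacturer_decrypt(blob, priv):
    # Only the manufacturer's single private key can open the blob.
    n, d = priv
    return pow(blob, d, n)

pub, priv = manufacturer_keygen()
blob = escrow_blob(passcode=1234, nonce=777, pub=pub)
assert manufacturer_decrypt(blob, priv) - 777 == 1234
```

Note that with deterministic encryption like this, the phone can check a
passcode locally by re-encrypting and comparing blobs, which is the unlock
step described above; a randomized scheme would need a different local check.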

~~~
kurthr
Totally agree that they only _need_ a single secure key and a BUNCH of
insecure nonces. However, if I were forced to keep a key in escrow and wanted
it to be secure, I'd put a uniquely generated (lots of lava lamps?) key for
every phone on paper and force anyone who wanted to look them up to do it
physically, in person, with paper. Out go the digital public keys; in stay
the paper private keys, in a well-observed building full of a zillion boxes
of paper. The most insecure part is still the key generator.

If the feds want to audit, they can... but everyone will see it on video, what
boxes they opened, and what pages they (could have) looked at.

I'd hire some magicians for pen testing too.

edit: pi*10^7 sec in a year is a useful approximation

~~~
DoctorOetker
1) regarding BUNCH: the nonces are random, and hence there will be as many as
there are phones, drawn from a suitably large n-bit space, but Apple does not
need a local copy of the nonces; if the government requests a decryption and
Apple agrees, it will decrypt and find the user passcode and the irrelevant
nonce.

2) "and force anyone who wanted to look them up to do it physically, in
person, with paper": I don't understand your proposal. If the government
wants to decrypt a phone, they should come to Apple in person with _what_
paper? How do you ensure that everyone knows when a phone is audited?
(Ozzie's proposal, or what we read of it in the article, does not address
ensuring the populace finds out whenever a phone is decrypted.) In your
scenario, is the well-observed building operated by Apple or by the populace?

~~~
kurthr
The idea is that each of the secret keys (not nonces) would be kept on paper
in a well observed location, with only the public keys leaving. The building
would be operated by whoever is responsible for generating the keys and
showing that their keys aren't (hopefully yet) compromised. They could allow
the public to observe and perhaps the boxes could be marked with the range of
IMEIs/keys contained within. If the cops want to go in and get a key out of a
box, they can get a warrant to do it but everyone can know which few hundred
thousand phones have been compromised.

It doesn't completely prevent malfeasance... it just makes it a PITA.

~~~
DoctorOetker
suppose Apple owns the building:

* there is no advantage in having cops come over: either a secret is revealed or it is not. Any information to convince Apple concerning a specific case or phone could just as well be sent over the internet. Allowing them to enter looks like a serious threat vector to me; they could plant things, smuggle things out...

* Either Apple is faithfully reporting each count of the cops unlocking a phone, or it isn't. In the case of requesting over the internet, the cops can't bring in devices to look through closed boxes or whatever.

* Is your fear rooted in a perceived sense of insecurity because of the small passcode (4 decimal digits) and the effect that would have on the security of the encrypted (passcode+nonce)? Because that is exactly why the random nonce is there; in theory the user could select his own nonce and have it burned into efuse memory, but he would only be able to change the nonce a limited number of times. Then the user can roll as many dice as he wants and xor bits to smithereens ;)

but it all stays crap key escrow; it's just a big "Eureka!"-show trial
balloon to gauge public acceptance, no?

~~~
kurthr
The real risk is that whatever the key-holder thinks is air-gapped storage
isn't and the whole lot is secretly lost to crackers, state sponsored or
not... that's a lot harder to do with 1000 tons of paper.

The point is that even a dedicated party trying to keep the keys safe probably
can't do it (for any length of time) on digital media.

------
cesarb
The main difference between this proposal and the previous ones is the
bricking step, which is supposed to make it transparent when the key has been
revealed. But once the key has been revealed, what prevents an attacker from
replacing the main board of the phone (keeping the phone's exterior and its
SIM card), and copying all the data to the new board? A non-technical user
(and even most technical users) wouldn't know the difference.

------
irq-1
Bricking the phone works _against_ law enforcement by only allowing raw access
to the data. Even if Clear worked correctly, law enforcement couldn't open
apps and see the data in the correct context. They'd have raw data files full
of indexes, hashes, and cached data. Worse, apps would start to encrypt data
on the client specifically to avoid Clear.

The only significant change between plain key escrow and Clear (bricking the
phone) would defeat the usefulness of Clear.

~~~
allenz
Apps could potentially work in read-only mode. Plus it's pretty easy to design
a tool to pretty print iMessages given raw data, and that alone would be very
useful for law enforcement.

------
weinzierl
> [..] and keep the secret key in a “vault” where hopefully it will never be
> needed.

That's only in bullet point one and where it already falls apart.

------
AluminiumPoint
I can't think of anything worse than my plastic, metal, and glass friend
being forced to snitch on me. It's like my best friend betraying me. Beyond
creepy, key escrow proposals are the very definition of totalitarianism.

------
colemannugent
Can anyone explain why the government wouldn't just mandate that they be given
all the keys from the start? Why would they put up with Apple as a middleman
who could potentially refuse their requests?

Also, this key escrow scheme is nearly impossible to scale to more than one
government. Now we need a way to authenticate government agents; good luck
with that.

~~~
allenz
The government would be a single point of failure so it's cheaper and more
secure to privatize. Also, private control of keys acts as a check upon
government abuse.

------
throwaway84742
But why? Why give the government such a ripe target for abuse? Why tilt the
balance of power even further in its favor?

~~~
SolarNet
Also a good question, but this post deliberately focused only on why it's a
bad idea technically.

~~~
throwaway84742
Point taken. He briefly talks about “policy” in the beginning. Policy is the
instrument through which the ruling class gradually strips away individual
freedoms. We need to prominently feature freedom in these conversations,
because ultimately that’s what’s at stake here.

------
rozzie
If you'd like to look at the proposal yourself rather than interpret it
through others:

[https://github.com/rayozzie/clear/blob/master/clear-rozzie.pdf](https://github.com/rayozzie/clear/blob/master/clear-rozzie.pdf)

------
prepend
I’m not sure of the benefit to Apple or other phone manufacturers. This looks
like a substantial cost with zero benefit to anyone other than law
enforcement. And substantial new risk of misuse or abuse.

What’s Ozzie’s true motivation? Is he looking to start a company running
Clear and rake in patent revenue? I get why governments want this, but not
why a private citizen would propose it.

If it weren’t Ray Ozzie, I would think this was just part of some propaganda
push.

~~~
JustSomeNobody
Money. His true motivation is money. Secondary to that is prestige; he
"solved" this problem.

~~~
erikpukinskis
You might be right, but you’re guessing. I’m pretty pro-crypto. I’m an
anarchist. I am generally oriented towards non-governmental solutions to
everything, including violence.

But even with that bias, I still worry quite often about what evil can lurk
behind cryptographic structures, and what effect the wide availability of
strong crypto will have on that.

I don’t know whether it will be positive or negative... my gut says positive,
but I worry. And so it’s not crazy to me to think Ozzie might be legitimately
worried about people’s safety.

------
valiant-comma
Just a nitpick. Matthew Green uses the analogy of signing keys being leaked
often as evidence that Ozzie’s proposed system would be similarly insecure.
This is a weak analogy: signing private keys are often leaked because their
use case requires them to be “online” in some fashion (code must be signed
with the private key so it can be verified with the public key). Similarly,
CAs must use private keys operationally (to sign customer CSRs), increasing
the risk of key compromise.

In Ozzie’s proposal, the private key never actually has to exist outside the
environment it was created in; only the public key does. As pointed out in
other comments, LE would not need access to the private key either; they
could simply submit the encrypted passcode to the manufacturer, who would
then decrypt it on their behalf using the private key.

~~~
allenz
Code signing and decryption both require access to the private key, possibly
through a hardware security module. I don't see why decryption has less
exposure.

------
johnvega
Extremely exceptional access only, in cases where thousands or millions of
people's lives could be at stake. Since we can't create a fully unbreakable
software/hardware security system anyway, if ever, companies could use
technology + psychology: "unintentionally" create an extremely
difficult-to-find bug that requires extremely talented engineers and large
hardware resources to exploit, then "unintentionally" share it in the most
discreet way, probably just verbally, with very few trusted 3rd parties; top
management never officially approves it or even knows about it. We don't live
in a perfect world and we don't have a perfect solution. JUST COMMENTS, NOT A
SUGGESTION!

~~~
akira2501
> Extremely exceptional access only, in cases where thousands of people's
> lives could be at stake or millions.

And how do we determine when that's actually the case and when it's overhyped
or flawed intelligence?

> We don't live in a perfect world and we don't have a perfect solution.

Exactly, so focusing on phone encryption is probably a waste of time.

------
ggm
I don't want this scheme. I don't want key escrow. But a critique in the
document is the 'if lost, lost forever' moment. If the escrow DB is
compromised, the article says, all phones are now pwned. For that point in
time, true.

But phones are online devices. Why does the escrow key have to be a
constant, such that if the central store is compromised, all phones prior to
that date are compromised forever?

e.g., re-spin the per-phone keygen on some cycle, and you define a window of
risk, but it passes. The re-spin clearly has to pass through some protocol,
but we've been doing ephemeral re-keying forever with websites.
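One way to make the re-spin idea concrete: give each enrollment window its
own escrow keypair, so a vault compromise only exposes devices enrolled in
that window. A minimal sketch; the 90-day cycle and all names here are
illustrative assumptions, not part of any actual proposal:

```python
# Sketch of windowed escrow keys: devices enrolled in the same window
# share an escrow public key, so compromising one window's private key
# leaves other windows intact. The 90-day cycle and names are
# illustrative assumptions.
from datetime import date

KEY_EPOCH_DAYS = 90  # assumed rotation cycle

def escrow_key_id(enroll_date: date) -> int:
    # Map an enrollment date to the index of the escrow keypair
    # that was current during that window.
    return enroll_date.toordinal() // KEY_EPOCH_DAYS

# Two phones enrolled a quarter apart land under different escrow keys,
# so leaking one key's vault entry does not expose the other phone.
a = escrow_key_id(date(2018, 1, 15))
b = escrow_key_id(date(2018, 6, 15))
assert a != b
```

Real re-keying would also need a protocol for already-shipped devices to
re-enroll under the new key, which is the "pass through some protocol" part.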

------
tosser00005
He talks about “massive amounts of extremely valuable key material“ needing
to be stored for billions of devices.

It’s not like this would be Fort Knox. All that data could be stored on a
couple of USB sticks, which, really, makes it even scarier. Someone could
hold the entire contents in the palm of their hand and walk away with
everything.

~~~
kardos
The article makes exactly that point:

> If ever a single attacker gains access to that vault and is able to extract,
> at most, a few gigabytes of data (around the size of an iTunes movie), then
> the attackers will gain unencrypted access to every device in the world.
> Even better: if the attackers can do this surreptitiously, you’ll never know
> they did it.

------
Zigurd
What if someday we get political leadership so awful that, hypothetically, a
former CIA chief feels compelled to warn that it is fundamentally dangerous
to the nation?

One answer might be that we deserve such an outcome, and there is no reason to
insulate encryption from the negative consequences. But is that a good answer?

------
FascinatedBox
> Also, when a phone is unlocked with Clear, a special chip inside the phone
> blows itself up.

no thanks

~~~
pdpi
Assuming “blows itself up” means it’s bricked rather than “does a Samsung”,
I’m OK with that. As the article explains, it’s the only form of intrusion
detection in the whole thing.

------
DoctorOetker
Personally I believe real world actions should be the focus of surveillance.
The empires are simply trying to cheap out by focusing on surveillance of
computer activity.

This is the most profound part of Matthew Green's piece in my opinion:

"While this mainly concludes my notes about on Ozzie’s proposal, I want to
conclude this post with a side note, a response to something I routinely hear
from folks in the law enforcement community. This is the criticism that
cryptographers are a bunch of naysayers who aren’t trying to solve “one of the
most fundamental problems of our time”, and are instead just rejecting the
problem with lazy claims that it “can’t work”. "

I believe the most fundamental problem is: how can we decentralize real-world
security? I am FOR mass surveillance but AGAINST _centralized_ mass
surveillance.

Assume every nook and cranny of the world was covered by _community_ cameras,
and the cameras encrypted their streams with threshold cryptography, such
that the populace holds different parts of the secret; then one needs
"enough" citizens agreeing to reveal the contents seen by a specific camera
at a specific time. This way it's public for all or public for none. Every
accident, every murder, ...

Suppose a body is found; then the group decides to reveal the imagery: oh
yes, in this case the person was murdered! Look, the perpetrator is walking
out of view to the next camera, then the next... we can trace him to where he
is now. Properly trained citizens (in a now-authorized police role) go and
arrest the guy. He is now in prison awaiting his trial (also with _community
cameras_, so no broomsticks in prisoner ani). At trial time, if the person
denies it, or claims to be a different person from the arrested one, we can
trace through all the imagery from his committing the crime to his sitting in
court right there and then.

So yes, there is a real conflict between cryptographers and centralized law
enforcement. We don't need no spooks!

And the spooks cannot decode the camera imagery: a large enough number of
citizens (chosen at random by cryptographic sortition) running instances of
the _good citizen client_ software need to release their parts of the shared
secret.
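The "enough citizens release their part" step is exactly k-of-n threshold
secret sharing. A minimal Shamir sketch over a toy prime field (sizes and
names are illustrative; a real system would use a large field and
verifiable shares):

```python
# Minimal Shamir k-of-n secret sharing over a toy prime field, matching
# the "enough citizens release their share" step above. Toy sizes; a
# real deployment would use a large field and verifiable shares.
import random

P = 8191  # small Mersenne prime field, illustration only

def _eval_poly(coeffs, x):
    # Horner evaluation of the sharing polynomial mod P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def split(secret, k, n):
    # Degree-(k-1) polynomial with the secret as constant term;
    # each citizen i gets the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Any 3 of 5 "citizens" can jointly reveal the camera key; fewer learn
# nothing about it.
camera_key = 4242
shares = split(camera_key, k=3, n=5)
assert recover(shares[:3]) == camera_key
assert recover(shares[2:]) == camera_key
```

Sortition would then just mean sampling which n citizens receive shares for
a given camera and time window.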

EDIT:

So there are, broadly speaking, 2 kinds of crimes:

* meatspace crimes (murder, negligence, rape, making child porn (automatically rape), ...)

* cyber crimes (copyright, child porn, ...)

I argue that not implementing such a _community camera_ system is a form of
negligence in itself.

It does not address things like copyright infringement, but... that's not
exactly the most popularly supported concept.

Then there is the problem of child porn: fake and real.

I argue that with deepfakes, any faked child porn will eventually become
indiscernible from real child porn.

Which leaves the problem of official child porn recorded by the _community
cameras_ used to apprehend perpetrators (since these also sign the imagery to
testify authenticity!).

Due to taboo, many victims of child abuse didn't realize, or only had doubts,
that they were suffering abuse, enabling the abuse to continue. Without
concrete visual examples for them to explore, to assess whether they are or
are not suffering child abuse, how can they alert others of their situation?
We send these children extremely mixed messages: absolutely tell us if you
are being abused, but absolutely never falsely report a person. Merely asking
someone else for advice is automatically interpreted as a child reporting
child abuse. How can a child assess his or her situation? With abstract
questions using words and connotations it does not know?

I believe the number of reported child abuses would go up if we used these
_community cameras_ for decentralized mass surveillance.

Also for crime in general (theft, murder, ...): the knowledge that you will
with extremely high probability be caught will decrease a lot of crime. I
would not be surprised if the rate of "impulsive" crimes (where the criminal
was supposedly not able to control his urges) dropped substantially,
revealing that in the current system such criminals often get off the hook.

There will still be rude people, getting fines for groping women while drunk.
But for any actual crime, both victim and perpetrator would know that the
victim can simply report it to the group, and that the perpetrator cannot
escape by lack of evidence. The current lack of evidence constantly
discourages people from reporting crimes (as there is risk involved:
financial: lawyers; emotional: potential incredulity at the police station,
...).

One might think that this will cause criminals to escalate to murder ("if you
rob a victim, you should kill her, or else she will report you"), but hiding
a body will be very hard, and if a person goes missing, friends and relatives
will report it, and instead of following the criminal we can follow the
missing person from the time and place she was last reported seen!

As long as cryptographers only draw the _privacy_ card, the law enforcement
community has a point. As long as the law enforcement community only draws the
_centralized_ power card, the cryptographers have a point.

Only when we have decentralized mass surveillance can we have _both_ privacy
(as long as you don't commit crimes or go missing) and real law enforcement.

Common FAQ:

What if, say, a stalker repeatedly reports his ex as "missing"? Cry wolf too
many times and you are blocked from reporting a person missing; the _good
citizen client_ software that citizens individually run will refuse to
comply.

What if a stalker, or a group of them, repeatedly reports a "murderer" in a
celebrity's bedroom? We can send a local but randomly selected, properly
trained (group of) citizen(s) (in a police role) to go check the room; if the
supposed dead body is not there, there is no reason to unlock the imagery.

(I will add more as people ask)

~~~
allenz
Regarding your distinction between real and cyber crimes, digital evidence can
certainly be relevant in a murder case, e.g. iMessages, location history,
search history. Also, the read-only bricking chip tries to allow search but
exclude ongoing surveillance, though I don't think it's technically feasible.

~~~
DoctorOetker
"Regarding your distinction between real and cyber crimes, digital evidence
can certainly be relevant in a murder case, e.g. iMessages, location history,
search history."

But the cameras are supposed to completely cover society, so we don't _need_
the cyber info. Indeed, perhaps the perpetrator has a secret paper diary,
written in code, where he writes down his exploits. Who cares? We have signed
imagery of him committing the crime. Any extra information is useful in the
statistical sense (to understand what drives a person to do this or that, or
to better prepare citizens on how to avoid falling victim to such and such
crime), but should be unnecessary to convict a person. The most relevant
things are the actions themselves, I think.

About location history: the camera system is more reliable than cell phones,
since a cell phone may be given to a friend willing to provide an alibi, or
its GPS spoofed, etc.

The major reason these cell phone messages, search history, etc. are highly
relevant is simply that we lack the _community camera_ system.

Another problem is that phone evidence is highly irregular: some people are
more aware of mass surveillance than others (which is also highly correlated
with status in society!) when communicating, some people refuse to carry a
cell phone, ...

When they lack enough evidence, the prosecution is forced to grasp at straws
(irrespective of the guilt or innocence of the defendant), and then the value
of computer/phone activity seems very high, especially when boots on the
ground or scientific investigation of crime scenes is so much more expensive.
Then it is easy to view this digital data as highly relevant and reliable.

------
OliverJones
We need a new way of thinking about caches of secrets. It comes from this
unpleasant truth: all secrets eventually leak. The evidence of the past few
years teaches us that even state actors with unlimited resources cannot
prevent their secrets from leaking.

A "leak" here happens when a trusted entity loses control of the secret to one
or more untrusted and malicious entities. That's just a definition, not a
claim that any particular government, company, or person is a trusted entity.

To counter this, we need multiple layers of defense.

One is the business of bricking the phones when the leaked secrets are
exploited. That makes it plain that the secret has leaked. It's a valuable
layer of defense.

Another is to make the secrets have limited useful lifetimes. Expiration and
revocation for TLS certificates is a way to do that. Credit/debit card numbers
can be deactivated and replaced rapidly. That's another way to limit the
lifetime of a secret. Ozzie's proposal does not include a way to limit
secrets' lifetimes. (Social Security numbers are problematic secrets: they too
have unlimited lifetimes.)

A third layer is making the secrets have limited utility. If debit cards had
daily spending limits, their secret numbers would be less useful than they
are today, for example. Zero-day exploits, for another example, are secrets
with vast utility. Ozzie proposes a secret that unlocks an entire phone. How
about limiting it to, say, the phone's call log or SMS log?

A fourth layer is to keep the caches of secrets as small as possible, so a
breach affects as few people as possible. Ozzie proposes the opposite of this.

A fifth layer: holders of caches of secrets must know they are strictly liable
for breaches proportional to the damage they do. It must not matter whether
the breach was due to negligence, carelessness, espionage, or salt water
rusting out the safe after a storm. Large scale key escrow cache systems will
never be able to meet this standard: nation states won't honor that liability,
nor will they pay private companies enough to cover the insurance for it.

(Strict liability is not unprecedented: workers' compensation and the vaccine
injury victims' compensation fund are two reasonably successful examples.)

People, companies, and governments holding secrets necessarily must consider
what happens when (not if) they leak, and provide at least some defenses in
depth like these.

Ozzie's proposal has weak and incomplete defenses in depth. That's why it's
dangerous.

~~~
HillaryBriss
What if the proposal is changed so that the keys do not need to be kept
forever? What if Apple is allowed (or even required) to publicize the private
key(s) after, say, 10 years?

------
forapurpose
I agree about the security of a centralized vault being a key weakness, but
the article omits a few key aspects of Ozzie's proposal:[0]

* A court order is required. It's not up to the tech vendor.

* Physical control of the device is required. No remote exploits.

* Access is enabled only to one device at a time. No mass hacking.

The point of security is to increase the cost to the 'attacker' (here we'll
use that word even for legitimate government purposes); there's no perfect
security, and law enforcement can already access data on iPhones. Also,
attackers focus on the weakest (i.e., least expensive) link, and there's
limited value in increasing the cost beyond the 2nd-weakest link.[1] Except
for the centralization of key storage and two other issues (see below),
Ozzie's proposal might increase the cost to the level of law enforcement's
alternative, acquiring a hacking tool. In fact, I've been thinking of
something similar (court order, physical access required, notification to the
user) and might even have posted it to HN at some point.

Using hacking tools is much worse than Ozzie's process: There's no court (or
at least it's not as enforceable, because there's no tech company checking for
a warrant), no tech company, the user doesn't necessarily know their data has
been accessed, remote exploits are possible, and so is mass hacking.

Also remember that private citizens can still encrypt their data at the file
level using other tools, though of course most will not.

Here are weaknesses I see:

A) The use of other means of accessing devices would have to be outlawed, or
law enforcement will continue to use hacking tools and citizens gain nothing.

B) Solve the centralization problem. Probably, the keys shouldn't be in the
hands of the tech giants and should be distributed widely. EDIT: Perhaps
require two unrelated parties for access?

C) If these new access tools are built into mobile devices, what happens in
countries where people's rights have been taken away? The courts are often
ineffective. I suppose the fact that the phones get bricked at least informs
the user, and the authorities can use hacking tools anyway, so perhaps nothing
is lost.

____________

[0] [https://www.wired.com/story/crypto-war-clear-encryption/](https://www.wired.com/story/crypto-war-clear-encryption/)

[1] If I increase the cost of exploit A to $100,000 and exploit B costs
$50,000, attackers will use B. If I increase the cost of A even further, to
$200,000, it won't provide much more security - the attackers still will use
B.

